diff --git a/LICENSE b/LICENSE
index c2f1de61..3ed9a29f 100644
--- a/LICENSE
+++ b/LICENSE
@@ -1,4 +1,4 @@
-Copyright (c) 2019-2023 Gabriel Victor Herbert
+Copyright (c) 2019-2025 Gabriel Victor Herbert
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
diff --git a/README.md b/README.md
index 3468e1b9..41b25c72 100644
--- a/README.md
+++ b/README.md
@@ -1,61 +1,122 @@
-## REI3
-REI3 is an open low code application platform. It runs on almost any system, on-premise or in the cloud and is free to use for individuals and organizations.
-
-Applications are built with the integrated, graphical [Builder](https://rei3.de/en/docs) utility, after which they can be signed, exported, shared and/or sold. A growing range of free, production ready [business applications](https://rei3.de/en/applications) are publicly available.
-
-### ⭐ Features
-* Easy to install on Windows and Linux systems with very few dependencies.
-* Self-hosted or deployable to cloud systems as web-based service.
-* Usable free of charge, with no user limit.
-* Growing feature set for powerful applications:
- * Complex relationships, joined relation input forms, sub queries and so on.
- * Various frontend components, such as calendars, Gantt plans, color inputs, sliders and many more.
- * Powerful functions and business rules with general or per-record access control, database triggers and more.
- * Mobile views, with options to optimize frontend components for easier use on small screens.
- * Sending and receiving mails with attachments.
- * PDF generation.
- * ICS calendar access.
- * Multi-language support.
- * Multi factor authentication.
-* For enterprise environments:
- * LDAP import for user logins and access permissions.
- * Cluster management.
- * Customization of application colors, names, welcome messages and so on.
-
-## :ticket: Community
-***New!*** We just created a new forum to serve as an official site for REI3 discussions. Feel free to browse or sign-up to post questions, requests, issues and feedback. You can find the new forum at [community.rei3.de](https://community.rei3.de).
-
-## 📀 How to install
-REI3 is easy to setup, with a graphical installer and portable version on Windows, packages for Linux systems as well as a compose file for Docker environments.
-
-To get a full step-by-step manuel, visit the [admin documentation](https://rei3.de/en/docs/admin). It also includes details about different deployment options and system requirements.
-
-## 💡 How to build applications
-All versions of REI3 include the graphical Builder utility, which you can use to create or change applications. After installing REI3, you can enable the Builder inside the system configuration page. The maintenance mode must be enabled first, which will kick all non-admin users from the system while changes are being made.
-
-For information about how to use the Builder, please visit the [Builder documentation](https://rei3.de/en/docs/builder).
-
-## 📑 How to create your own version of REI3
-If you want to make changes to the REI3 platform itself, you can fork this repository or download the source code and then build your own executable.
+
+# REI3® - Free and open low code
+Build and host powerful applications with full control and ownership
+
+Live demo - News - Downloads - Documentation - Applications
+
+Free yourself from walled gardens and cloud-only SaaS offerings. REI3 enables powerful low code applications, self-hosted in the cloud or on-premise. Create and then use, share or even sell your REI3 applications.
+
+
+
+
+## :star: Features
+* **Fast results**: Quickly replace spreadsheet-based 'solutions' with proper multi-user applications.
+* **It can count**: Summarize records, do date calculations, apply business rules and much more.
+* **Make things visible**: Show tasks on Gantt charts, generate diagrams or display information-dense lists.
+* **Workflows included**: Adjust forms based on the current state of a record, export to PDF or send notifications.
+* **Compliance tools**: With roles and access policies, REI3 can give and restrict access globally or for specific records.
+* **End-to-end encryption**: Built-in support for E2EE - easy to use with integrated key management features.
+* **Integration options**: REI3 can expose and call REST endpoints, create or import CSV files and offer ICS access to calendars.
+* **Ready for mobile**: Works well on all devices, with specific mobile settings and PWA features for great-feeling apps.
+* **Full-text search**: Users can quickly find desired content by using search phrases and language-specific lookups.
+* **Many inputs available**: From simple date ranges, to drawing inputs for signatures, to bar- & QR code inputs that can scan codes via camera - REI3 offers a growing list of input types for various needs.
+* **Blazingly fast**: REI3 takes advantage of multi-core processors and communicates with clients over bi-directional data channels.
+* **Security features**: Apply password policies, block brute-force attempts and enable MFA for your users.
+* **Fully transparent**: Directly read and even change data in the REI3 database - everything is human-readable.
+* **Self-hosted**: Run REI3 as you wish, locally or in the cloud - with full control over where your data is located.
+* **Enterprise-ready**: Adjust REI3 to your corporate identity, manage users & access via LDAP and grow with your organization by extending applications and clustering REI3.
+
+
+
+
+
+## :rocket: Quickstart
+### Linux
+1. Extract the REI3 package ([x64](https://rei3.de/latest/x64_linux)/[arm64](https://rei3.de/latest/arm64_linux)) to any location (like `/opt/rei3`) and make the binary `r3` executable (`chmod u+x r3`).
+1. Copy the file `config_template.json` to `config.json` and fill in the connection details for an empty, UTF-8 encoded PostgreSQL database. The DB user needs full permissions on this database.
+1. Install optional dependencies - ImageMagick & Ghostscript for image and PDF thumbnails (`sudo apt install imagemagick ghostscript`), PostgreSQL client utilities for integrated backups (`sudo apt install postgresql-client`).
+1. Register (`sudo ./r3 -install`) and start REI3 with your service manager (`sudo systemctl start rei3`).
+### Windows
+1. Set up the standalone version directly on any Windows Server with the [installer](https://rei3.de/latest/x64_installer).
+1. Optionally, install [Ghostscript](https://www.ghostscript.com/) on the same Windows Server for PDF thumbnails.
+
+Once running, REI3 is available at https://localhost (default port 443) with both username and password being `admin`. For the full documentation, visit [rei3.de](https://rei3.de/en/docs).
+
+If you plan to run REI3 behind a proxy, please make sure to disable client timeouts for websockets. More details [here](https://rei3.de/en/docs/admin#proxies).
+
+Docker Compose files ([x64](https://rei3.de/docker_x64)/[arm64](https://rei3.de/docker_arm64)) and a [portable version](https://rei3.de/latest/x64_portable) for Windows are also available to quickly set up a test or development system.
+
+## :bulb: Where to get help
+You can visit our [community forum](https://community.rei3.de) for anything related to REI3. The full documentation is available on [rei3.de](https://rei3.de/en/docs), including documentation for [admins](https://rei3.de/en/docs/admin) and [application authors](https://rei3.de/en/docs/builder) as well as [YouTube videos](https://www.youtube.com/channel/UCKb1YPyUV-O4GxcCdHc4Csw).
+
+## :clap: Thank you
+REI3 would not be possible without the help of our contributors and people using REI3 and providing feedback for continuous improvement. So thank you to everybody involved with the REI3 project!
+
+[](https://github.com/r3-team/r3/stargazers)
+
+REI3 is built on top of amazing open source software and technologies. Naming them all would take pages, but here are some core libraries and software that helped shape REI3:
+* [Golang](https://golang.org/) to enable state-of-the-art web services and robust code even on multi-threaded systems.
+* [PostgreSQL](https://www.postgresql.org/) for powerful features and the most reliable database management system we've ever had the pleasure to work with.
+* [Vue.js](https://vuejs.org/) to provide stable and efficient frontend components and to make working with user interfaces fun.
+
+## :+1: How to contribute
+Contributions are always welcome - feel free to fork and submit pull requests.
+
+REI3 follows a four-part versioning scheme, such as `3.2.0.4246` (MAJOR.MINOR.PATCH.BUILD). The major release will stay at `3` indefinitely, while we introduce new features and database changes with each minor release. Patch releases primarily focus on fixes, but may include small features as long as the database is not changed.
+
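+As a purely illustrative sketch (not the actual REI3 upgrade logic; names and example versions are made up), splitting such a version string shows when a change crosses the major/minor boundary and therefore involves database changes:
+
+```go
+package main
+
+import (
+	"fmt"
+	"strconv"
+	"strings"
+)
+
+// parseVersion splits a four-part version string (MAJOR.MINOR.PATCH.BUILD)
+// into its numeric components.
+func parseVersion(v string) ([4]int, error) {
+	var out [4]int
+	parts := strings.Split(v, ".")
+	if len(parts) != 4 {
+		return out, fmt.Errorf("expected 4 version components, got %d", len(parts))
+	}
+	for i, p := range parts {
+		n, err := strconv.Atoi(p)
+		if err != nil {
+			return out, err
+		}
+		out[i] = n
+	}
+	return out, nil
+}
+
+func main() {
+	from, _ := parseVersion("3.2.0.4246")
+	to, _ := parseVersion("3.3.1.4512")
+
+	// major or minor changes imply database changes; patch and build do not
+	dbUpgrade := from[0] != to[0] || from[1] != to[1]
+	fmt.Println("database upgrade needed:", dbUpgrade)
+}
+```
+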
+The branch `main` will contain the currently released minor version of REI3; patches for this version can directly be submitted for the main branch. Each new minor release will use a separate branch, which will merge with `main` once the latest minor version is released.
+
+## :nut_and_bolt: Build REI3 yourself
+If you want to build REI3 itself, you can fork this repo or download the source code to build your own executable. The `main` branch contains the current minor release, while new minor releases are managed in separate branches.
1. Install the latest version of [Golang](https://golang.org/dl/).
-1. Choose the source code for the version you want to build - usually that´s the master branch, but you can also choose any released version (as in `2.5.1.2980`).
1. Go into the source code directory (where `r3.go` is located) and execute: `go build -ldflags "-X main.appVersion={YOUR_APP_VERSION}"`.
* Replace `{YOUR_APP_VERSION}` with the version of the extracted source code. Example: `go build -ldflags "-X main.appVersion=2.5.1.2980"`
* You can change the build version anytime. If you want to upgrade the major/minor version numbers however, you need to deal with upgrading the REI3 database (see `db/upgrade/upgrade.go`).
* By setting the environment parameter `GOOS`, you can cross-compile for other systems (`GOOS=windows`, `GOOS=linux`, ...).
- * Since REI3 2.5, static resource files (HTML, JS, CSS, etc.) are embedded into the binary during compilation - so changes to these files are only reflected after you recompile. Alternatively, you can use the `-wwwpath` command line argument to load REI3 with an external `www` directory, in which you can make changes directly.
+ * Static resource files (HTML, JS, CSS, etc.) are embedded into the binary during compilation - so changes to these files are only reflected after you recompile. Alternatively, you can use the `-wwwpath` command line argument to load REI3 with an external `www` directory, in which you can make changes directly.
1. Use your new, compiled binary of REI3 to replace an already installed one.
-1. You are now running your own version of REI3.
-
-## 📇 Technologies
-The REI3 server application is built on [Golang](https://golang.org/) with the frontend primarily based on [Vue.js](https://vuejs.org/). By using modern web standards, REI3 applications run very fast (cached application schemas, data-only websocket transfers) and can optionally be installed as progressive web apps (PWA) on client devices.
+1. You can now start your own REI3 version. Make sure to clear all browser caches after creating or updating it.
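+
+The static resource embedding mentioned above uses Go's standard `embed` mechanism. As a minimal, illustrative sketch (assuming a local `www` directory; identifiers and paths are made up and this is not the actual REI3 source), embedding a directory and optionally overriding it with an external path could look like this:
+
+```go
+package main
+
+import (
+	"embed"
+	"io/fs"
+	"log"
+	"net/http"
+	"os"
+)
+
+// the www directory must exist next to this file at compile time
+//go:embed www
+var wwwFiles embed.FS
+
+func main() {
+	var handler http.Handler
+
+	// optional override with an external directory, similar in spirit
+	// to the -wwwpath argument mentioned above (path is illustrative)
+	if info, err := os.Stat("./www_external"); err == nil && info.IsDir() {
+		handler = http.FileServer(http.Dir("./www_external"))
+	} else {
+		sub, err := fs.Sub(wwwFiles, "www")
+		if err != nil {
+			log.Fatal(err)
+		}
+		handler = http.FileServer(http.FS(sub))
+	}
+	log.Fatal(http.ListenAndServe(":8080", handler))
+}
+```
+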
-REI3 heavily relies on [PostgreSQL](https://www.postgresql.org/) for data management, storage and backend functions.
+## :page_with_curl: Copyright, license & trademark
+REI3 © 2019-2025 Gabriel Victor Herbert
-## 👏 How to contribute
-Contributions are always welcome - feel free to fork and submit pull requests.
+The REI3 source code is released under the [MIT license](https://opensource.org/license/mit).
-REI3 follows a four-digit versioning syntax, such as 3.2.0.4246 (MAJOR.MINOR.PATCH.BUILD). The major release will stay at 3 indefinitely, while we introduce new features and database changes with each minor release. Patch releases primarily focus on fixes, but may include small features as long as the database is not changed.
-
-The branch `main` will contain the currently released minor version of REI3; patches for this version can directly be submitted for the main branch. Each new minor release will use a separate branch, which will merge with `main` once the latest minor version is released.
+REI3® is a registered trademark (class 42, number 30 2024 242 850). While the source code is open, we protect the name to differentiate our releases and services around REI3. If you intend to release third party extensions or versions of REI3 itself, please [get in contact](https://leansw.de/en/contact) with us to avoid issues with the REI3 trademark.
diff --git a/backup/backup.go b/backup/backup.go
index d69a32c4..0af22879 100644
--- a/backup/backup.go
+++ b/backup/backup.go
@@ -7,12 +7,11 @@ import (
"os"
"os/exec"
"path/filepath"
- "r3/compress"
"r3/config"
"r3/log"
"r3/tools"
+ "r3/tools/compress"
"r3/types"
- "strconv"
"sync"
)
@@ -164,7 +163,7 @@ func jobBackup(tocFile *types.BackupTocFile, jobName string) error {
// database backup
dbPath := filepath.Join(jobDir, subPathDb)
- if err := os.MkdirAll(dbPath, 0700); err != nil {
+ if err := os.MkdirAll(dbPath, 0755); err != nil {
return err
}
if err := dumpDb(dbPath); err != nil {
@@ -196,14 +195,8 @@ func jobBackup(tocFile *types.BackupTocFile, jobName string) error {
}
// update TOC file
- _, _, appBuild, _ := config.GetAppVersions()
- appBuildInt, err := strconv.Atoi(appBuild)
- if err != nil {
- return err
- }
-
tocFile.Backups = append(tocFile.Backups, types.BackupDef{
- AppBuild: appBuildInt,
+ AppBuild: config.GetAppVersion().Build,
JobName: jobName,
Timestamp: newTimestamp,
})
@@ -219,6 +212,7 @@ func dumpDb(path string) error {
args := []string{
"-h", config.File.Db.Host,
"-p", fmt.Sprintf("%d", config.File.Db.Port),
+ "-d", config.File.Db.Name,
"-U", config.File.Db.User,
"-j", "4", // number of parallel jobs
"-Fd", // custom format, to file directory
diff --git a/bruteforce/bruteforce.go b/bruteforce/bruteforce.go
index 5c15d796..5bd57ea3 100644
--- a/bruteforce/bruteforce.go
+++ b/bruteforce/bruteforce.go
@@ -9,7 +9,7 @@ import (
)
var (
- access_mx sync.Mutex
+ access_mx sync.RWMutex
attempts int = 100 // max allowed failed attempts before block
enabled bool = false // enable bruteforce protection
@@ -30,8 +30,8 @@ func SetConfig() {
// returns counts of tracked and blocked hosts
func GetCounts() (int, int) {
- access_mx.Lock()
- defer access_mx.Unlock()
+ access_mx.RLock()
+ defer access_mx.RUnlock()
return len(hostMapTracked), len(hostMapBlocked)
}
@@ -47,9 +47,8 @@ func Check(r *http.Request) bool {
// like Check() but with host string instead of http.Request
func CheckByHost(host string) bool {
-
- access_mx.Lock()
- defer access_mx.Unlock()
+ access_mx.RLock()
+ defer access_mx.RUnlock()
if !enabled {
return false
@@ -100,7 +99,6 @@ func BadAttemptByHost(host string) {
}
func ClearHostMap() error {
-
access_mx.Lock()
defer access_mx.Unlock()
diff --git a/cache/cache_access.go b/cache/cache_access.go
index bcb7c913..3bf893ba 100644
--- a/cache/cache_access.go
+++ b/cache/cache_access.go
@@ -1,79 +1,110 @@
package cache
import (
+ "context"
"errors"
+ "fmt"
"r3/db"
"r3/types"
"sync"
"github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
)
var (
- access_mx sync.Mutex
+ access_mx sync.RWMutex
loginIdMapAccess = make(map[int64]types.LoginAccess) // access permissions by login ID
)
// get effective access for specified login
+// access cache is created when authentication occurs
+// if no access cache exists, authentication did not occur
func GetAccessById(loginId int64) (types.LoginAccess, error) {
-
if loginId == 0 {
return types.LoginAccess{}, errors.New("invalid login ID 0")
}
- access_mx.Lock()
- defer access_mx.Unlock()
+ access_mx.RLock()
+ defer access_mx.RUnlock()
- if _, exists := loginIdMapAccess[loginId]; !exists {
- if err := load(loginId); err != nil {
- return types.LoginAccess{}, err
- }
+ if accessMap, exists := loginIdMapAccess[loginId]; exists {
+ return accessMap, nil
}
- return loginIdMapAccess[loginId], nil
+ return types.LoginAccess{}, fmt.Errorf("missing access cache for login %d", loginId)
}
-// renew permissions for all cached logins
-func RenewAccessAll() error {
+// load access cache for one login
+func LoadAccessIfUnknown(loginId int64) error {
+ access_mx.RLock()
+ _, exists := loginIdMapAccess[loginId]
+ access_mx.RUnlock()
+ if exists {
+ return nil
+ }
+
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutSysTask)
+ defer ctxCanc()
+
+ tx, err := db.Pool.Begin(ctx)
+ if err != nil {
+ return err
+ }
+ defer tx.Rollback(ctx)
+
+ if err := load_tx(ctx, tx, loginId); err != nil {
+ return err
+ }
+ return tx.Commit(ctx)
+}
+
+// renew permissions for all known logins
+func RenewAccessAll_tx(ctx context.Context, tx pgx.Tx) error {
for loginId, _ := range loginIdMapAccess {
- if err := RenewAccessById(loginId); err != nil {
+ if err := RenewAccessById_tx(ctx, tx, loginId); err != nil {
return err
}
}
return nil
}
-// renew permissions for one login
-func RenewAccessById(loginId int64) error {
- access_mx.Lock()
- defer access_mx.Unlock()
-
- if _, exists := loginIdMapAccess[loginId]; !exists {
+// renew permissions for one known login
+func RenewAccessById_tx(ctx context.Context, tx pgx.Tx, loginId int64) error {
+ access_mx.RLock()
+ _, exists := loginIdMapAccess[loginId]
+ access_mx.RUnlock()
+ if !exists {
return nil
}
- return load(loginId)
+ return load_tx(ctx, tx, loginId)
}
// load access permissions for login ID into cache
-func load(loginId int64) error {
- Schema_mx.RLock()
- defer Schema_mx.RUnlock()
+func load_tx(ctx context.Context, tx pgx.Tx, loginId int64) error {
- roleIds, err := loadRoleIds(loginId)
+ roleIds, err := loadRoleIds_tx(ctx, tx, loginId)
if err != nil {
return err
}
+ Schema_mx.RLock()
+ defer Schema_mx.RUnlock()
+ access_mx.Lock()
+ defer access_mx.Unlock()
+
loginIdMapAccess[loginId] = types.LoginAccess{
- RoleIds: roleIds,
- Api: make(map[uuid.UUID]int),
- Attribute: make(map[uuid.UUID]int),
- Collection: make(map[uuid.UUID]int),
- Menu: make(map[uuid.UUID]int),
- Relation: make(map[uuid.UUID]int),
+ RoleIds: roleIds,
+ Api: make(map[uuid.UUID]int),
+ Attribute: make(map[uuid.UUID]int),
+ ClientEvent: make(map[uuid.UUID]int),
+ Collection: make(map[uuid.UUID]int),
+ Menu: make(map[uuid.UUID]int),
+ Relation: make(map[uuid.UUID]int),
+ Widget: make(map[uuid.UUID]int),
}
for _, roleId := range roleIds {
- role, _ := RoleIdMap[roleId]
+ role := RoleIdMap[roleId]
// because access rights work cumulatively, apply highest right only
for id, access := range role.AccessApis {
@@ -90,6 +121,13 @@ func load(loginId int64) error {
loginIdMapAccess[loginId].Attribute[id] = access
}
}
+ for id, access := range role.AccessClientEvents {
+ if _, exists := loginIdMapAccess[loginId].ClientEvent[id]; !exists ||
+ loginIdMapAccess[loginId].ClientEvent[id] < access {
+
+ loginIdMapAccess[loginId].ClientEvent[id] = access
+ }
+ }
for id, access := range role.AccessCollections {
if _, exists := loginIdMapAccess[loginId].Collection[id]; !exists ||
loginIdMapAccess[loginId].Collection[id] < access {
@@ -111,15 +149,21 @@ func load(loginId int64) error {
loginIdMapAccess[loginId].Relation[id] = access
}
}
+ for id, access := range role.AccessWidgets {
+ if _, exists := loginIdMapAccess[loginId].Widget[id]; !exists ||
+ loginIdMapAccess[loginId].Widget[id] < access {
+
+ loginIdMapAccess[loginId].Widget[id] = access
+ }
+ }
}
return nil
}
-func loadRoleIds(loginId int64) ([]uuid.UUID, error) {
-
+func loadRoleIds_tx(ctx context.Context, tx pgx.Tx, loginId int64) ([]uuid.UUID, error) {
roleIds := make([]uuid.UUID, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
-- get nested children of assigned roles
WITH RECURSIVE child_ids AS (
SELECT role_id_child
@@ -129,10 +173,12 @@ func loadRoleIds(loginId int64) ([]uuid.UUID, error) {
FROM instance.login_role
WHERE login_id = $1
)
+
UNION
- SELECT c.role_id_child
- FROM app.role_child AS c
- INNER JOIN child_ids AS r ON c.role_id = r.role_id_child
+
+ SELECT c.role_id_child
+ FROM app.role_child AS c
+ INNER JOIN child_ids AS r ON c.role_id = r.role_id_child
)
SELECT *
FROM child_ids
diff --git a/cache/cache_caption.go b/cache/cache_caption.go
index 23bca824..403e4953 100644
--- a/cache/cache_caption.go
+++ b/cache/cache_caption.go
@@ -1,5 +1,39 @@
package cache
+import (
+ "context"
+ "r3/config/captionMap"
+ "r3/types"
+ "sync"
+
+ "github.com/jackc/pgx/v5"
+ "github.com/jackc/pgx/v5/pgtype"
+)
+
+var (
+ caption_mx sync.RWMutex
+ captionMapCustom types.CaptionMapsAll // custom captions (for local instance)
+)
+
func GetCaptionLanguageCodes() []string {
- return []string{"en_us", "de_de", "it_it", "ro_ro"}
+ return []string{"en_us", "de_de", "ar_eg", "es_es", "fr_fr", "hu_hu", "it_it", "lv_lv", "ro_ro", "zh_cn"}
+}
+
+func GetCaptionMapCustom() types.CaptionMapsAll {
+ caption_mx.RLock()
+ defer caption_mx.RUnlock()
+ return captionMapCustom
+}
+
+func LoadCaptionMapCustom_tx(ctx context.Context, tx pgx.Tx) error {
+ cus, err := captionMap.Get_tx(ctx, tx, pgtype.UUID{}, "instance")
+ if err != nil {
+ return err
+ }
+
+ caption_mx.Lock()
+ captionMapCustom = cus
+ caption_mx.Unlock()
+
+ return nil
}
diff --git a/cache/cache_cluster.go b/cache/cache_cluster.go
index 4c4de131..fb24097d 100644
--- a/cache/cache_cluster.go
+++ b/cache/cache_cluster.go
@@ -1,44 +1,50 @@
package cache
import (
- "os"
+ "sync"
"github.com/gofrs/uuid"
)
var (
- hostname string
+ cluster_mx sync.RWMutex
isClusterMaster bool // node is cluster master, only one is allowed
nodeId uuid.UUID // ID of node, self assigned on startup if not set
nodeName string // name of node, self assigned on startup if not set, overwritable by admin
)
-func GetHostname() string {
- return hostname
-}
-func SetHostnameFromOs() error {
- var err error
- hostname, err = os.Hostname()
- return err
-}
-
+// is master
func GetIsClusterMaster() bool {
+ cluster_mx.RLock()
+ defer cluster_mx.RUnlock()
return isClusterMaster
}
func SetIsClusterMaster(value bool) {
+ cluster_mx.Lock()
+ defer cluster_mx.Unlock()
isClusterMaster = value
}
+// node ID
func GetNodeId() uuid.UUID {
+ cluster_mx.RLock()
+ defer cluster_mx.RUnlock()
return nodeId
}
func SetNodeId(value uuid.UUID) {
+ cluster_mx.Lock()
+ defer cluster_mx.Unlock()
nodeId = value
}
+// node name
func GetNodeName() string {
+ cluster_mx.RLock()
+ defer cluster_mx.RUnlock()
return nodeName
}
func SetNodeName(value string) {
+ cluster_mx.Lock()
+ defer cluster_mx.Unlock()
nodeName = value
}
diff --git a/cache/cache_dict.go b/cache/cache_dict.go
new file mode 100644
index 00000000..adf48cdb
--- /dev/null
+++ b/cache/cache_dict.go
@@ -0,0 +1,36 @@
+package cache
+
+import (
+ "context"
+ "slices"
+ "sync"
+
+ "github.com/jackc/pgx/v5"
+)
+
+var (
+ dict []string // list of dictionaries for full text search, read from DB
+ dict_mx sync.RWMutex
+)
+
+func GetSearchDictionaries() []string {
+ dict_mx.RLock()
+ defer dict_mx.RUnlock()
+ return dict
+}
+
+func GetSearchDictionaryIsValid(entry string) bool {
+ dict_mx.RLock()
+ defer dict_mx.RUnlock()
+ return slices.Contains(dict, entry)
+}
+
+func LoadSearchDictionaries_tx(ctx context.Context, tx pgx.Tx) error {
+ dict_mx.Lock()
+ defer dict_mx.Unlock()
+
+ return tx.QueryRow(ctx, `
+ SELECT ARRAY_AGG(cfgname::TEXT)
+ FROM pg_catalog.pg_ts_config
+ `).Scan(&dict)
+}
diff --git a/cache/cache_ics.go b/cache/cache_ics.go
index 23073e10..b96c759c 100644
--- a/cache/cache_ics.go
+++ b/cache/cache_ics.go
@@ -1,11 +1,13 @@
package cache
import (
+ "context"
"r3/schema/field"
"r3/types"
"sync"
"github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
)
var (
@@ -13,7 +15,7 @@ var (
fieldIdMapIcs = make(map[uuid.UUID]types.FieldCalendar)
)
-func GetCalendarField(fieldId uuid.UUID) (types.FieldCalendar, error) {
+func GetCalendarField_tx(ctx context.Context, tx pgx.Tx, fieldId uuid.UUID) (types.FieldCalendar, error) {
ics_mx.Lock()
defer ics_mx.Unlock()
@@ -22,7 +24,7 @@ func GetCalendarField(fieldId uuid.UUID) (types.FieldCalendar, error) {
return f, nil
}
- f, err := field.GetCalendar(fieldId)
+ f, err := field.GetCalendar_tx(ctx, tx, fieldId)
if err != nil {
return f, err
}
diff --git a/cache/cache_ldap.go b/cache/cache_ldap.go
index 6b251a53..bce9bc7b 100644
--- a/cache/cache_ldap.go
+++ b/cache/cache_ldap.go
@@ -2,7 +2,6 @@ package cache
import (
"errors"
- "r3/ldap"
"r3/types"
"sync"
)
@@ -19,7 +18,6 @@ func GetLdapIdMap() map[int32]types.Ldap {
}
func GetLdap(id int32) (types.Ldap, error) {
-
ldap_mx.Lock()
defer ldap_mx.Unlock()
@@ -30,20 +28,12 @@ func GetLdap(id int32) (types.Ldap, error) {
return ldap, nil
}
-func LoadLdapMap() error {
-
+func SetLdaps(ldaps []types.Ldap) {
ldap_mx.Lock()
defer ldap_mx.Unlock()
- ldaps, err := ldap.Get()
- if err != nil {
- return err
- }
-
ldapIdMap = make(map[int32]types.Ldap)
-
for _, ldap := range ldaps {
ldapIdMap[ldap.Id] = ldap
}
- return nil
}
diff --git a/cache/cache_mail.go b/cache/cache_mail.go
index c598b5fd..0c8721da 100644
--- a/cache/cache_mail.go
+++ b/cache/cache_mail.go
@@ -1,27 +1,29 @@
package cache
import (
+ "context"
"fmt"
- "r3/db"
"r3/types"
"sync"
+
+ "github.com/jackc/pgx/v5"
)
var (
- mail_mx sync.Mutex
+ mail_mx sync.RWMutex
mailAccountIdMap map[int32]types.MailAccount
)
func GetMailAccountMap() map[int32]types.MailAccount {
- mail_mx.Lock()
- defer mail_mx.Unlock()
+ mail_mx.RLock()
+ defer mail_mx.RUnlock()
return mailAccountIdMap
}
func GetMailAccount(id int32, mode string) (types.MailAccount, error) {
- mail_mx.Lock()
- defer mail_mx.Unlock()
+ mail_mx.RLock()
+ defer mail_mx.RUnlock()
ma, exists := mailAccountIdMap[id]
if !exists || mode != ma.Mode {
@@ -31,8 +33,8 @@ func GetMailAccount(id int32, mode string) (types.MailAccount, error) {
}
func GetMailAccountAny(mode string) (types.MailAccount, error) {
- mail_mx.Lock()
- defer mail_mx.Unlock()
+ mail_mx.RLock()
+ defer mail_mx.RUnlock()
for _, ma := range mailAccountIdMap {
if mode == ma.Mode {
@@ -43,21 +45,17 @@ func GetMailAccountAny(mode string) (types.MailAccount, error) {
}
func GetMailAccountsExist() bool {
- mail_mx.Lock()
- defer mail_mx.Unlock()
+ mail_mx.RLock()
+ defer mail_mx.RUnlock()
return len(mailAccountIdMap) != 0
}
-func LoadMailAccountMap() error {
- mail_mx.Lock()
- defer mail_mx.Unlock()
-
- mailAccountIdMap = make(map[int32]types.MailAccount)
+func LoadMailAccountMap_tx(ctx context.Context, tx pgx.Tx) error {
- rows, err := db.Pool.Query(db.Ctx, `
- SELECT id, name, mode, username, password, start_tls, send_as,
- host_name, host_port
+ rows, err := tx.Query(ctx, `
+ SELECT id, oauth_client_id, name, mode, auth_method, username,
+ password, start_tls, send_as, host_name, host_port, comment
FROM instance.mail_account
`)
if err != nil {
@@ -65,11 +63,16 @@ func LoadMailAccountMap() error {
}
defer rows.Close()
+ mail_mx.Lock()
+ defer mail_mx.Unlock()
+
+ mailAccountIdMap = make(map[int32]types.MailAccount)
for rows.Next() {
var ma types.MailAccount
- if err := rows.Scan(&ma.Id, &ma.Name, &ma.Mode, &ma.Username, &ma.Password,
- &ma.StartTls, &ma.SendAs, &ma.HostName, &ma.HostPort); err != nil {
+ if err := rows.Scan(&ma.Id, &ma.OauthClientId, &ma.Name, &ma.Mode,
+ &ma.AuthMethod, &ma.Username, &ma.Password, &ma.StartTls,
+ &ma.SendAs, &ma.HostName, &ma.HostPort, &ma.Comment); err != nil {
return err
}
diff --git a/cache/cache_oauthClient.go b/cache/cache_oauthClient.go
new file mode 100644
index 00000000..57620338
--- /dev/null
+++ b/cache/cache_oauthClient.go
@@ -0,0 +1,61 @@
+package cache
+
+import (
+ "context"
+ "fmt"
+ "r3/types"
+ "sync"
+
+ "github.com/jackc/pgx/v5"
+)
+
+var (
+ oauthClient_mx sync.RWMutex
+ oauthClientIdMap map[int32]types.OauthClient
+)
+
+func GetOauthClientMap() map[int32]types.OauthClient {
+ oauthClient_mx.RLock()
+ defer oauthClient_mx.RUnlock()
+
+ return oauthClientIdMap
+}
+
+func GetOauthClient(id int32) (types.OauthClient, error) {
+ oauthClient_mx.RLock()
+ defer oauthClient_mx.RUnlock()
+
+ c, exists := oauthClientIdMap[id]
+ if !exists {
+ return c, fmt.Errorf("OAUTH client with ID %d does not exist", id)
+ }
+ return c, nil
+}
+
+func LoadOauthClientMap_tx(ctx context.Context, tx pgx.Tx) error {
+
+ rows, err := tx.Query(ctx, `
+ SELECT id, name, client_id, client_secret, date_expiry, scopes, tenant, token_url
+ FROM instance.oauth_client
+ `)
+ if err != nil {
+ return err
+ }
+ defer rows.Close()
+
+ oauthClient_mx.Lock()
+ defer oauthClient_mx.Unlock()
+ oauthClientIdMap = make(map[int32]types.OauthClient)
+
+ for rows.Next() {
+ var c types.OauthClient
+
+ if err := rows.Scan(&c.Id, &c.Name, &c.ClientId, &c.ClientSecret,
+ &c.DateExpiry, &c.Scopes, &c.Tenant, &c.TokenUrl); err != nil {
+
+ return err
+ }
+ oauthClientIdMap[c.Id] = c
+ }
+ return nil
+}
diff --git a/cache/cache_preset.go b/cache/cache_preset.go
index 1fd093a2..74dcb2d3 100644
--- a/cache/cache_preset.go
+++ b/cache/cache_preset.go
@@ -1,25 +1,26 @@
package cache
import (
- "r3/db"
+ "context"
"sync"
"github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
)
var (
- preset_mx sync.Mutex
+ preset_mx sync.RWMutex
presetIdMapRecordId map[uuid.UUID]int64
)
func GetPresetRecordIds() map[uuid.UUID]int64 {
- preset_mx.Lock()
- defer preset_mx.Unlock()
+ preset_mx.RLock()
+ defer preset_mx.RUnlock()
return presetIdMapRecordId
}
func GetPresetRecordId(presetId uuid.UUID) int64 {
- preset_mx.Lock()
- defer preset_mx.Unlock()
+ preset_mx.RLock()
+ defer preset_mx.RUnlock()
v, exists := presetIdMapRecordId[presetId]
if !exists {
@@ -28,13 +29,13 @@ func GetPresetRecordId(presetId uuid.UUID) int64 {
return v
}
-func renewPresetRecordIds() error {
+func renewPresetRecordIds_tx(ctx context.Context, tx pgx.Tx) error {
preset_mx.Lock()
defer preset_mx.Unlock()
presetIdMapRecordId = make(map[uuid.UUID]int64)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT preset_id, record_id_wofk
FROM instance.preset_record
`)
diff --git a/cache/cache_pwa.go b/cache/cache_pwa.go
new file mode 100644
index 00000000..35fb5dab
--- /dev/null
+++ b/cache/cache_pwa.go
@@ -0,0 +1,88 @@
+package cache
+
+import (
+ "context"
+ "encoding/base64"
+ "r3/db"
+ "sync"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+)
+
+var (
+ pwa_mx sync.RWMutex
+ pwaIconIdMap = make(map[uuid.UUID]string)
+	pwaDomainMap = make(map[string]uuid.UUID) // key = sub domain name, value = module ID (for direct app access)
+)
+
+func GetPwaIcon(id uuid.UUID) (string, error) {
+ pwa_mx.RLock()
+ file, exists := pwaIconIdMap[id]
+ pwa_mx.RUnlock()
+
+ if exists {
+ return file, nil
+ }
+
+ var f []byte
+ if err := db.Pool.QueryRow(context.Background(), `
+ SELECT file
+ FROM app.icon
+ WHERE id = $1
+ `, id).Scan(&f); err != nil {
+ return file, err
+ }
+ file = base64.StdEncoding.EncodeToString(f)
+
+ pwa_mx.Lock()
+ pwaIconIdMap[id] = file
+ pwa_mx.Unlock()
+
+ return file, nil
+}
+
+func GetPwaDomainMap() map[string]uuid.UUID {
+ pwa_mx.RLock()
+ defer pwa_mx.RUnlock()
+
+ return pwaDomainMap
+}
+
+func GetPwaModuleId(subdomain string) uuid.UUID {
+ pwa_mx.RLock()
+ defer pwa_mx.RUnlock()
+
+ id, exists := pwaDomainMap[subdomain]
+ if !exists {
+ return uuid.Nil
+ }
+ return id
+}
+
+func LoadPwaDomainMap_tx(ctx context.Context, tx pgx.Tx) error {
+ pwa_mx.Lock()
+ defer pwa_mx.Unlock()
+
+ pwaDomainMap = make(map[string]uuid.UUID)
+
+ rows, err := tx.Query(ctx, `
+ SELECT module_id, domain
+ FROM instance.pwa_domain
+ `)
+ if err != nil {
+ return err
+ }
+ defer rows.Close()
+
+ for rows.Next() {
+ var modId uuid.UUID
+ var domain string
+
+ if err := rows.Scan(&modId, &domain); err != nil {
+ return err
+ }
+ pwaDomainMap[domain] = modId
+ }
+ return nil
+}
diff --git a/cache/cache_schema.go b/cache/cache_schema.go
index f6c53a17..ccafea33 100644
--- a/cache/cache_schema.go
+++ b/cache/cache_schema.go
@@ -4,21 +4,21 @@
package cache
import (
+ "context"
"encoding/json"
"fmt"
- "r3/config"
- "r3/db"
+ "r3/config/module_meta"
"r3/log"
- "r3/module_option"
"r3/schema/api"
"r3/schema/article"
"r3/schema/attribute"
+ "r3/schema/clientEvent"
"r3/schema/collection"
"r3/schema/form"
"r3/schema/icon"
"r3/schema/jsFunction"
"r3/schema/loginForm"
- "r3/schema/menu"
+ "r3/schema/menuTab"
"r3/schema/module"
"r3/schema/pgFunction"
"r3/schema/pgIndex"
@@ -26,140 +26,143 @@ import (
"r3/schema/preset"
"r3/schema/relation"
"r3/schema/role"
+ "r3/schema/variable"
+ "r3/schema/widget"
"r3/tools"
"r3/types"
"sync"
"github.com/gofrs/uuid"
- "github.com/jackc/pgx/v5/pgtype"
+ "github.com/jackc/pgx/v5"
+ "golang.org/x/exp/maps"
)
-type schemaCacheType struct {
- Modules []types.Module `json:"modules"`
- ModuleOptions []types.ModuleOption `json:"moduleOptions"`
- PresetRecordIds map[uuid.UUID]int64 `json:"presetRecordIds"`
-}
-
var (
// schema cache access and state
Schema_mx sync.RWMutex
- // cached entities for regular use during normal operation
- ModuleIdMap map[uuid.UUID]types.Module // all modules by ID
- ModuleApiNameMapId map[string]map[string]uuid.UUID // all API IDs by module+API name
- RelationIdMap map[uuid.UUID]types.Relation // all relations by ID
- AttributeIdMap map[uuid.UUID]types.Attribute // all attributes by ID
- RoleIdMap map[uuid.UUID]types.Role // all roles by ID
- PgFunctionIdMap map[uuid.UUID]types.PgFunction // all PG functions by ID
- ApiIdMap map[uuid.UUID]types.Api // all APIs by ID
-
// schema cache
- moduleIdsOrdered []uuid.UUID // all module IDs in desired order
- schemaCacheJson json.RawMessage // full schema cache as JSON
- schemaTimestamp int64 // timestamp of last update to schema cache
+ moduleIdMapJson = make(map[uuid.UUID]json.RawMessage) // ID map of module definition as JSON
+ moduleIdMapMeta = make(map[uuid.UUID]types.ModuleMeta) // ID map of module meta data
+
+ // cached entities for regular use during normal operation
+ ModuleIdMap = make(map[uuid.UUID]types.Module) // all modules by ID
+ ModuleApiNameMapId = make(map[string]map[string]uuid.UUID) // all API IDs by module+API name
+ RelationIdMap = make(map[uuid.UUID]types.Relation) // all relations by ID
+ AttributeIdMap = make(map[uuid.UUID]types.Attribute) // all attributes by ID
+ RoleIdMap = make(map[uuid.UUID]types.Role) // all roles by ID
+ PgFunctionIdMap = make(map[uuid.UUID]types.PgFunction) // all PG functions by ID
+ ApiIdMap = make(map[uuid.UUID]types.Api) // all APIs by ID
+ ClientEventIdMap = make(map[uuid.UUID]types.ClientEvent) // all client events by ID
)
-func GetSchemaTimestamp() int64 {
+func GetModuleIdMapMeta() map[uuid.UUID]types.ModuleMeta {
Schema_mx.RLock()
defer Schema_mx.RUnlock()
- return schemaTimestamp
+ return moduleIdMapMeta
}
-func GetSchemaCacheJson() json.RawMessage {
+func GetModuleCacheJson(moduleId uuid.UUID) (json.RawMessage, error) {
Schema_mx.RLock()
defer Schema_mx.RUnlock()
- return schemaCacheJson
+
+ json, exists := moduleIdMapJson[moduleId]
+ if !exists {
+ return []byte{}, fmt.Errorf("module %s does not exist in schema cache", moduleId)
+ }
+ return json, nil
+}
+func LoadModuleIdMapMeta_tx(ctx context.Context, tx pgx.Tx) error {
+ moduleIdMapMetaNew, err := module_meta.GetIdMap_tx(ctx, tx)
+ if err != nil {
+ return err
+ }
+ Schema_mx.Lock()
+ defer Schema_mx.Unlock()
+
+ // apply deletions if relevant
+ for id, _ := range moduleIdMapMeta {
+ if _, exists := moduleIdMapMetaNew[id]; !exists {
+ delete(ModuleIdMap, id)
+ delete(moduleIdMapJson, id)
+ }
+ }
+
+ // set new meta data
+ moduleIdMapMeta = moduleIdMapMetaNew
+ return nil
}
-// update module schema cache in memory
-// takes either single module ID for specific update or NULL for updating all modules
-// can just load schema or create a new version timestamp, which forces reload on clients
-func UpdateSchema(newVersion bool, moduleIdsUpdateOnly []uuid.UUID) error {
+// load all modules into the schema cache
+func LoadSchema_tx(ctx context.Context, tx pgx.Tx) error {
+ return UpdateSchema_tx(ctx, tx, maps.Keys(moduleIdMapMeta), true)
+}
+
+// update module schema cache
+func UpdateSchema_tx(ctx context.Context, tx pgx.Tx, moduleIds []uuid.UUID, initialLoad bool) error {
var err error
- // update schema cache
- if err := updateSchemaCache(moduleIdsUpdateOnly); err != nil {
+ if err := updateSchemaCache_tx(ctx, tx, moduleIds); err != nil {
return err
}
// renew caches, affected by potentially changed modules (preset records, login access)
renewIcsFields()
- if err := renewPresetRecordIds(); err != nil {
+ if err := renewPresetRecordIds_tx(ctx, tx); err != nil {
return err
}
// create JSON copy of schema cache for fast retrieval
- schemaCache := schemaCacheType{
- Modules: make([]types.Module, 0),
- PresetRecordIds: GetPresetRecordIds(),
- }
- schemaCache.ModuleOptions, err = module_option.Get()
- if err != nil {
- return err
- }
- for _, id := range moduleIdsOrdered {
- schemaCache.Modules = append(schemaCache.Modules, ModuleIdMap[id])
- }
-
- schemaCacheJson, err = json.Marshal(schemaCache)
- if err != nil {
- return err
+ for _, id := range moduleIds {
+ Schema_mx.Lock()
+ moduleIdMapJson[id], err = json.Marshal(ModuleIdMap[id])
+ Schema_mx.Unlock()
+ if err != nil {
+ return err
+ }
}
- // set schema timestamp
- // keep timestamp if nothing changed (cache reuse) or renew it (cache refresh)
- if !newVersion {
- schemaTimestamp = int64(config.GetUint64("schemaTimestamp"))
+ if initialLoad {
return nil
}
- schemaTimestamp = tools.GetTimeUnix()
- tx, err := db.Pool.Begin(db.Ctx)
- if err != nil {
+ // update change date for updated modules
+ now := tools.GetTimeUnix()
+ if err := module_meta.SetDateChange_tx(ctx, tx, moduleIds, now); err != nil {
return err
}
- defer tx.Rollback(db.Ctx)
- if err := config.SetUint64_tx(tx, "schemaTimestamp", uint64(schemaTimestamp)); err != nil {
- return err
+ // update module meta cache
+ Schema_mx.Lock()
+ for _, id := range moduleIds {
+ meta, exists := moduleIdMapMeta[id]
+ if !exists {
+ meta, err = module_meta.Get_tx(ctx, tx, id)
+ if err != nil {
+ return err
+ }
+ }
+ meta.DateChange = now
+ moduleIdMapMeta[id] = meta
}
- return tx.Commit(db.Ctx)
+ Schema_mx.Unlock()
+ return nil
}
-func updateSchemaCache(moduleIdsUpdateOnly []uuid.UUID) error {
+func updateSchemaCache_tx(ctx context.Context, tx pgx.Tx, moduleIds []uuid.UUID) error {
Schema_mx.Lock()
defer Schema_mx.Unlock()
- allModules := len(moduleIdsUpdateOnly) == 0
-
- if allModules {
- log.Info("cache", "starting schema processing for all modules")
- moduleIdsOrdered = make([]uuid.UUID, 0)
- ModuleIdMap = make(map[uuid.UUID]types.Module)
- ModuleApiNameMapId = make(map[string]map[string]uuid.UUID)
- RelationIdMap = make(map[uuid.UUID]types.Relation)
- AttributeIdMap = make(map[uuid.UUID]types.Attribute)
- RoleIdMap = make(map[uuid.UUID]types.Role)
- PgFunctionIdMap = make(map[uuid.UUID]types.PgFunction)
- ApiIdMap = make(map[uuid.UUID]types.Api)
- } else {
- log.Info("cache", "starting schema processing for one module")
- }
+ log.Info("cache", fmt.Sprintf("starting schema processing for %d module(s)", len(moduleIds)))
- mods, err := module.Get(moduleIdsUpdateOnly)
+ mods, err := module.Get_tx(ctx, tx, moduleIds)
if err != nil {
return err
}
for _, mod := range mods {
-
- if allModules {
- // store returned module order to create ordered cache
- moduleIdsOrdered = append(moduleIdsOrdered, mod.Id)
- }
-
log.Info("cache", fmt.Sprintf("parsing module '%s'", mod.Name))
mod.Relations = make([]types.Relation, 0)
mod.Forms = make([]types.Form, 0)
- mod.Menus = make([]types.Menu, 0)
+ mod.MenuTabs = make([]types.MenuTab, 0)
mod.Icons = make([]types.Icon, 0)
mod.Roles = make([]types.Role, 0)
mod.Articles = make([]types.Article, 0)
@@ -168,12 +171,15 @@ func updateSchemaCache(moduleIdsUpdateOnly []uuid.UUID) error {
mod.JsFunctions = make([]types.JsFunction, 0)
mod.Collections = make([]types.Collection, 0)
mod.Apis = make([]types.Api, 0)
+ mod.ClientEvents = make([]types.ClientEvent, 0)
+ mod.Variables = make([]types.Variable, 0)
+ mod.Widgets = make([]types.Widget, 0)
ModuleApiNameMapId[mod.Name] = make(map[string]uuid.UUID)
// get articles
log.Info("cache", "load articles")
- mod.Articles, err = article.Get(mod.Id)
+ mod.Articles, err = article.Get_tx(ctx, tx, mod.Id)
if err != nil {
return err
}
@@ -181,7 +187,7 @@ func updateSchemaCache(moduleIdsUpdateOnly []uuid.UUID) error {
// get relations
log.Info("cache", "load relations")
- rels, err := relation.Get(mod.Id)
+ rels, err := relation.Get_tx(ctx, tx, mod.Id)
if err != nil {
return err
}
@@ -189,7 +195,7 @@ func updateSchemaCache(moduleIdsUpdateOnly []uuid.UUID) error {
for _, rel := range rels {
// get attributes
- atrs, err := attribute.Get(rel.Id)
+ atrs, err := attribute.Get_tx(ctx, tx, rel.Id)
if err != nil {
return err
}
@@ -201,19 +207,13 @@ func updateSchemaCache(moduleIdsUpdateOnly []uuid.UUID) error {
}
// get indexes
- rel.Indexes, err = pgIndex.Get(rel.Id)
+ rel.Indexes, err = pgIndex.Get_tx(ctx, tx, rel.Id)
if err != nil {
return err
}
// get presets
- rel.Presets, err = preset.Get(rel.Id)
- if err != nil {
- return err
- }
-
- // get triggers
- rel.Triggers, err = pgTrigger.Get(rel.Id)
+ rel.Presets, err = preset.Get_tx(ctx, tx, rel.Id)
if err != nil {
return err
}
@@ -226,15 +226,15 @@ func updateSchemaCache(moduleIdsUpdateOnly []uuid.UUID) error {
// get forms
log.Info("cache", "load forms")
- mod.Forms, err = form.Get(mod.Id, []uuid.UUID{})
+ mod.Forms, err = form.Get_tx(ctx, tx, mod.Id, []uuid.UUID{})
if err != nil {
return err
}
- // get menus
- log.Info("cache", "load menus")
+ // get menu tabs
+ log.Info("cache", "load menu tabs")
- mod.Menus, err = menu.Get(mod.Id, pgtype.UUID{})
+ mod.MenuTabs, err = menuTab.Get_tx(ctx, tx, mod.Id)
if err != nil {
return err
}
@@ -242,7 +242,7 @@ func updateSchemaCache(moduleIdsUpdateOnly []uuid.UUID) error {
// get icons
log.Info("cache", "load icons")
- mod.Icons, err = icon.Get(mod.Id)
+ mod.Icons, err = icon.Get_tx(ctx, tx, mod.Id)
if err != nil {
return err
}
@@ -250,7 +250,7 @@ func updateSchemaCache(moduleIdsUpdateOnly []uuid.UUID) error {
// get roles
log.Info("cache", "load roles")
- mod.Roles, err = role.Get(mod.Id)
+ mod.Roles, err = role.Get_tx(ctx, tx, mod.Id)
if err != nil {
return err
}
@@ -263,7 +263,13 @@ func updateSchemaCache(moduleIdsUpdateOnly []uuid.UUID) error {
// get login forms
log.Info("cache", "load login forms")
- mod.LoginForms, err = loginForm.Get(mod.Id)
+ mod.LoginForms, err = loginForm.Get_tx(ctx, tx, mod.Id)
+ if err != nil {
+ return err
+ }
+
+ // get triggers
+ mod.PgTriggers, err = pgTrigger.Get_tx(ctx, tx, mod.Id)
if err != nil {
return err
}
@@ -271,7 +277,7 @@ func updateSchemaCache(moduleIdsUpdateOnly []uuid.UUID) error {
// store & backfill PG functions
log.Info("cache", "load PG functions")
- mod.PgFunctions, err = pgFunction.Get(mod.Id)
+ mod.PgFunctions, err = pgFunction.Get_tx(ctx, tx, mod.Id)
if err != nil {
return err
}
@@ -282,7 +288,7 @@ func updateSchemaCache(moduleIdsUpdateOnly []uuid.UUID) error {
// get JS functions
log.Info("cache", "load JS functions")
- mod.JsFunctions, err = jsFunction.Get(mod.Id)
+ mod.JsFunctions, err = jsFunction.Get_tx(ctx, tx, mod.Id)
if err != nil {
return err
}
@@ -290,7 +296,7 @@ func updateSchemaCache(moduleIdsUpdateOnly []uuid.UUID) error {
// get collections
log.Info("cache", "load collections")
- mod.Collections, err = collection.Get(mod.Id)
+ mod.Collections, err = collection.Get_tx(ctx, tx, mod.Id)
if err != nil {
return err
}
@@ -298,7 +304,7 @@ func updateSchemaCache(moduleIdsUpdateOnly []uuid.UUID) error {
// get APIs
log.Info("cache", "load APIs")
- mod.Apis, err = api.Get(mod.Id, uuid.Nil)
+ mod.Apis, err = api.Get_tx(ctx, tx, mod.Id, uuid.Nil)
if err != nil {
return err
}
@@ -307,6 +313,33 @@ func updateSchemaCache(moduleIdsUpdateOnly []uuid.UUID) error {
ModuleApiNameMapId[mod.Name][fmt.Sprintf("%s.v%d", a.Name, a.Version)] = a.Id
}
+ // get client events
+ log.Info("cache", "load client events")
+
+ mod.ClientEvents, err = clientEvent.Get_tx(ctx, tx, mod.Id)
+ if err != nil {
+ return err
+ }
+ for _, ce := range mod.ClientEvents {
+ ClientEventIdMap[ce.Id] = ce
+ }
+
+ // get variables
+ log.Info("cache", "load variables")
+
+ mod.Variables, err = variable.Get_tx(ctx, tx, mod.Id)
+ if err != nil {
+ return err
+ }
+
+ // get widgets
+ log.Info("cache", "load widgets")
+
+ mod.Widgets, err = widget.Get_tx(ctx, tx, mod.Id)
+ if err != nil {
+ return err
+ }
+
// update cache map with parsed module
ModuleIdMap[mod.Id] = mod
}
diff --git a/cache/clients/r3_client_amd64_linux.bin b/cache/clients/r3_client_amd64_linux.bin
index fb1a418a..86654269 100644
Binary files a/cache/clients/r3_client_amd64_linux.bin and b/cache/clients/r3_client_amd64_linux.bin differ
diff --git a/cache/clients/r3_client_amd64_mac.dmg b/cache/clients/r3_client_amd64_mac.dmg
index 8efa56c9..6a1f6610 100644
Binary files a/cache/clients/r3_client_amd64_mac.dmg and b/cache/clients/r3_client_amd64_mac.dmg differ
diff --git a/cache/clients/r3_client_amd64_win.exe b/cache/clients/r3_client_amd64_win.exe
index 21392ff0..b4bde242 100644
Binary files a/cache/clients/r3_client_amd64_win.exe and b/cache/clients/r3_client_amd64_win.exe differ
diff --git a/cache/clients/r3_client_arm64_linux.bin b/cache/clients/r3_client_arm64_linux.bin
index 060473cd..e5c81d81 100644
Binary files a/cache/clients/r3_client_arm64_linux.bin and b/cache/clients/r3_client_arm64_linux.bin differ
diff --git a/cache/packages/core_company.rei3 b/cache/packages/core_company.rei3
index 9a9bae08..30d694af 100644
Binary files a/cache/packages/core_company.rei3 and b/cache/packages/core_company.rei3 differ
diff --git a/cluster/cluster.go b/cluster/cluster.go
index 4bf8d9ff..449e671a 100644
--- a/cluster/cluster.go
+++ b/cluster/cluster.go
@@ -1,6 +1,7 @@
package cluster
import (
+ "context"
"encoding/json"
"r3/cache"
"r3/config"
@@ -11,21 +12,17 @@ import (
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
+ "github.com/jackc/pgx/v5/pgtype"
)
var (
SchedulerRestart = make(chan bool, 10)
- websocketClientCount int
- WebsocketClientEvents = make(chan types.ClusterWebsocketClientEvent, 10)
+ WebsocketClientEvents = make(chan types.ClusterEvent, 10)
)
-func SetWebsocketClientCount(value int) {
- websocketClientCount = value
-}
-
// register cluster node with shared database
// read existing node ID from configuration file if exists
-func StartNode() error {
+func StartNode_tx(ctx context.Context, tx pgx.Tx) error {
// create node ID for itself if it does not exist yet
if config.File.Cluster.NodeId == "" {
@@ -50,7 +47,7 @@ func StartNode() error {
// check whether node is already registered
var nodeName string
- err = db.Pool.QueryRow(db.Ctx, `
+ err = tx.QueryRow(ctx, `
SELECT name
FROM instance_cluster.node
WHERE id = $1
@@ -63,23 +60,23 @@ func StartNode() error {
if !exists {
// generate new node name
- if err := db.Pool.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT CONCAT('node',(COUNT(*)+1)::TEXT)
FROM instance_cluster.node
`).Scan(&nodeName); err != nil {
return err
}
- if _, err := db.Pool.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO instance_cluster.node (id,name,hostname,date_started,
- date_check_in,stat_sessions,stat_memory,cluster_master,running)
- VALUES ($1,$2,$3,$4,0,-1,-1,false,true)
- `, nodeId, nodeName, cache.GetHostname(), tools.GetTimeUnix()); err != nil {
+ date_check_in,stat_memory,cluster_master,running)
+ VALUES ($1,$2,$3,$4,0,-1,false,true)
+ `, nodeId, nodeName, config.GetHostname(), tools.GetTimeUnix()); err != nil {
return err
}
} else {
// node is starting up - set start time, disable master role and delete missed events
- if _, err := db.Pool.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE instance_cluster.node
SET date_started = $1, cluster_master = false, running = true
WHERE id = $2
@@ -87,7 +84,7 @@ func StartNode() error {
return err
}
- if _, err := db.Pool.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM instance_cluster.node_event
WHERE node_id = $1
`, nodeId); err != nil {
@@ -101,28 +98,28 @@ func StartNode() error {
log.SetNodeId(nodeId)
return nil
}
-func StopNode() error {
+func StopNode(ctx context.Context) error {
// on shutdown: Give up master role and disable running state
- _, err := db.Pool.Exec(db.Ctx, `
+ _, err := db.Pool.Exec(ctx, `
UPDATE instance_cluster.node
SET cluster_master = false, running = false
WHERE id = $1
`, cache.GetNodeId())
return err
}
-func DelNode_tx(tx pgx.Tx, id uuid.UUID) error {
- _, err := db.Pool.Exec(db.Ctx, `
+func DelNode_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
+ _, err := tx.Exec(ctx, `
DELETE FROM instance_cluster.node
WHERE id = $1
`, id)
return err
}
-func GetNodes() ([]types.ClusterNode, error) {
+func GetNodes_tx(ctx context.Context, tx pgx.Tx) ([]types.ClusterNode, error) {
nodes := make([]types.ClusterNode, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT id, name, hostname, cluster_master, running,
- date_check_in, date_started, stat_memory, stat_sessions
+ date_check_in, date_started, stat_memory
FROM instance_cluster.node
ORDER BY name
`)
@@ -135,8 +132,7 @@ func GetNodes() ([]types.ClusterNode, error) {
var n types.ClusterNode
if err := rows.Scan(&n.Id, &n.Name, &n.Hostname, &n.ClusterMaster,
- &n.Running, &n.DateCheckIn, &n.DateStarted, &n.StatMemory,
- &n.StatSessions); err != nil {
+ &n.Running, &n.DateCheckIn, &n.DateStarted, &n.StatMemory); err != nil {
return nodes, err
}
@@ -144,8 +140,8 @@ func GetNodes() ([]types.ClusterNode, error) {
}
return nodes, nil
}
-func SetNode_tx(tx pgx.Tx, id uuid.UUID, name string) error {
- _, err := db.Pool.Exec(db.Ctx, `
+func SetNode_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID, name string) error {
+ _, err := tx.Exec(ctx, `
UPDATE instance_cluster.node
SET name = $1
WHERE id = $2
@@ -154,29 +150,60 @@ func SetNode_tx(tx pgx.Tx, id uuid.UUID, name string) error {
}
// helper
-func CreateEventForNode(nodeId uuid.UUID, content string, payload interface{}) error {
+// creates node events to some nodes (by node IDs) or all but the current node (if no node IDs are given)
+func CreateEventForNodes_tx(ctx context.Context, tx pgx.Tx, nodeIds []uuid.UUID, content string, payload interface{}, target types.ClusterEventTarget) error {
payloadJson, err := json.Marshal(payload)
if err != nil {
return err
}
- _, err = db.Pool.Exec(db.Ctx, `
- INSERT INTO instance_cluster.node_event (node_id,content,payload)
- VALUES ($1,$2,$3)
- `, nodeId, content, payloadJson)
- return err
-}
-func createEventsForOtherNodes(content string, payload interface{}) error {
- payloadJson, err := json.Marshal(payload)
- if err != nil {
- return err
+ address := pgtype.Text{
+ String: target.Address,
+ Valid: target.Address != "",
+ }
+ device := pgtype.Int2{
+ Int16: int16(target.Device),
+ Valid: target.Device != 0,
+ }
+ loginId := pgtype.Int8{
+ Int64: target.LoginId,
+ Valid: target.LoginId != 0,
}
- _, err = db.Pool.Exec(db.Ctx, `
- INSERT INTO instance_cluster.node_event (node_id,content,payload)
- SELECT id,$1,$2
- FROM instance_cluster.node
- WHERE id <> $3
- `, content, payloadJson, cache.GetNodeId())
- return err
+ // only generate events for nodes that have checked in within the last hour
+ // node events are temporary and not relevant for nodes checking in after the fact
+ checkInCutOff := tools.GetTimeUnix() - 3600
+
+ if len(nodeIds) == 0 {
+ // if no node IDs are defined, apply to all other nodes
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO instance_cluster.node_event (
+ node_id, content, payload, target_address,
+ target_device, target_login_id
+ )
+ SELECT id, $1, $2, $3, $4, $5
+ FROM instance_cluster.node
+ WHERE id <> $6
+ AND date_check_in > $7
+ `, content, payloadJson, address, device, loginId, cache.GetNodeId(), checkInCutOff); err != nil {
+ return err
+ }
+ } else {
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO instance_cluster.node_event (
+ node_id, content, payload, target_address,
+ target_device, target_login_id
+ )
+ SELECT id, $1, $2, $3, $4, $5
+ FROM instance_cluster.node
+ WHERE id = ANY($6)
+ AND date_check_in > $7
+ `, content, payloadJson, address, device, loginId, nodeIds, checkInCutOff); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+func createEventsForOtherNodes_tx(ctx context.Context, tx pgx.Tx, content string, payload interface{}, target types.ClusterEventTarget) error {
+ return CreateEventForNodes_tx(ctx, tx, []uuid.UUID{}, content, payload, target)
}
diff --git a/cluster/cluster_tasks.go b/cluster/cluster_tasks.go
index 44e6f78e..01169d56 100644
--- a/cluster/cluster_tasks.go
+++ b/cluster/cluster_tasks.go
@@ -1,8 +1,8 @@
package cluster
import (
+ "context"
"fmt"
- "r3/activation"
"r3/bruteforce"
"r3/cache"
"r3/config"
@@ -22,20 +22,17 @@ func CheckInNode() error {
var m runtime.MemStats
runtime.ReadMemStats(&m)
- if _, err := db.Pool.Exec(db.Ctx, `
+ if _, err := db.Pool.Exec(context.Background(), `
UPDATE instance_cluster.node
- SET date_check_in = $1, hostname = $2,
- stat_memory = $3, stat_sessions = $4
- WHERE id = $5
- `, tools.GetTimeUnix(), cache.GetHostname(), (m.Sys / 1024 / 1024),
- websocketClientCount, cache.GetNodeId()); err != nil {
-
+ SET date_check_in = $1, hostname = $2, stat_memory = $3
+ WHERE id = $4
+ `, tools.GetTimeUnix(), config.GetHostname(), (m.Sys / 1024 / 1024), cache.GetNodeId()); err != nil {
return err
}
// check whether current cluster master is doing its job
var masterLastCheckIn int64
- if err := db.Pool.QueryRow(db.Ctx, `
+ if err := db.Pool.QueryRow(context.Background(), `
SELECT date_check_in
FROM instance_cluster.node
WHERE cluster_master
@@ -47,7 +44,7 @@ func CheckInNode() error {
log.Info("cluster", "node has recognized an absent master, requesting role for itself")
// cluster master missing, request cluster master role for this node
- if _, err := db.Pool.Exec(db.Ctx, `
+ if _, err := db.Pool.Exec(context.Background(), `
SELECT instance_cluster.master_role_request($1)
`, cache.GetNodeId()); err != nil {
return err
@@ -57,137 +54,219 @@ func CheckInNode() error {
}
// events relevant to all cluster nodes
-func CollectionUpdated(collectionId uuid.UUID, loginIds []int64) error {
+func ClientEventsChanged_tx(ctx context.Context, tx pgx.Tx, updateNodes bool, address string, loginId int64) error {
+ target := types.ClusterEventTarget{Address: address, Device: types.WebsocketClientDeviceFatClient, LoginId: loginId}
- if len(loginIds) == 0 {
- // no logins defined, update for all
- WebsocketClientEvents <- types.ClusterWebsocketClientEvent{LoginId: 0, CollectionChanged: collectionId}
- return nil
+ if updateNodes {
+ if err := createEventsForOtherNodes_tx(ctx, tx, "clientEventsChanged", nil, target); err != nil {
+ return err
+ }
}
-
- // logins defined, update for specific logins
- for _, id := range loginIds {
- WebsocketClientEvents <- types.ClusterWebsocketClientEvent{LoginId: id, CollectionChanged: collectionId}
+ WebsocketClientEvents <- types.ClusterEvent{
+ Content: "clientEventsChanged",
+ Payload: nil,
+ Target: target,
}
return nil
}
-func ConfigChanged(updateNodes bool, loadConfigFromDb bool, switchToMaintenance bool) error {
+func CollectionsUpdated(updates []types.ClusterEventCollectionUpdated) {
+
+ // if triggers are badly designed or bulk updates executed, many identical collection updates can be triggered at once
+ collectionIdMapGlobal := make(map[uuid.UUID]bool)
+ collectionIdMapLogins := make(map[uuid.UUID]map[int64]bool)
+
+ // first, go through global collection updates (for all logins)
+ for _, upd := range updates {
+ if len(upd.LoginIds) != 0 {
+ continue
+ }
+ if _, exists := collectionIdMapGlobal[upd.CollectionId]; !exists {
+ collectionIdMapGlobal[upd.CollectionId] = true
+
+ WebsocketClientEvents <- types.ClusterEvent{
+ Content: "collectionChanged",
+ Payload: upd.CollectionId,
+ Target: types.ClusterEventTarget{Device: types.WebsocketClientDeviceBrowser},
+ }
+ }
+ }
+
+ // go through collection updates for specific logins
+ for _, upd := range updates {
+ if len(upd.LoginIds) == 0 {
+ continue
+ }
+
+ // no need to update for specific logins, if global update already exists
+ if _, exists := collectionIdMapGlobal[upd.CollectionId]; exists {
+ continue
+ }
+
+ // update for specific logins, if not done already
+ if _, exists := collectionIdMapLogins[upd.CollectionId]; !exists {
+ collectionIdMapLogins[upd.CollectionId] = make(map[int64]bool)
+ }
+
+ for _, loginId := range upd.LoginIds {
+ if _, exists := collectionIdMapLogins[upd.CollectionId][loginId]; !exists {
+ collectionIdMapLogins[upd.CollectionId][loginId] = true
+
+ WebsocketClientEvents <- types.ClusterEvent{
+ Content: "collectionChanged",
+ Payload: upd.CollectionId,
+ Target: types.ClusterEventTarget{Device: types.WebsocketClientDeviceBrowser, LoginId: loginId},
+ }
+ }
+ }
+ }
+}
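+// Example (hypothetical collection IDs C1/C2 and login 7, not taken from any schema):
+// given updates = [{C1, all logins}, {C1, login 7}, {C2, login 7}, {C2, login 7}],
+// the de-duplication above emits one global "collectionChanged" event for C1
+// (the C1/login-7 update is skipped because the global event already covers it)
+// and exactly one per-login event for C2 and login 7.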
+func ConfigChanged_tx(ctx context.Context, tx pgx.Tx, updateNodes bool, loadConfigFromDb bool, productionModeChange bool) error {
if updateNodes {
- if err := createEventsForOtherNodes("configChanged", types.ClusterEventConfigChanged{
- SwitchToMaintenance: switchToMaintenance,
- }); err != nil {
+ if err := createEventsForOtherNodes_tx(ctx, tx, "configChanged", productionModeChange, types.ClusterEventTarget{}); err != nil {
return err
}
}
// load all config settings from the database
if loadConfigFromDb {
- config.LoadFromDb()
- }
-
- // update websocket clients if relevant config changed
- if switchToMaintenance {
- WebsocketClientEvents <- types.ClusterWebsocketClientEvent{LoginId: 0, KickNonAdmin: true}
+ config.LoadFromDb_tx(ctx, tx)
}
// inform clients about changed config
- WebsocketClientEvents <- types.ClusterWebsocketClientEvent{LoginId: 0, ConfigChanged: true}
+ if productionModeChange {
+ WebsocketClientEvents <- types.ClusterEvent{Content: "kickNonAdmin"}
+ }
+ WebsocketClientEvents <- types.ClusterEvent{Content: "configChanged"}
// apply config to other areas
- activation.SetLicense()
bruteforce.SetConfig()
+ config.ActivateLicense()
config.SetLogLevels()
return nil
}
-func FilesCopied(updateNodes bool, loginId int64, attributeId uuid.UUID,
- fileIds []uuid.UUID, recordId int64) error {
+func FilesCopied_tx(ctx context.Context, tx pgx.Tx, updateNodes bool, address string, loginId int64,
+ attributeId uuid.UUID, fileIds []uuid.UUID, recordId int64) error {
+
+ target := types.ClusterEventTarget{Address: address, Device: types.WebsocketClientDeviceBrowser, LoginId: loginId}
+ payload := types.ClusterEventFilesCopied{
+ AttributeId: attributeId,
+ FileIds: fileIds,
+ RecordId: recordId,
+ }
if updateNodes {
- if err := createEventsForOtherNodes("filesCopied", types.ClusterEventFilesCopied{
- LoginId: loginId,
- AttributeId: attributeId,
- FileIds: fileIds,
- RecordId: recordId,
- }); err != nil {
+ if err := createEventsForOtherNodes_tx(ctx, tx, "filesCopied", payload, target); err != nil {
return err
}
}
- WebsocketClientEvents <- types.ClusterWebsocketClientEvent{
- LoginId: loginId,
- FilesCopiedAttributeId: attributeId,
- FilesCopiedFileIds: fileIds,
- FilesCopiedRecordId: recordId,
+ WebsocketClientEvents <- types.ClusterEvent{
+ Content: "filesCopied",
+ Payload: payload,
+ Target: target,
}
return nil
}
-func FileRequested(updateNodes bool, loginId int64, attributeId uuid.UUID,
+func FileRequested_tx(ctx context.Context, tx pgx.Tx, updateNodes bool, address string, loginId int64, attributeId uuid.UUID,
fileId uuid.UUID, fileHash string, fileName string, chooseApp bool) error {
+ target := types.ClusterEventTarget{Address: address, Device: types.WebsocketClientDeviceFatClient, LoginId: loginId}
+ payload := types.ClusterEventFileRequested{
+ AttributeId: attributeId,
+ ChooseApp: chooseApp,
+ FileId: fileId,
+ FileHash: fileHash,
+ FileName: fileName,
+ }
+
+ if updateNodes {
+ if err := createEventsForOtherNodes_tx(ctx, tx, "fileRequested", payload, target); err != nil {
+ return err
+ }
+ }
+ WebsocketClientEvents <- types.ClusterEvent{
+ Content: "fileRequested",
+ Payload: payload,
+ Target: target,
+ }
+ return nil
+}
+func JsFunctionCalled_tx(ctx context.Context, tx pgx.Tx, updateNodes bool, address string,
+ loginId int64, moduleId uuid.UUID, jsFunctionId uuid.UUID, arguments []interface{}) error {
+
+ target := types.ClusterEventTarget{Address: address, Device: types.WebsocketClientDeviceBrowser, LoginId: loginId, PwaModuleIdPreferred: moduleId}
+ payload := types.ClusterEventJsFunctionCalled{
+ ModuleId: moduleId,
+ JsFunctionId: jsFunctionId,
+ Arguments: arguments,
+ }
+
if updateNodes {
- if err := createEventsForOtherNodes("fileRequested", types.ClusterEventFileRequested{
- LoginId: loginId,
- AttributeId: attributeId,
- ChooseApp: chooseApp,
- FileId: fileId,
- FileHash: fileHash,
- FileName: fileName,
- }); err != nil {
+ if err := createEventsForOtherNodes_tx(ctx, tx, "jsFunctionCalled", payload, target); err != nil {
return err
}
}
- WebsocketClientEvents <- types.ClusterWebsocketClientEvent{
- LoginId: loginId,
- FileRequestedAttributeId: attributeId,
- FileRequestedChooseApp: chooseApp,
- FileRequestedFileId: fileId,
- FileRequestedFileHash: fileHash,
- FileRequestedFileName: fileName,
+ WebsocketClientEvents <- types.ClusterEvent{
+ Content: "jsFunctionCalled",
+ Payload: payload,
+ Target: target,
}
return nil
}
-func LoginDisabled(updateNodes bool, loginId int64) error {
+func KeystrokesRequested_tx(ctx context.Context, tx pgx.Tx, updateNodes bool, address string, loginId int64, keystrokes string) error {
+ target := types.ClusterEventTarget{Address: address, Device: types.WebsocketClientDeviceFatClient, LoginId: loginId}
if updateNodes {
- if err := createEventsForOtherNodes("loginDisabled", types.ClusterEventLogin{
- LoginId: loginId,
- }); err != nil {
+ if err := createEventsForOtherNodes_tx(ctx, tx, "keystrokesRequested", keystrokes, target); err != nil {
return err
}
}
- WebsocketClientEvents <- types.ClusterWebsocketClientEvent{LoginId: loginId, Kick: true}
+ WebsocketClientEvents <- types.ClusterEvent{
+ Content: "keystrokesRequested",
+ Payload: keystrokes,
+ Target: target,
+ }
+ return nil
+}
+func LoginDisabled_tx(ctx context.Context, tx pgx.Tx, updateNodes bool, loginId int64) error {
+ target := types.ClusterEventTarget{LoginId: loginId}
+ if updateNodes {
+ if err := createEventsForOtherNodes_tx(ctx, tx, "loginDisabled", nil, target); err != nil {
+ return err
+ }
+ }
+ WebsocketClientEvents <- types.ClusterEvent{Content: "kick", Target: target}
return nil
}
-func LoginReauthorized(updateNodes bool, loginId int64) error {
+func LoginReauthorized_tx(ctx context.Context, tx pgx.Tx, updateNodes bool, loginId int64) error {
+ target := types.ClusterEventTarget{LoginId: loginId}
if updateNodes {
- if err := createEventsForOtherNodes("loginReauthorized", types.ClusterEventLogin{
- LoginId: loginId,
- }); err != nil {
+ if err := createEventsForOtherNodes_tx(ctx, tx, "loginReauthorized", nil, target); err != nil {
return err
}
}
// renew access cache
- if err := cache.RenewAccessById(loginId); err != nil {
+ if err := cache.RenewAccessById_tx(ctx, tx, loginId); err != nil {
return err
}
// inform client to retrieve new access cache
- WebsocketClientEvents <- types.ClusterWebsocketClientEvent{LoginId: loginId, Renew: true}
+ WebsocketClientEvents <- types.ClusterEvent{Content: "renew", Target: target}
return nil
}
-func LoginReauthorizedAll(updateNodes bool) error {
+func LoginReauthorizedAll_tx(ctx context.Context, tx pgx.Tx, updateNodes bool) error {
if updateNodes {
- if err := createEventsForOtherNodes("loginReauthorizedAll", nil); err != nil {
+ if err := createEventsForOtherNodes_tx(ctx, tx, "loginReauthorizedAll", nil, types.ClusterEventTarget{}); err != nil {
return err
}
}
// renew access cache for all logins
- if err := cache.RenewAccessAll(); err != nil {
+ if err := cache.RenewAccessAll_tx(ctx, tx); err != nil {
return err
}
// inform clients to retrieve new access cache
- WebsocketClientEvents <- types.ClusterWebsocketClientEvent{LoginId: 0, Renew: true}
+ WebsocketClientEvents <- types.ClusterEvent{Content: "renew"}
return nil
}
func MasterAssigned(state bool) error {
@@ -198,48 +277,51 @@ func MasterAssigned(state bool) error {
SchedulerRestart <- true
return nil
}
-func SchemaChangedAll(updateNodes bool, newVersion bool) error {
- return SchemaChanged(updateNodes, newVersion, make([]uuid.UUID, 0))
-}
-func SchemaChanged(updateNodes bool, newVersion bool, moduleIdsUpdateOnly []uuid.UUID) error {
+func SchemaChanged_tx(ctx context.Context, tx pgx.Tx, updateNodes bool, moduleIds []uuid.UUID) error {
+ target := types.ClusterEventTarget{Device: types.WebsocketClientDeviceBrowser}
+
if updateNodes {
- if err := createEventsForOtherNodes("schemaChanged", types.ClusterEventSchemaChanged{
- ModuleIdsUpdateOnly: moduleIdsUpdateOnly,
- NewVersion: newVersion,
- }); err != nil {
+ if err := createEventsForOtherNodes_tx(ctx, tx, "schemaChanged", moduleIds, target); err != nil {
return err
}
}
// inform all clients about schema reloading
- WebsocketClientEvents <- types.ClusterWebsocketClientEvent{LoginId: 0, SchemaLoading: true}
+ WebsocketClientEvents <- types.ClusterEvent{Content: "schemaLoading", Target: target}
+ // inform all clients about schema loading being finished, regardless of success or error
defer func() {
- // inform regardless of success or error
- WebsocketClientEvents <- types.ClusterWebsocketClientEvent{
- LoginId: 0,
- SchemaTimestamp: int64(config.GetUint64("schemaTimestamp"))}
+ WebsocketClientEvents <- types.ClusterEvent{Content: "schemaLoaded", Target: target}
}()
- if err := cache.UpdateSchema(newVersion, moduleIdsUpdateOnly); err != nil {
- return err
- }
+ if len(moduleIds) != 0 {
+ // modules were changed, update schema & access cache
+ if err := cache.UpdateSchema_tx(ctx, tx, moduleIds, false); err != nil {
+ return err
+ }
+ if err := cache.RenewAccessAll_tx(ctx, tx); err != nil {
+ return err
+ }
- // renew access cache for all logins
- if err := cache.RenewAccessAll(); err != nil {
- return err
+ // inform clients to retrieve new access cache
+ WebsocketClientEvents <- types.ClusterEvent{Content: "renew"}
+ } else {
+ // no module IDs are given if modules were deleted, module options were changed, or custom captions were updated
+ if err := cache.LoadModuleIdMapMeta_tx(ctx, tx); err != nil {
+ return err
+ }
+ if err := cache.LoadCaptionMapCustom_tx(ctx, tx); err != nil {
+ return err
+ }
}
- // reload scheduler as module schedules could have changed
+ // reload scheduler as module schedules could have changed (modules changed or deleted)
SchedulerRestart <- true
-
- // inform clients to retrieve new access cache
- WebsocketClientEvents <- types.ClusterWebsocketClientEvent{LoginId: 0, Renew: true}
return nil
}
-func TasksChanged(updateNodes bool) error {
+func TasksChanged_tx(ctx context.Context, tx pgx.Tx, updateNodes bool) error {
if updateNodes {
- if err := createEventsForOtherNodes("tasksChanged", nil); err != nil {
+ if err := createEventsForOtherNodes_tx(ctx, tx, "tasksChanged", nil, types.ClusterEventTarget{}); err != nil {
return err
}
}
diff --git a/compatible/compatible.go b/compatible/compatible.go
deleted file mode 100644
index cbcd725f..00000000
--- a/compatible/compatible.go
+++ /dev/null
@@ -1,249 +0,0 @@
-/* central package for fixing issues with modules from older versions */
-package compatible
-
-import (
- "encoding/json"
- "fmt"
- "r3/db"
- "r3/tools"
- "r3/types"
-
- "github.com/gofrs/uuid"
- "github.com/jackc/pgx/v5"
- "github.com/jackc/pgx/v5/pgtype"
-)
-
-// < 3.3
-// migrate attribute content use
-func FixAttributeContentUse(contentUse string) string {
- if contentUse == "" {
- return "default"
- }
- return contentUse
-}
-func MigrateDisplayToContentUse_tx(tx pgx.Tx, attributeId uuid.UUID, display string) (string, error) {
-
- if tools.StringInSlice(display, []string{"textarea",
- "richtext", "date", "datetime", "time", "color"}) {
-
- _, err := tx.Exec(db.Ctx, `
- UPDATE app.attribute
- SET content_use = $1
- WHERE id = $2
- `, display, attributeId)
-
- return "default", err
- }
- return display, nil
-}
-
-// < 3.2
-// migrate old module/form help pages to help articles
-func FixCaptions_tx(tx pgx.Tx, entity string, entityId uuid.UUID, captionMap types.CaptionMap) (types.CaptionMap, error) {
-
- var articleId uuid.UUID
- var moduleId uuid.UUID
- var name string
-
- switch entity {
- case "module":
- moduleId = entityId
- name = "Migrated from application help"
- case "form":
- if err := tx.QueryRow(db.Ctx, `
- SELECT module_id, CONCAT('Migrated from form help of ', name)
- FROM app.form
- WHERE id = $1
- `, entityId).Scan(&moduleId, &name); err != nil {
- return captionMap, err
- }
- default:
- return captionMap, fmt.Errorf("invalid entity for help->article migration '%s'", entity)
- }
-
- for content, langMap := range captionMap {
- if content != "moduleHelp" && content != "formHelp" {
- continue
- }
-
- // delete outdated caption entry
- delete(captionMap, content)
-
- // check whether there is anything to migrate
- anyValue := false
- for _, value := range langMap {
- if value != "" {
- anyValue = true
- break
- }
- }
- if !anyValue {
- continue
- }
-
- // check edge case: installed < 3.2 module gets another < 3.2 update
- // this would cause duplicates of migration articles
- // solution: we do not touch migrated articles until a version >= 3.2 is released,
- // in which module authors can handle/update the migrated articles
- exists := false
- if err := tx.QueryRow(db.Ctx, `
- SELECT EXISTS (
- SELECT id
- FROM app.article
- WHERE module_id = $1
- AND name = $2
- )
- `, moduleId, name).Scan(&exists); err != nil {
- return captionMap, err
- }
- if exists {
- continue
- }
-
- if err := tx.QueryRow(db.Ctx, `
- INSERT INTO app.article (id, module_id, name)
- VALUES (gen_random_uuid(), $1, $2)
- RETURNING id
- `, moduleId, name).Scan(&articleId); err != nil {
- return captionMap, err
- }
-
- for langCode, value := range langMap {
- if _, err := tx.Exec(db.Ctx, `
- INSERT INTO app.caption (article_id, content, language_code, value)
- VALUES ($1, 'articleBody', $2, $3)
- `, articleId, langCode, value); err != nil {
- return captionMap, err
- }
- }
-
- switch content {
- case "moduleHelp":
- if _, err := tx.Exec(db.Ctx, `
- INSERT INTO app.article_help (article_id, module_id, position)
- VALUES ($1, $2, 0)
- `, articleId, moduleId); err != nil {
- return captionMap, err
- }
- case "formHelp":
- if _, err := tx.Exec(db.Ctx, `
- INSERT INTO app.article_form (article_id, form_id, position)
- VALUES ($1, $2, 0)
- `, articleId, entityId); err != nil {
- return captionMap, err
- }
- }
- }
- return captionMap, nil
-}
-
-// < 3.1
-// fix legacy file attribute format
-func FixLegacyFileAttributeValue(jsonValue []byte) []types.DataGetValueFile {
-
- // legacy format
- var files struct {
- Files []types.DataGetValueFile `json:"files"`
- }
- if err := json.Unmarshal(jsonValue, &files); err == nil && len(files.Files) != 0 {
- return files.Files
- }
-
- // current format
- var filesNew []types.DataGetValueFile
- json.Unmarshal(jsonValue, &filesNew)
- return filesNew
-}
-
-// < 2.7
-// migrate to new format of form state conditions
-func MigrateNewConditions(c types.FormStateCondition) types.FormStateCondition {
-
- // if either sides content is filled, new version is used, nothing to do
- if c.Side0.Content != "" || c.Side1.Content != "" {
- return c
- }
-
- // set empty
- c.Side0.CollectionId.Valid = false
- c.Side0.ColumnId.Valid = false
- c.Side0.FieldId.Valid = false
- c.Side0.PresetId.Valid = false
- c.Side0.RoleId.Valid = false
- c.Side0.Value.Valid = false
- c.Side1.CollectionId.Valid = false
- c.Side1.ColumnId.Valid = false
- c.Side1.FieldId.Valid = false
- c.Side1.PresetId.Valid = false
- c.Side1.RoleId.Valid = false
- c.Side1.Value.Valid = false
-
- c.Side0.Brackets = c.Brackets0
- c.Side1.Brackets = c.Brackets1
-
- if c.FieldChanged.Valid {
- c.Side0.Content = "fieldChanged"
- c.Side1.Content = "true"
- c.Side0.FieldId = c.FieldId0
-
- c.Operator = "="
- if !c.FieldChanged.Bool {
- c.Operator = "<>"
- }
- } else if c.NewRecord.Valid {
- c.Side0.Content = "recordNew"
- c.Side1.Content = "true"
- c.Operator = "="
- if !c.NewRecord.Bool {
- c.Operator = "<>"
- }
- } else if c.RoleId.Valid {
- c.Side0.Content = "role"
- c.Side1.Content = "true"
- c.Side0.RoleId = c.RoleId
- } else {
- if c.FieldId0.Valid {
- c.Side0.Content = "field"
- c.Side0.FieldId = c.FieldId0
-
- if c.Operator == "IS NULL" || c.Operator == "IS NOT NULL" {
- c.Side1.Content = "value"
- }
- }
- if c.FieldId1.Valid {
- c.Side1.Content = "field"
- c.Side1.FieldId = c.FieldId1
- }
- if c.Login1.Valid {
- c.Side1.Content = "login"
- }
- if c.PresetId1.Valid {
- c.Side1.Content = "preset"
- c.Side1.PresetId = c.PresetId1
- }
- if c.Value1.Valid && c.Value1.String != "" {
- c.Side1.Content = "value"
- c.Side1.Value = c.Value1
- }
- }
- return c
-}
-
-// < 2.6
-// fix empty 'open form' entity for fields
-func FixMissingOpenForm(formIdOpen pgtype.UUID, attributeIdRecord pgtype.UUID,
- oForm types.OpenForm) types.OpenForm {
-
- // legacy option was used
- if formIdOpen.Valid {
- return types.OpenForm{
- FormIdOpen: formIdOpen.Bytes,
- AttributeIdApply: attributeIdRecord,
- RelationIndex: 0,
- PopUp: false,
- MaxHeight: 0,
- MaxWidth: 0,
- }
- }
- return oForm
-}
diff --git a/compress/compress.go b/compress/compress.go
deleted file mode 100644
index 7c9d47dd..00000000
--- a/compress/compress.go
+++ /dev/null
@@ -1,69 +0,0 @@
-package compress
-
-import (
- "archive/zip"
- "io"
- "os"
- "path/filepath"
- "strings"
-)
-
-func Path(zipPath string, sourcePath string) error {
-
- zipFile, err := os.Create(zipPath)
- if err != nil {
- return err
- }
- defer zipFile.Close()
-
- zipWriter := zip.NewWriter(zipFile)
- defer zipWriter.Close()
-
- sourcePathInfo, err := os.Stat(sourcePath)
- if err != nil {
- return err
- }
-
- var baseDir string
- if sourcePathInfo.IsDir() {
- baseDir = filepath.Base(sourcePath)
- }
-
- filepath.Walk(sourcePath, func(path string, info os.FileInfo, err error) error {
- if err != nil {
- return err
- }
-
- // ignore directories themselves
- // included files have header paths which include their respective paths
- if info.IsDir() {
- return nil
- }
-
- header, err := zip.FileInfoHeader(info)
- if err != nil {
- return err
- }
-
- if baseDir != "" {
- // trim prefix to remove source path from file path inside zip
- header.Name = strings.Trim(strings.TrimPrefix(path, filepath.Clean(sourcePath)), "/\\")
- }
- header.Method = zip.Deflate
-
- writer, err := zipWriter.CreateHeader(header)
- if err != nil {
- return err
- }
-
- file, err := os.Open(path)
- if err != nil {
- return err
- }
- defer file.Close()
-
- _, err = io.Copy(writer, file)
- return err
- })
- return nil
-}
diff --git a/config/captionMap/captionMap.go b/config/captionMap/captionMap.go
new file mode 100644
index 00000000..ad75e8c1
--- /dev/null
+++ b/config/captionMap/captionMap.go
@@ -0,0 +1,315 @@
+package captionMap
+
+import (
+ "context"
+ "fmt"
+ "r3/schema/caption"
+ "r3/types"
+ "slices"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+ "github.com/jackc/pgx/v5/pgtype"
+)
+
+var captionMapTargets = []string{"app", "instance"}
+
+func Get_tx(ctx context.Context, tx pgx.Tx, id pgtype.UUID, target string) (types.CaptionMapsAll, error) {
+ var caps types.CaptionMapsAll
+
+ if !slices.Contains(captionMapTargets, target) {
+ return caps, fmt.Errorf("invalid target '%s' for caption map", target)
+ }
+
+ caps.ArticleIdMap = make(map[uuid.UUID]types.CaptionMap)
+ caps.AttributeIdMap = make(map[uuid.UUID]types.CaptionMap)
+ caps.ClientEventIdMap = make(map[uuid.UUID]types.CaptionMap)
+ caps.ColumnIdMap = make(map[uuid.UUID]types.CaptionMap)
+ caps.FieldIdMap = make(map[uuid.UUID]types.CaptionMap)
+ caps.FormIdMap = make(map[uuid.UUID]types.CaptionMap)
+ caps.FormActionIdMap = make(map[uuid.UUID]types.CaptionMap)
+ caps.JsFunctionIdMap = make(map[uuid.UUID]types.CaptionMap)
+ caps.LoginFormIdMap = make(map[uuid.UUID]types.CaptionMap)
+ caps.MenuIdMap = make(map[uuid.UUID]types.CaptionMap)
+ caps.MenuTabIdMap = make(map[uuid.UUID]types.CaptionMap)
+ caps.ModuleIdMap = make(map[uuid.UUID]types.CaptionMap)
+ caps.PgFunctionIdMap = make(map[uuid.UUID]types.CaptionMap)
+ caps.QueryChoiceIdMap = make(map[uuid.UUID]types.CaptionMap)
+ caps.RoleIdMap = make(map[uuid.UUID]types.CaptionMap)
+ caps.TabIdMap = make(map[uuid.UUID]types.CaptionMap)
+ caps.WidgetIdMap = make(map[uuid.UUID]types.CaptionMap)
+
+ sqlSelect := `SELECT CASE
+ WHEN article_id IS NOT NULL THEN 'article'
+ WHEN attribute_id IS NOT NULL THEN 'attribute'
+ WHEN client_event_id IS NOT NULL THEN 'clientEvent'
+ WHEN column_id IS NOT NULL THEN 'column'
+ WHEN field_id IS NOT NULL THEN 'field'
+ WHEN form_action_id IS NOT NULL THEN 'formAction'
+ WHEN form_id IS NOT NULL THEN 'form'
+ WHEN js_function_id IS NOT NULL THEN 'jsFunction'
+ WHEN login_form_id IS NOT NULL THEN 'loginForm'
+ WHEN menu_id IS NOT NULL THEN 'menu'
+ WHEN menu_tab_id IS NOT NULL THEN 'menuTab'
+ WHEN module_id IS NOT NULL THEN 'module'
+ WHEN pg_function_id IS NOT NULL THEN 'pgFunction'
+ WHEN query_choice_id IS NOT NULL THEN 'queryChoice'
+ WHEN role_id IS NOT NULL THEN 'role'
+ WHEN tab_id IS NOT NULL THEN 'tab'
+ WHEN widget_id IS NOT NULL THEN 'widget'
+ END AS entity,
+ COALESCE(
+ article_id,
+ attribute_id,
+ client_event_id,
+ column_id,
+ field_id,
+ form_id,
+ form_action_id,
+ js_function_id,
+ login_form_id,
+ menu_id,
+ menu_tab_id,
+ module_id,
+ pg_function_id,
+ query_choice_id,
+ role_id,
+ tab_id,
+ widget_id
+ ) AS entity_id,
+ content,
+ language_code,
+ value`
+
+ // fetch all captions or only those belonging to a single module
+ var err error
+ var rows pgx.Rows
+ if !id.Valid {
+ rows, err = tx.Query(ctx, fmt.Sprintf(`%s FROM %s.caption`, sqlSelect, target))
+ } else {
+ rows, err = tx.Query(ctx, fmt.Sprintf(`
+ %s
+ FROM %s.caption
+ WHERE module_id = $1
+ OR attribute_id IN (
+ SELECT id FROM app.attribute WHERE relation_id IN (
+ SELECT id FROM app.relation WHERE module_id = $2
+ )
+ )
+ OR column_id IN (
+ SELECT id FROM app.column WHERE field_id IN (
+ SELECT id FROM app.field WHERE form_id IN (
+ SELECT id FROM app.form WHERE module_id = $3
+ )
+ )
+ OR collection_id IN (
+ SELECT id FROM app.collection WHERE module_id = $4
+ )
+ OR api_id IN (
+ SELECT id FROM app.api WHERE module_id = $5
+ )
+ )
+ OR field_id IN (
+ SELECT id FROM app.field WHERE form_id IN (
+ SELECT id FROM app.form WHERE module_id = $6
+ )
+ )
+ OR form_action_id IN (
+ SELECT id FROM app.form_action WHERE form_id IN (
+ SELECT id FROM app.form WHERE module_id = $7
+ )
+ )
+ OR menu_id IN (
+ SELECT id FROM app.menu WHERE menu_tab_id IN (
+ SELECT id FROM app.menu_tab WHERE module_id = $8
+ )
+ )
+ OR tab_id IN (
+ SELECT id FROM app.tab WHERE field_id IN (
+ SELECT id FROM app.field WHERE form_id IN (
+ SELECT id FROM app.form WHERE module_id = $9
+ )
+ )
+ )
+ OR query_choice_id IN (
+ SELECT id FROM app.query_choice WHERE query_id IN (
+ SELECT id FROM app.query
+ WHERE field_id IN (
+ SELECT id FROM app.field WHERE form_id IN (
+ SELECT id FROM app.form WHERE module_id = $10
+ )
+ )
+ -- only direct field queries have filter choices and therefore captions
+ -- most queries do not: form query, collection query, column sub query, filter sub query
+ )
+ )
+ OR article_id IN (SELECT id FROM app.article WHERE module_id = $11)
+ OR client_event_id IN (SELECT id FROM app.client_event WHERE module_id = $12)
+ OR form_id IN (SELECT id FROM app.form WHERE module_id = $13)
+ OR js_function_id IN (SELECT id FROM app.js_function WHERE module_id = $14)
+ OR login_form_id IN (SELECT id FROM app.login_form WHERE module_id = $15)
+ OR menu_tab_id IN (SELECT id FROM app.menu_tab WHERE module_id = $16)
+ OR pg_function_id IN (SELECT id FROM app.pg_function WHERE module_id = $17)
+ OR role_id IN (SELECT id FROM app.role WHERE module_id = $18)
+ OR widget_id IN (SELECT id FROM app.widget WHERE module_id = $19)
+ `, sqlSelect, target), id, id, id, id, id, id, id, id, id, id, id, id, id, id, id, id, id, id, id)
+ }
+
+ if err != nil {
+ return caps, err
+ }
+ defer rows.Close()
+
+ var content string
+ var entity string
+ var entityId uuid.UUID
+ var exists bool
+ var langCode string
+ var captionMap types.CaptionMap
+ var value string
+
+ for rows.Next() {
+ if err := rows.Scan(&entity, &entityId, &content, &langCode, &value); err != nil {
+ return caps, err
+ }
+
+ switch entity {
+ case "article":
+ captionMap, exists = caps.ArticleIdMap[entityId]
+ case "attribute":
+ captionMap, exists = caps.AttributeIdMap[entityId]
+ case "clientEvent":
+ captionMap, exists = caps.ClientEventIdMap[entityId]
+ case "column":
+ captionMap, exists = caps.ColumnIdMap[entityId]
+ case "field":
+ captionMap, exists = caps.FieldIdMap[entityId]
+ case "form":
+ captionMap, exists = caps.FormIdMap[entityId]
+ case "formAction":
+ captionMap, exists = caps.FormActionIdMap[entityId]
+ case "jsFunction":
+ captionMap, exists = caps.JsFunctionIdMap[entityId]
+ case "loginForm":
+ captionMap, exists = caps.LoginFormIdMap[entityId]
+ case "menu":
+ captionMap, exists = caps.MenuIdMap[entityId]
+ case "menuTab":
+ captionMap, exists = caps.MenuTabIdMap[entityId]
+ case "module":
+ captionMap, exists = caps.ModuleIdMap[entityId]
+ case "pgFunction":
+ captionMap, exists = caps.PgFunctionIdMap[entityId]
+ case "queryChoice":
+ captionMap, exists = caps.QueryChoiceIdMap[entityId]
+ case "role":
+ captionMap, exists = caps.RoleIdMap[entityId]
+ case "tab":
+ captionMap, exists = caps.TabIdMap[entityId]
+ case "widget":
+ captionMap, exists = caps.WidgetIdMap[entityId]
+ }
+
+ if !exists {
+ captionMap = caption.GetDefaultContent(entity)
+ }
+ captionMap[content][langCode] = value
+
+ switch entity {
+ case "article":
+ caps.ArticleIdMap[entityId] = captionMap
+ case "attribute":
+ caps.AttributeIdMap[entityId] = captionMap
+ case "clientEvent":
+ caps.ClientEventIdMap[entityId] = captionMap
+ case "column":
+ caps.ColumnIdMap[entityId] = captionMap
+ case "field":
+ caps.FieldIdMap[entityId] = captionMap
+ case "form":
+ caps.FormIdMap[entityId] = captionMap
+ case "formAction":
+ caps.FormActionIdMap[entityId] = captionMap
+ case "jsFunction":
+ caps.JsFunctionIdMap[entityId] = captionMap
+ case "loginForm":
+ caps.LoginFormIdMap[entityId] = captionMap
+ case "menu":
+ caps.MenuIdMap[entityId] = captionMap
+ case "menuTab":
+ caps.MenuTabIdMap[entityId] = captionMap
+ case "module":
+ caps.ModuleIdMap[entityId] = captionMap
+ case "pgFunction":
+ caps.PgFunctionIdMap[entityId] = captionMap
+ case "queryChoice":
+ caps.QueryChoiceIdMap[entityId] = captionMap
+ case "role":
+ caps.RoleIdMap[entityId] = captionMap
+ case "tab":
+ caps.TabIdMap[entityId] = captionMap
+ case "widget":
+ caps.WidgetIdMap[entityId] = captionMap
+ }
+ }
+ return caps, nil
+}
+
+func SetOne_tx(ctx context.Context, tx pgx.Tx, target string, entityId uuid.UUID,
+ content string, languageCode string, value string) error {
+
+ if !slices.Contains(captionMapTargets, target) {
+ return fmt.Errorf("invalid target '%s' for caption map", target)
+ }
+
+ entity, err := caption.GetEntityName(content)
+ if err != nil {
+ return err
+ }
+
+ // empty value, delete
+ if value == "" {
+ _, err := tx.Exec(ctx, fmt.Sprintf(`
+ DELETE FROM %s.caption
+ WHERE %s = $1
+ AND content = $2
+ AND language_code = $3
+ `, target, entity), entityId, content, languageCode)
+
+ return err
+ }
+
+ // insert or update
+ var exists bool
+ if err := tx.QueryRow(ctx, fmt.Sprintf(`
+ SELECT EXISTS (
+ SELECT 1
+ FROM %s.caption
+ WHERE %s = $1
+ AND content = $2
+ AND language_code = $3
+ )
+ `, target, entity), entityId, content, languageCode).Scan(&exists); err != nil {
+ return err
+ }
+
+ if !exists {
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
+ INSERT INTO %s.caption (%s, content, language_code, value)
+ VALUES ($1,$2,$3,$4)
+ `, target, entity), entityId, content, languageCode, value); err != nil {
+ return err
+ }
+ } else {
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
+ UPDATE %s.caption
+ SET value = $1
+ WHERE %s = $2
+ AND content = $3
+ AND language_code = $4
+ `, target, entity), value, entityId, content, languageCode); err != nil {
+ return err
+ }
+ }
+ return nil
+}
diff --git a/config/config.go b/config/config.go
index 2abc0961..af74482e 100644
--- a/config/config.go
+++ b/config/config.go
@@ -1,32 +1,32 @@
package config
import (
+ "context"
"encoding/json"
"math/rand"
"os"
- "r3/db"
"r3/log"
"r3/tools"
"r3/types"
"regexp"
+ "strconv"
"sync"
"github.com/gbrlsnchs/jwt/v3"
"github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
)
var (
- access_mx = &sync.Mutex{}
+ access_mx = &sync.RWMutex{}
// application names
appName string
appNameShort string
- // application version details (version syntax: major.minor.patch.build)
- // only major/minor version updates may effect the database
- appVersion string // full version of this application (1.2.0.1023)
- appVersionCut string // major+minor version of this application (1.2)
- appVersionBuild string // build counter of this application (1023)
+ // application versions
+ appVersion types.Version // r3
+ appVersionClient types.Version // r3 fat client
// configuration file location
filePath string // location of configuration file in JSON format
@@ -36,48 +36,124 @@ var (
File types.FileType
// operation data
- TokenSecret *jwt.HMACSHA
- License types.License = types.License{}
+ hostname string
+ license = types.License{}
+ tokenSecret *jwt.HMACSHA
+
+ // regex
+ rxVersionBuild = regexp.MustCompile(`^\d+\.\d+\.\d+\.`)
+ rxVersionCut = regexp.MustCompile(`\.\d+\.\d+$`)
)
-// returns
-// *full application version (1.2.0.1023)
-// *major+minor application version (1.2)
-// *build number (1023)
-// *database version (1.2), which is kept equal to major+minor app version
-func GetAppVersions() (string, string, string, string) {
- dbVersionCut := GetString("dbVersionCut")
+func GetAppVersion() types.Version {
+ access_mx.RLock()
+ defer access_mx.RUnlock()
+ return appVersion
+}
- access_mx.Lock()
- defer access_mx.Unlock()
- return appVersion, appVersionCut, appVersionBuild, dbVersionCut
+func GetAppVersionClient() types.Version {
+ access_mx.RLock()
+ defer access_mx.RUnlock()
+ return appVersionClient
}
+
func GetAppName() (string, string) {
+ access_mx.RLock()
+ defer access_mx.RUnlock()
return appName, appNameShort
}
func GetConfigFilepath() string {
+ access_mx.RLock()
+ defer access_mx.RUnlock()
return filePath
}
+func GetDbVersionCut() string {
+ return GetString("dbVersionCut")
+}
+func GetHostname() string {
+ access_mx.RLock()
+ defer access_mx.RUnlock()
+ return hostname
+}
+func GetLicense() types.License {
+ access_mx.RLock()
+ defer access_mx.RUnlock()
+ return license
+}
func GetLicenseActive() bool {
- return License.ValidUntil > tools.GetTimeUnix()
+ access_mx.RLock()
+ defer access_mx.RUnlock()
+ return license.ValidUntil > tools.GetTimeUnix()
+}
+func GetLicenseLoginCount(limitedLogins bool) int64 {
+ access_mx.RLock()
+ defer access_mx.RUnlock()
+
+ if limitedLogins {
+ return license.LoginCount * 3
+ }
+ return license.LoginCount
+}
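+// Example: with license.LoginCount = 10 this returns 10 for regular logins
+// and 30 (a tripled allowance) when limitedLogins is true.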
+func GetLicenseUsed() bool {
+ access_mx.RLock()
+ defer access_mx.RUnlock()
+ return license.ValidUntil != 0
+}
+func GetLicenseValidUntil() int64 {
+ access_mx.RLock()
+ defer access_mx.RUnlock()
+ return license.ValidUntil
+}
+func GetTokenSecret() *jwt.HMACSHA {
+ access_mx.RLock()
+ defer access_mx.RUnlock()
+ return tokenSecret
}
// setters
-func SetAppVersion(version string) {
+func SetAppVersion(versionFull string, target string) error {
access_mx.Lock()
defer access_mx.Unlock()
- appVersion = version
- appVersionCut = regexp.MustCompile(`\.\d+\.\d+$`).ReplaceAllString(version, "")
- appVersionBuild = regexp.MustCompile(`^\d+\.\d+\.\d+\.`).ReplaceAllString(version, "")
+ build, err := strconv.Atoi(rxVersionBuild.ReplaceAllString(versionFull, ""))
+ if err != nil {
+ return err
+ }
+
+ if target == "service" {
+ appVersion.Build = build
+ appVersion.Cut = rxVersionCut.ReplaceAllString(versionFull, "")
+ appVersion.Full = versionFull
+ } else if target == "fatClient" {
+ appVersionClient.Build = build
+ appVersionClient.Cut = rxVersionCut.ReplaceAllString(versionFull, "")
+ appVersionClient.Full = versionFull
+ }
+ return nil
}
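// Worked example: for versionFull "1.2.0.1023" and target "service",
// rxVersionCut strips ".0.1023" leaving Cut = "1.2", rxVersionBuild strips the
// leading "1.2.0." leaving Build = 1023, and Full keeps the complete string.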
func SetAppName(name string, nameShort string) {
+ access_mx.Lock()
+ defer access_mx.Unlock()
appName = name
appNameShort = nameShort
}
func SetConfigFilePath(path string) {
+ access_mx.Lock()
+ defer access_mx.Unlock()
filePath = path
}
+func SetHostnameFromOs() error {
+ access_mx.Lock()
+ defer access_mx.Unlock()
+ var err error
+ hostname, err = os.Hostname()
+ return err
+}
+func SetLicense(l types.License) {
+ access_mx.Lock()
+ defer access_mx.Unlock()
+ license = l
+}
func SetLogLevels() {
log.SetLogLevel("api", int(GetUint64("logApi")))
log.SetLogLevel("backup", int(GetUint64("logBackup")))
@@ -93,8 +169,7 @@ func SetLogLevels() {
log.SetLogLevel("transfer", int(GetUint64("logTransfer")))
log.SetLogLevel("websocket", int(GetUint64("logWebsocket")))
}
-
-func SetInstanceIdIfEmpty() error {
+func SetInstanceIdIfEmpty_tx(ctx context.Context, tx pgx.Tx) error {
if GetString("instanceId") != "" {
return nil
}
@@ -103,17 +178,7 @@ func SetInstanceIdIfEmpty() error {
if err != nil {
return err
}
-
- tx, err := db.Pool.Begin(db.Ctx)
- if err != nil {
- return err
- }
- defer tx.Rollback(db.Ctx)
-
- if err := SetString_tx(tx, "instanceId", id.String()); err != nil {
- return err
- }
- return tx.Commit(db.Ctx)
+ return SetString_tx(ctx, tx, "instanceId", id.String())
}
// config file
@@ -158,33 +223,20 @@ func WriteFile() error {
}
// token
-func GetTokenSecret() *jwt.HMACSHA {
- access_mx.Lock()
- defer access_mx.Unlock()
-
- return TokenSecret
-}
-func ProcessTokenSecret() error {
+func ProcessTokenSecret_tx(ctx context.Context, tx pgx.Tx) error {
secret := GetString("tokenSecret")
if secret == "" {
- tx, err := db.Pool.Begin(db.Ctx)
- if err != nil {
- return err
- }
-
min, max := 32, 48
secret = tools.RandStringRunes(rand.Intn(max-min+1) + min)
- if err := SetString_tx(tx, "tokenSecret", secret); err != nil {
- tx.Rollback(db.Ctx)
+ if err := SetString_tx(ctx, tx, "tokenSecret", secret); err != nil {
return err
}
- tx.Commit(db.Ctx)
}
access_mx.Lock()
defer access_mx.Unlock()
- TokenSecret = jwt.NewHS256([]byte(secret))
+ tokenSecret = jwt.NewHS256([]byte(secret))
return nil
}
diff --git a/activation/activation.go b/config/config_activation.go
similarity index 87%
rename from activation/activation.go
rename to config/config_activation.go
index e46fe9ac..6f8fd5ad 100644
--- a/activation/activation.go
+++ b/config/config_activation.go
@@ -1,4 +1,4 @@
-package activation
+package config
import (
"crypto"
@@ -9,11 +9,15 @@ import (
"encoding/json"
"encoding/pem"
"errors"
- "r3/config"
+ "fmt"
"r3/log"
"r3/types"
+ "slices"
)
+// license revocations
+var revocations = []string{"LI00334231"}
+
// public key of OC2020_license_key, created 2020-10-05
var publicKey = `-----BEGIN RSA PUBLIC KEY-----
MIIICgKCCAEA0uKHJsK1xrhIQq7JRStnkWTjgn8qRZ0tgJbDIOKiteJlInfsXNkE
@@ -61,15 +65,18 @@ vCPF8QXc4V/wgJZtn6vdSXGR5W0dByItU5TLOlk6kLX4Aj6G8T+J//7NX5InD5Q/
7YPTU7NMcyC54h7EbTSPO8dQu0mQuo/dHEONCFaVEpaKVGYMY3Au8tUCAwEAAQ==
-----END RSA PUBLIC KEY-----`
-func SetLicense() {
- if config.GetString("licenseFile") == "" {
+func ActivateLicense() {
+ if GetString("licenseFile") == "" {
log.Info("server", "skipping activation check, no license installed")
+
+ // set empty in case license was removed
+ SetLicense(types.License{})
return
}
var licFile types.LicenseFile
- if err := json.Unmarshal([]byte(config.GetString("licenseFile")), &licFile); err != nil {
+ if err := json.Unmarshal([]byte(GetString("licenseFile")), &licFile); err != nil {
log.Error("server", "could not unmarshal license from config", err)
return
}
@@ -105,7 +112,13 @@ func SetLicense() {
return
}
+ // check if license has been revoked
+ if slices.Contains(revocations, licFile.License.LicenseId) {
+ log.Error("server", "failed to enable license", fmt.Errorf("license ID '%s' has been revoked", licFile.License.LicenseId))
+ return
+ }
+
// set license
log.Info("server", "setting license")
- config.License = licFile.License
+ SetLicense(licFile.License)
}
diff --git a/config/config_http.go b/config/config_http.go
new file mode 100644
index 00000000..d1fa12d2
--- /dev/null
+++ b/config/config_http.go
@@ -0,0 +1,36 @@
+package config
+
+import (
+ "crypto/tls"
+ "net/http"
+ "net/url"
+ "time"
+)
+
+var (
+ timeoutHandshake = time.Duration(5)
+)
+
+func GetHttpClient(skipVerify bool, timeoutHttp int64) (http.Client, error) {
+
+ tlsConfig := tls.Config{
+ InsecureSkipVerify: skipVerify,
+ PreferServerCipherSuites: true,
+ }
+ transport := &http.Transport{
+ TLSHandshakeTimeout: time.Second * time.Duration(timeoutHandshake),
+ TLSClientConfig: &tlsConfig,
+ }
+
+ if GetString("proxyUrl") != "" {
+ proxyUrl, err := url.Parse(GetString("proxyUrl"))
+ if err != nil {
+ return http.Client{}, err
+ }
+ transport.Proxy = http.ProxyURL(proxyUrl)
+ }
+ return http.Client{
+ Timeout: time.Second * time.Duration(timeoutHttp),
+ Transport: transport,
+ }, nil
+}
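A minimal usage sketch for the new GetHttpClient helper (the caller name and 30s timeout are illustrative assumptions; "updateCheckUrl" is an existing config key, and imports of "io" and "r3/config" are implied):

	// checkForUpdates fetches the configured update-check URL through the
	// optional proxy; TLS verification stays enabled, request timeout is 30s.
	func checkForUpdates() (string, error) {
		client, err := config.GetHttpClient(false, 30)
		if err != nil {
			return "", err
		}
		resp, err := client.Get(config.GetString("updateCheckUrl"))
		if err != nil {
			return "", err
		}
		defer resp.Body.Close()

		body, err := io.ReadAll(resp.Body)
		return string(body), err
	}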
diff --git a/config/config_store.go b/config/config_store.go
index 6deecdec..abf2e470 100644
--- a/config/config_store.go
+++ b/config/config_store.go
@@ -2,6 +2,7 @@ package config
import (
"context"
+ "encoding/json"
"fmt"
"r3/db"
"r3/log"
@@ -12,14 +13,17 @@ import (
var (
// configuration store (with values from database)
- storeUint64 map[string]uint64 = make(map[string]uint64)
- storeString map[string]string = make(map[string]string)
-
- NamesString = []string{"appName", "appNameShort", "backupDir", "companyColorHeader",
- "companyColorLogin", "companyLogo", "companyLogoUrl", "companyName",
- "companyWelcome", "dbVersionCut", "defaultLanguageCode", "exportPrivateKey",
- "instanceId", "licenseFile", "publicHostName", "repoPass", "repoPublicKeys",
- "repoUrl", "repoUser", "tokenSecret", "updateCheckUrl", "updateCheckVersion"}
+ storeString = make(map[string]string)
+ storeUint64 = make(map[string]uint64)
+ storeUint64Slice = make(map[string][]uint64)
+
+ NamesString = []string{"adminMails", "appName", "appNameShort", "backupDir",
+ "companyColorHeader", "companyColorLogin", "companyLoginImage",
+ "companyLogo", "companyLogoUrl", "companyName", "companyWelcome", "css",
+ "dbVersionCut", "exportPrivateKey", "iconPwa1", "iconPwa2",
+ "instanceId", "licenseFile", "publicHostName", "proxyUrl", "repoPass",
+ "repoPublicKeys", "repoUrl", "repoUser", "systemMsgText", "tokenSecret",
+ "updateCheckUrl", "updateCheckVersion"}
NamesUint64 = []string{"backupDaily", "backupMonthly", "backupWeekly",
"backupCountDaily", "backupCountMonthly", "backupCountWeekly",
@@ -30,49 +34,64 @@ var (
"icsDaysPre", "icsDownload", "imagerThumbWidth", "logApi", "logBackup",
"logCache", "logCluster", "logCsv", "logImager", "logLdap", "logMail",
"logModule", "logServer", "logScheduler", "logTransfer", "logWebsocket",
- "logsKeepDays", "productionMode", "pwForceDigit", "pwForceLower",
- "pwForceSpecial", "pwForceUpper", "pwLengthMin", "schemaTimestamp",
- "repoChecked", "repoFeedback", "repoSkipVerify", "tokenExpiryHours"}
+ "logsKeepDays", "mailTrafficKeepDays", "productionMode", "pwForceDigit",
+ "pwForceLower", "pwForceSpecial", "pwForceUpper", "pwLengthMin",
+ "repoChecked", "repoFeedback", "repoSkipVerify", "systemMsgDate0",
+ "systemMsgDate1", "systemMsgMaintenance", "tokenExpiryHours",
+ "tokenKeepEnable"}
+
+ NamesUint64Slice = []string{"loginBackgrounds"}
)
// store setters
-func SetString_tx(tx pgx.Tx, name string, value string) error {
+func SetString_tx(ctx context.Context, tx pgx.Tx, name string, value string) error {
access_mx.Lock()
defer access_mx.Unlock()
if _, exists := storeString[name]; !exists {
return fmt.Errorf("configuration string value '%s' does not exist", name)
}
-
- if _, err := tx.Exec(context.Background(), `
- UPDATE instance.config SET value = $1 WHERE name = $2
- `, value, name); err != nil {
+ if err := writeToDb_tx(ctx, tx, name, value); err != nil {
return err
}
storeString[name] = value
return nil
}
-func SetUint64_tx(tx pgx.Tx, name string, value uint64) error {
+func SetUint64_tx(ctx context.Context, tx pgx.Tx, name string, value uint64) error {
access_mx.Lock()
defer access_mx.Unlock()
if _, exists := storeUint64[name]; !exists {
return fmt.Errorf("configuration uint64 value '%s' does not exist", name)
}
-
- if _, err := tx.Exec(context.Background(), `
- UPDATE instance.config SET value = $1 WHERE name = $2
- `, fmt.Sprintf("%d", value), name); err != nil {
+ if err := writeToDb_tx(ctx, tx, name, fmt.Sprintf("%d", value)); err != nil {
return err
}
storeUint64[name] = value
return nil
}
+func SetUint64Slice_tx(ctx context.Context, tx pgx.Tx, name string, value []uint64) error {
+ access_mx.Lock()
+ defer access_mx.Unlock()
+
+ if _, exists := storeUint64Slice[name]; !exists {
+ return fmt.Errorf("configuration uint64 slice value '%s' does not exist", name)
+ }
+ vJson, err := json.Marshal(value)
+ if err != nil {
+ return err
+ }
+ if err := writeToDb_tx(ctx, tx, name, string(vJson)); err != nil {
+ return err
+ }
+ storeUint64Slice[name] = value
+ return nil
+}
// store getters
func GetString(name string) string {
- access_mx.Lock()
- defer access_mx.Unlock()
+ access_mx.RLock()
+ defer access_mx.RUnlock()
if _, exists := storeString[name]; !exists {
log.Error("server", "configuration store get error",
@@ -83,8 +102,8 @@ func GetString(name string) string {
return storeString[name]
}
func GetUint64(name string) uint64 {
- access_mx.Lock()
- defer access_mx.Unlock()
+ access_mx.RLock()
+ defer access_mx.RUnlock()
if _, exists := storeUint64[name]; !exists {
log.Error("server", "configuration store get error",
@@ -94,20 +113,50 @@ func GetUint64(name string) uint64 {
}
return storeUint64[name]
}
+func GetUint64Slice(name string) []uint64 {
+ access_mx.RLock()
+ defer access_mx.RUnlock()
+
+ if _, exists := storeUint64Slice[name]; !exists {
+ log.Error("server", "configuration store get error",
+ fmt.Errorf("uint64 slice value '%s' does not exist", name))
+
+ return make([]uint64, 0)
+ }
+ return storeUint64Slice[name]
+}
func LoadFromDb() error {
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutSysTask)
+ defer ctxCanc()
+
+ tx, err := db.Pool.Begin(ctx)
+ if err != nil {
+ return err
+ }
+ defer tx.Rollback(ctx)
+
+ if err := LoadFromDb_tx(ctx, tx); err != nil {
+ return err
+ }
+ return tx.Commit(ctx)
+}
+func LoadFromDb_tx(ctx context.Context, tx pgx.Tx) error {
access_mx.Lock()
defer access_mx.Unlock()
// reset value stores
+ for _, name := range NamesString {
+ storeString[name] = ""
+ }
for _, name := range NamesUint64 {
storeUint64[name] = 0
}
- for _, name := range NamesString {
- storeString[name] = ""
+ for _, name := range NamesUint64Slice {
+ storeUint64Slice[name] = make([]uint64, 0)
}
- rows, err := db.Pool.Query(db.Ctx, "SELECT name, value FROM instance.config")
+ rows, err := tx.Query(ctx, "SELECT name, value FROM instance.config")
if err != nil {
return err
}
@@ -128,7 +177,21 @@ func LoadFromDb() error {
if err != nil {
return err
}
+ } else if _, exists := storeUint64Slice[name]; exists {
+ var v []uint64
+ if err := json.Unmarshal([]byte(value), &v); err != nil {
+ return err
+ }
+ storeUint64Slice[name] = v
}
}
return nil
}
+
+func writeToDb_tx(ctx context.Context, tx pgx.Tx, name string, value string) error {
+ _, err := tx.Exec(ctx, `
+ UPDATE instance.config SET value = $1 WHERE name = $2
+ `, value, name)
+
+ return err
+}
diff --git a/config/module_meta/module_meta.go b/config/module_meta/module_meta.go
new file mode 100644
index 00000000..6d98f1c7
--- /dev/null
+++ b/config/module_meta/module_meta.go
@@ -0,0 +1,111 @@
+package module_meta
+
+import (
+ "context"
+ "r3/types"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+)
+
+func Create_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, hidden bool, owner bool, position int) error {
+
+ // the module hash is updated after an import transfer or when the first version of a new module is created
+ _, err := tx.Exec(ctx, `
+ INSERT INTO instance.module_meta (module_id, hidden, owner, position, date_change, hash)
+ VALUES ($1,$2,$3,$4,EXTRACT(EPOCH FROM NOW()),'00000000000000000000000000000000000000000000')
+ `, moduleId, hidden, owner, position)
+ return err
+}
+
+func Get_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID) (types.ModuleMeta, error) {
+ var m = types.ModuleMeta{
+ Id: moduleId,
+ }
+
+ err := tx.QueryRow(ctx, `
+ SELECT hidden, owner, position, date_change, languages_custom
+ FROM instance.module_meta
+ WHERE module_id = $1
+ `, moduleId).Scan(&m.Hidden, &m.Owner, &m.Position, &m.DateChange, &m.LanguagesCustom)
+
+ if m.LanguagesCustom == nil {
+ m.LanguagesCustom = make([]string, 0)
+ }
+ return m, err
+}
+func GetIdMap_tx(ctx context.Context, tx pgx.Tx) (map[uuid.UUID]types.ModuleMeta, error) {
+ moduleIdMap := make(map[uuid.UUID]types.ModuleMeta)
+
+ rows, err := tx.Query(ctx, `
+ SELECT module_id, hidden, owner, position, date_change, languages_custom
+ FROM instance.module_meta
+ `)
+ if err != nil {
+ return moduleIdMap, err
+ }
+ defer rows.Close()
+
+ for rows.Next() {
+ var m types.ModuleMeta
+ if err := rows.Scan(&m.Id, &m.Hidden, &m.Owner, &m.Position, &m.DateChange, &m.LanguagesCustom); err != nil {
+ return moduleIdMap, err
+ }
+ if m.LanguagesCustom == nil {
+ m.LanguagesCustom = make([]string, 0)
+ }
+ moduleIdMap[m.Id] = m
+ }
+ return moduleIdMap, nil
+}
+func GetHash_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID) (string, error) {
+ var hash string
+ err := tx.QueryRow(ctx, `
+ SELECT hash
+ FROM instance.module_meta
+ WHERE module_id = $1
+ `, moduleId).Scan(&hash)
+ return hash, err
+}
+func GetOwner_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID) (bool, error) {
+ var isOwner bool
+ err := tx.QueryRow(ctx, `
+ SELECT owner
+ FROM instance.module_meta
+ WHERE module_id = $1
+ `, moduleId).Scan(&isOwner)
+ return isOwner, err
+}
+
+func SetDateChange_tx(ctx context.Context, tx pgx.Tx, moduleIds []uuid.UUID, date int64) error {
+ _, err := tx.Exec(ctx, `
+ UPDATE instance.module_meta
+ SET date_change = $2
+ WHERE module_id = ANY($1)
+ `, moduleIds, date)
+ return err
+}
+func SetHash_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, hash string) error {
+ _, err := tx.Exec(ctx, `
+ UPDATE instance.module_meta
+ SET hash = $1
+ WHERE module_id = $2
+ `, hash, moduleId)
+ return err
+}
+func SetLanguagesCustom_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, languages []string) error {
+ _, err := tx.Exec(ctx, `
+ UPDATE instance.module_meta
+ SET languages_custom = $1
+ WHERE module_id = $2
+ `, languages, moduleId)
+ return err
+}
+func SetOptions_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, hidden bool, owner bool, position int) error {
+ _, err := tx.Exec(ctx, `
+ UPDATE instance.module_meta
+ SET hidden = $1, owner = $2, position = $3
+ WHERE module_id = $4
+ `, hidden, owner, position, moduleId)
+ return err
+}
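A rough usage sketch for the new module_meta package (the calling function, its parameters and the log message are hypothetical; imports of "context", "fmt", "r3/config/module_meta", "r3/log", "github.com/gofrs/uuid" and "github.com/jackc/pgx/v5" are implied):

	// registerModuleMeta creates the meta row for a freshly imported module
	// (visible, not owned, at position 0) and reads it back for logging.
	func registerModuleMeta(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID) error {
		if err := module_meta.Create_tx(ctx, tx, moduleId, false, false, 0); err != nil {
			return err
		}
		meta, err := module_meta.Get_tx(ctx, tx, moduleId)
		if err != nil {
			return err
		}
		log.Info("server", fmt.Sprintf("module %s registered at position %d", meta.Id, meta.Position))
		return nil
	}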
diff --git a/config_dedicated.json b/config_dedicated.json
index 76c05953..f6af3dc5 100644
--- a/config_dedicated.json
+++ b/config_dedicated.json
@@ -10,11 +10,14 @@
"user": "app",
"pass": "app",
"ssl": false,
- "sslSkipVerify": false
+ "sslSkipVerify": false,
+ "connsMax": 0,
+ "connsMin": 0
},
+ "mirror": false,
"paths": {
"certificates": "data/certificates/",
- "embeddedDbBin": "pgsql/bin/",
+ "embeddedDbBin": "pgsql16/bin/",
"embeddedDbData": "data/database/",
"files": "data/files/",
"temp": "data/temp/",
@@ -25,6 +28,7 @@
"cert": "cert.crt",
"key": "cert.key",
"listen": "0.0.0.0",
- "port": 443
+ "port": 443,
+ "tlsMinVersion":"1.2"
}
}
\ No newline at end of file
diff --git a/config_portable.json b/config_portable.json
index bd82e8f9..f631ffa1 100644
--- a/config_portable.json
+++ b/config_portable.json
@@ -10,11 +10,14 @@
"user": "app",
"pass": "app",
"ssl": false,
- "sslSkipVerify": false
+ "sslSkipVerify": false,
+ "connsMax": 0,
+ "connsMin": 0
},
+ "mirror": false,
"paths": {
"certificates": "data/certificates/",
- "embeddedDbBin": "pgsql/bin/",
+ "embeddedDbBin": "pgsql16/bin/",
"embeddedDbData": "data/database/",
"files": "data/files/",
"temp": "data/temp/",
@@ -25,6 +28,7 @@
"cert": "cert.crt",
"key": "cert.key",
"listen": "0.0.0.0",
- "port": 443
+ "port": 0,
+ "tlsMinVersion":"1.2"
}
}
\ No newline at end of file
diff --git a/config_template.json b/config_template.json
index f1a1d5a2..192b1a43 100644
--- a/config_template.json
+++ b/config_template.json
@@ -10,11 +10,14 @@
"user": "app",
"pass": "app",
"ssl": false,
- "sslSkipVerify": false
+ "sslSkipVerify": false,
+ "connsMax": 0,
+ "connsMin": 0
},
+ "mirror": false,
"paths": {
"certificates": "data/certificates/",
- "embeddedDbBin": "pgsql/bin/",
+ "embeddedDbBin": "pgsql16/bin/",
"embeddedDbData": "data/database/",
"files": "data/files/",
"temp": "data/temp/",
@@ -25,6 +28,7 @@
"cert": "cert.crt",
"key": "cert.key",
"listen": "0.0.0.0",
- "port": 443
+ "port": 443,
+ "tlsMinVersion":"1.2"
}
}
\ No newline at end of file
diff --git a/data/data.go b/data/data.go
index 4ebab457..7f0cf4a0 100644
--- a/data/data.go
+++ b/data/data.go
@@ -6,8 +6,8 @@ import (
"r3/cache"
"r3/handler"
"r3/schema"
- "r3/tools"
"r3/types"
+ "slices"
"strings"
"github.com/gofrs/uuid"
@@ -42,7 +42,7 @@ func authorizedAttribute(loginId int64, attributeId uuid.UUID, requestedAccess i
}
// check whether access to relation is authorized
-// cases: creating or deleting relation tupels
+// cases: creating or deleting relation tuples
func authorizedRelation(loginId int64, relationId uuid.UUID, requestedAccess int) bool {
access, err := cache.GetAccessById(loginId)
@@ -76,7 +76,7 @@ func getPolicyFunctionNames(loginId int64, policies []types.RelationPolicy, acti
for _, p := range policies {
// ignore if login does not have role
- if !tools.UuidInSlice(p.RoleId, access.RoleIds) {
+ if !slices.Contains(access.RoleIds, p.RoleId) {
continue
}
diff --git a/data/data_del.go b/data/data_del.go
index f291e5a4..2b9247f9 100644
--- a/data/data_del.go
+++ b/data/data_del.go
@@ -31,7 +31,7 @@ func Del_tx(ctx context.Context, tx pgx.Tx, relationId uuid.UUID,
// check for protected preset record
for _, preset := range rel.Presets {
if preset.Protected && cache.GetPresetRecordId(preset.Id) == recordId {
- return handler.CreateErrCode("APP", handler.ErrCodeAppPresetProtected)
+ return handler.CreateErrCode(handler.ErrContextApp, handler.ErrCodeAppPresetProtected)
}
}
diff --git a/data/data_enc/data_enc.go b/data/data_enc/data_enc.go
index 80bf105d..54dadd55 100644
--- a/data/data_enc/data_enc.go
+++ b/data/data_enc/data_enc.go
@@ -41,9 +41,12 @@ func SetKeys_tx(ctx context.Context, tx pgx.Tx, relationId uuid.UUID,
return nil
}
+ // ignore existing keys; we cannot guarantee that only non-existing keys are inserted
+ // primary key is record_id + login_id
if _, err := tx.Prepare(ctx, "store_keys", fmt.Sprintf(`
INSERT INTO instance_e2ee."%s" (record_id, login_id, key_enc)
VALUES ($1,$2,$3)
+ ON CONFLICT (record_id,login_id) DO NOTHING
`, schema.GetEncKeyTableName(relationId))); err != nil {
return err
}
diff --git a/data/data_file.go b/data/data_file.go
index 6239d422..2ddafff6 100644
--- a/data/data_file.go
+++ b/data/data_file.go
@@ -10,9 +10,9 @@ import (
"path/filepath"
"r3/cache"
"r3/config"
+ "r3/data/data_image"
"r3/db"
"r3/handler"
- "r3/image"
"r3/schema"
"r3/tools"
"r3/types"
@@ -62,10 +62,7 @@ func GetFilePathVersion(fileId uuid.UUID, version int64) string {
}
// attempts to store file upload
-func SetFile(loginId int64, attributeId uuid.UUID, fileId uuid.UUID,
- part *multipart.Part, isNewFile bool) error {
-
- var err error
+func SetFile(ctx context.Context, loginId int64, attributeId uuid.UUID, fileId uuid.UUID, part *multipart.Part, isNewFile bool) error {
cache.Schema_mx.RLock()
attribute, exists := cache.AttributeIdMap[attributeId]
@@ -84,7 +81,7 @@ func SetFile(loginId int64, attributeId uuid.UUID, fileId uuid.UUID,
var recordIds []int64
var version int64 = 0
if !isNewFile {
- if err := db.Pool.QueryRow(db.Ctx, fmt.Sprintf(`
+ if err := db.Pool.QueryRow(ctx, fmt.Sprintf(`
SELECT v.version+1, (
SELECT ARRAY_AGG(r.record_id)
FROM instance_file."%s" AS r
@@ -138,34 +135,33 @@ func SetFile(loginId int64, attributeId uuid.UUID, fileId uuid.UUID,
}
// create/update thumbnail - failure should not block progress
- image.CreateThumbnail(fileId, filepath.Ext(part.FileName()), filePath,
+ data_image.CreateThumbnail(fileId, filepath.Ext(part.FileName()), filePath,
GetFilePathThumb(fileId), false)
// store file meta data in database
- tx, err := db.Pool.Begin(db.Ctx)
+ tx, err := db.Pool.Begin(ctx)
if err != nil {
return err
}
+ defer tx.Rollback(ctx)
- if err := FileApplyVersion_tx(db.Ctx, tx, isNewFile, attributeId,
+ if err := FileApplyVersion_tx(ctx, tx, isNewFile, attributeId,
attribute.RelationId, fileId, hash, part.FileName(),
fileSizeKb, version, recordIds, loginId); err != nil {
- tx.Rollback(db.Ctx)
return err
}
- return tx.Commit(db.Ctx)
+ return tx.Commit(ctx)
}
// stores database changes for uploaded/updated files
-func FileApplyVersion_tx(ctx context.Context, tx pgx.Tx, isNewFile bool,
- attributeId uuid.UUID, relationId uuid.UUID, fileId uuid.UUID, fileHash string,
- fileName string, fileSizeKb int64, fileVersion int64, recordIds []int64,
- loginId int64) error {
+func FileApplyVersion_tx(ctx context.Context, tx pgx.Tx, isNewFile bool, attributeId uuid.UUID,
+ relationId uuid.UUID, fileId uuid.UUID, fileHash string, fileName string,
+ fileSizeKb int64, fileVersion int64, recordIds []int64, loginId int64) error {
if isNewFile {
// store file reference
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO instance.file (id, ref_counter) VALUES ($1,0)
`, fileId); err != nil {
return err
@@ -177,7 +173,7 @@ func FileApplyVersion_tx(ctx context.Context, tx pgx.Tx, isNewFile bool,
Int32: int32(loginId),
Valid: loginId != -1,
}
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO instance.file_version (
file_id,version,login_id,hash,size_kb,date_change)
VALUES ($1,$2,$3,$4,$5,$6)
@@ -204,12 +200,12 @@ func FileApplyVersion_tx(ctx context.Context, tx pgx.Tx, isNewFile bool,
}
logAttributes := []types.DataSetAttribute{
- types.DataSetAttribute{
+ {
AttributeId: attributeId,
AttributeIdNm: pgtype.UUID{},
Value: types.DataSetFileChanges{
FileIdMapChange: map[uuid.UUID]types.DataSetFileChange{
- fileId: types.DataSetFileChange{
+ fileId: {
Action: "update",
Name: fileName,
Version: fileVersion,
@@ -222,7 +218,7 @@ func FileApplyVersion_tx(ctx context.Context, tx pgx.Tx, isNewFile bool,
logValuesOld := []interface{}{nil}
for _, recordId := range recordIds {
- if err := setLog_tx(db.Ctx, tx, relationId, logAttributes,
+ if err := setLog_tx(ctx, tx, relationId, logAttributes,
logAttributeFileIndexes, false, logValuesOld, recordId,
loginId); err != nil {
@@ -362,8 +358,11 @@ func FilesSetDeletedForRecord_tx(ctx context.Context, tx pgx.Tx,
}
func FileGetLatestVersion(fileId uuid.UUID) (int64, error) {
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutSysTask)
+ defer ctxCanc()
+
var version int64
- err := db.Pool.QueryRow(db.Ctx, `
+ err := db.Pool.QueryRow(ctx, `
SELECT MAX(version)
FROM instance.file_version
WHERE file_id = $1
diff --git a/data/data_file_copy.go b/data/data_file_copy.go
index a123e764..4f2b5f6f 100644
--- a/data/data_file_copy.go
+++ b/data/data_file_copy.go
@@ -1,16 +1,17 @@
package data
import (
+ "context"
"fmt"
- "r3/db"
"r3/schema"
"r3/tools"
"r3/types"
"github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
)
-func CopyFiles(loginId int64, srcAttributeId uuid.UUID, srcFileIds []uuid.UUID,
+func CopyFiles_tx(ctx context.Context, tx pgx.Tx, loginId int64, srcAttributeId uuid.UUID, srcFileIds []uuid.UUID,
srcRecordId int64, dstAttributeId uuid.UUID) ([]types.DataGetValueFile, error) {
files := make([]types.DataGetValueFile, 0)
@@ -23,7 +24,7 @@ func CopyFiles(loginId int64, srcAttributeId uuid.UUID, srcFileIds []uuid.UUID,
return files, err
}
- rows, err := db.Pool.Query(db.Ctx, fmt.Sprintf(`
+ rows, err := tx.Query(ctx, fmt.Sprintf(`
SELECT v.file_id, r.name, v.version, v.hash, v.size_kb, v.date_change
FROM instance.file_version AS v
JOIN instance_file."%s" AS r
@@ -82,26 +83,7 @@ func CopyFiles(loginId int64, srcAttributeId uuid.UUID, srcFileIds []uuid.UUID,
// insert every successfully created file immediately
// (so the reference is available for cleanup in case of issues)
- tx, err := db.Pool.Begin(db.Ctx)
- if err != nil {
- return files, err
- }
-
- if _, err := tx.Exec(db.Ctx, `
- INSERT INTO instance.file (id, ref_counter) VALUES ($1,0)
- `, idNew); err != nil {
- tx.Rollback(db.Ctx)
- return files, err
- }
- if _, err := tx.Exec(db.Ctx, `
- INSERT INTO instance.file_version (
- file_id, version, login_id, hash, size_kb, date_change)
- VALUES ($1,$2,$3,$4,$5,$6)
- `, idNew, 0, loginId, f.Hash, f.Size, f.Changed); err != nil {
- tx.Rollback(db.Ctx)
- return files, err
- }
- if err := tx.Commit(db.Ctx); err != nil {
+ if err := copyFilesRef(ctx, tx, idNew, loginId, f); err != nil {
return files, err
}
@@ -111,3 +93,20 @@ func CopyFiles(loginId int64, srcAttributeId uuid.UUID, srcFileIds []uuid.UUID,
}
return files, nil
}
+
+func copyFilesRef(ctx context.Context, tx pgx.Tx, idNew uuid.UUID, loginId int64, f types.DataGetValueFile) error {
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO instance.file (id, ref_counter) VALUES ($1,0)
+ `, idNew); err != nil {
+ return err
+ }
+
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO instance.file_version (
+ file_id, version, login_id, hash, size_kb, date_change)
+ VALUES ($1,$2,$3,$4,$5,$6)
+ `, idNew, 0, loginId, f.Hash, f.Size, f.Changed); err != nil {
+ return err
+ }
+ return nil
+}
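
Note on the CopyFiles_tx change above: the per-file INSERTs now run on the caller's transaction instead of opening and committing their own, so all copied file references succeed or roll back together. A minimal sketch of how a caller might drive it; only the CopyFiles_tx signature is taken from the change above, the surrounding wiring and IDs are illustrative:

package example

import (
	"context"

	"r3/data"
	"r3/db"

	"github.com/gofrs/uuid"
)

// copyFilesExample is an illustrative caller: one transaction wraps all copies
func copyFilesExample(ctx context.Context, loginId int64, srcAtr, dstAtr uuid.UUID,
	fileIds []uuid.UUID, recordId int64) error {

	tx, err := db.Pool.Begin(ctx)
	if err != nil {
		return err
	}
	defer tx.Rollback(ctx) // harmless after a successful commit

	// every file reference INSERT inside CopyFiles_tx now shares this transaction
	if _, err := data.CopyFiles_tx(ctx, tx, loginId, srcAtr, fileIds, recordId, dstAtr); err != nil {
		return err
	}
	return tx.Commit(ctx)
}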
diff --git a/data/data_get.go b/data/data_get.go
index a0cda2ce..2e306d23 100644
--- a/data/data_get.go
+++ b/data/data_get.go
@@ -9,15 +9,14 @@ import (
"r3/data/data_sql"
"r3/handler"
"r3/schema"
- "r3/tools"
"r3/types"
"regexp"
+ "slices"
"strconv"
"strings"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
- "github.com/jackc/pgx/v5/pgtype"
)
var regexRelId = regexp.MustCompile(`^\_r(\d+)id`) // finds: _r3id
@@ -36,12 +35,9 @@ func Get_tx(ctx context.Context, tx pgx.Tx, data types.DataGet, loginId int64,
results := make([]types.DataGetResult, 0) // data GET results
queryArgs := make([]interface{}, 0) // SQL arguments for data query
queryCount := "" // SQL query to retrieve a total count
- queryCountArgs := make([]interface{}, 0) // SQL query arguments for count (potentially less, no expressions besides COUNT)
// prepare SQL query for data GET request
- *query, queryCount, err = prepareQuery(data, indexRelationIds,
- &queryArgs, &queryCountArgs, loginId, 0)
-
+ *query, queryCount, err = prepareQuery(data, indexRelationIds, &queryArgs, loginId, 0)
if err != nil {
return results, 0, err
}
@@ -59,8 +55,8 @@ func Get_tx(ctx context.Context, tx pgx.Tx, data types.DataGet, loginId int64,
return results, 0, err
}
- indexRecordIds := make(map[int]interface{}) // ID for each relation tupel by index
- indexRecordEncKeys := make(map[int]string) // encrypted key for each relation tupel by index
+ indexRecordIds := make(map[int]interface{}) // ID for each relation tuple by index
+ indexRecordEncKeys := make(map[int]string) // encrypted key for each relation tuple by index
values := make([]interface{}, 0) // final values for selected attributes
// collect values for expressions
@@ -68,7 +64,7 @@ func Get_tx(ctx context.Context, tx pgx.Tx, data types.DataGet, loginId int64,
values = append(values, valuesAll[i])
}
- // collect relation tupel IDs
+ // collect relation tuple IDs
// relation ID columns start after expressions
for i, j := len(data.Expressions), len(columns); i < j; i++ {
@@ -176,15 +172,15 @@ func Get_tx(ctx context.Context, tx pgx.Tx, data types.DataGet, loginId int64,
}
// deny DEL if record ID is in blacklist or not in whitelist (if used)
- if tools.Int64InSlice(recordId, indexMapDelBlacklist[index]) ||
- (indexMapDelWhitelistUsed[index] && !tools.Int64InSlice(recordId, indexMapDelWhitelist[index])) {
+ if slices.Contains(indexMapDelBlacklist[index], recordId) ||
+ (indexMapDelWhitelistUsed[index] && !slices.Contains(indexMapDelWhitelist[index], recordId)) {
results[i].IndexesPermNoDel = append(results[i].IndexesPermNoDel, index)
}
// deny SET if record ID is in blacklist or not in whitelist (if used)
- if tools.Int64InSlice(recordId, indexMapSetBlacklist[index]) ||
- (indexMapSetWhitelistUsed[index] && !tools.Int64InSlice(recordId, indexMapSetWhitelist[index])) {
+ if slices.Contains(indexMapSetBlacklist[index], recordId) ||
+ (indexMapSetWhitelistUsed[index] && !slices.Contains(indexMapSetWhitelist[index], recordId)) {
results[i].IndexesPermNoSet = append(results[i].IndexesPermNoSet, index)
}
@@ -208,7 +204,7 @@ func Get_tx(ctx context.Context, tx pgx.Tx, data types.DataGet, loginId int64,
return results, 0, handler.ErrSchemaUnknownAttribute(expr.AttributeId.Bytes)
}
- if !atr.Encrypted || tools.IntInSlice(expr.Index, relationIndexesEnc) {
+ if !atr.Encrypted || slices.Contains(relationIndexesEnc, expr.Index) {
continue
}
relationIndexesEnc = append(relationIndexesEnc, expr.Index)
@@ -228,8 +224,7 @@ func Get_tx(ctx context.Context, tx pgx.Tx, data types.DataGet, loginId int64,
case int64:
recordIds = append(recordIds, v)
default:
- return results, 0, handler.CreateErrCode("SEC",
- handler.ErrCodeSecDataKeysNotAvailable)
+ return results, 0, handler.CreateErrCode(handler.ErrContextSec, handler.ErrCodeSecDataKeysNotAvailable)
}
}
}
@@ -242,8 +237,7 @@ func Get_tx(ctx context.Context, tx pgx.Tx, data types.DataGet, loginId int64,
}
if len(encKeys) != len(recordIds) {
- return results, 0, handler.CreateErrCode("SEC",
- handler.ErrCodeSecDataKeysNotAvailable)
+ return results, 0, handler.CreateErrCode(handler.ErrContextSec, handler.ErrCodeSecDataKeysNotAvailable)
}
// assign record keys in order
@@ -261,7 +255,7 @@ func Get_tx(ctx context.Context, tx pgx.Tx, data types.DataGet, loginId int64,
if data.Limit != 0 && (count >= data.Limit || data.Offset != 0) {
// defined limit has been reached or offset was used, get total count
- if err := tx.QueryRow(ctx, queryCount, queryCountArgs...).Scan(&count); err != nil {
+ if err := tx.QueryRow(ctx, queryCount, queryArgs...).Scan(&count); err != nil {
return results, 0, err
}
}
@@ -272,8 +266,7 @@ func Get_tx(ctx context.Context, tx pgx.Tx, data types.DataGet, loginId int64,
// also used for sub queries, a nesting level is included for separation (0 = main query)
// returns data + count SQL query strings
func prepareQuery(data types.DataGet, indexRelationIds map[int]uuid.UUID,
- queryArgs *[]interface{}, queryCountArgs *[]interface{}, loginId int64,
- nestingLevel int) (string, string, error) {
+ queryArgs *[]interface{}, loginId int64, nestingLevel int) (string, string, error) {
// check for authorized access, READ(1) for GET
for _, expr := range data.Expressions {
@@ -301,10 +294,6 @@ func prepareQuery(data types.DataGet, indexRelationIds map[int]uuid.UUID,
return "", "", handler.ErrSchemaUnknownModule(rel.ModuleId)
}
- // define relation code for source relation
- // source relation might have index != 0 (for GET from joined relation)
- relCode := getRelationCode(data.IndexSource, nestingLevel)
-
// add relations as joins via relationship attributes
indexRelationIds[data.IndexSource] = data.RelationId
for _, join := range data.Joins {
@@ -312,32 +301,23 @@ func prepareQuery(data types.DataGet, indexRelationIds map[int]uuid.UUID,
continue
}
- if err := addJoin(indexRelationIds, join, &inJoin, loginId, nestingLevel); err != nil {
+ line, err := getQueryJoin(indexRelationIds, join, getFiltersByIndex(data.Filters, join.Index), queryArgs, loginId, nestingLevel)
+ if err != nil {
return "", "", err
}
+ inJoin = append(inJoin, line)
}
// add filters from data GET query
// before expressions because these are excluded from 'total count' query and can contain sub query filters
// SQL arguments are numbered ($1, $2, ...) with no way to skip any (? placeholder is not allowed);
	// excluding sub query arguments from expressions would cause missing argument numbers
- for i, filter := range data.Filters {
-
- // overwrite first filter connector and add brackets in first and last filter line
- // done so that query filters do not interfere with other filters
- if i == 0 {
- filter.Connector = "AND"
- filter.Side0.Brackets++
- }
- if i == len(data.Filters)-1 {
- filter.Side1.Brackets++
- }
-
- if err := addWhere(filter, queryArgs, queryCountArgs,
- loginId, &inWhere, nestingLevel); err != nil {
-
+ for _, filter := range getFiltersByIndex(data.Filters, 0) {
+ line, err := getQueryWhere(filter, queryArgs, loginId, nestingLevel)
+ if err != nil {
return "", "", err
}
+ inWhere = append(inWhere, line)
}
// add filter for base relation policy if applicable
@@ -369,18 +349,9 @@ func prepareQuery(data types.DataGet, indexRelationIds map[int]uuid.UUID,
// non-attribute expression
if !expr.AttributeId.Valid {
-
- // in expressions of main query, disable SQL arguments for count query
- // count query has no sub queries with arguments and only 1 expression: COUNT(*)
- queryCountArgsOptional := queryCountArgs
- if nestingLevel == 0 {
- queryCountArgsOptional = nil
- }
indexRelationIdsSub := make(map[int]uuid.UUID)
- subQuery, _, err := prepareQuery(expr.Query, indexRelationIdsSub,
- queryArgs, queryCountArgsOptional, loginId, nestingLevel+1)
-
+ subQuery, _, err := prepareQuery(expr.Query, indexRelationIdsSub, queryArgs, loginId, nestingLevel+1)
if err != nil {
return "", "", err
}
@@ -392,9 +363,11 @@ func prepareQuery(data types.DataGet, indexRelationIds map[int]uuid.UUID,
}
// attribute expression
- if err := addSelect(pos, expr, indexRelationIds, &inSelect, nestingLevel); err != nil {
+ line, err := getQuerySelect(pos, expr, nestingLevel)
+ if err != nil {
return "", "", err
}
+ inSelect = append(inSelect, line)
if expr.Aggregator.Valid {
mapIndex_agg[expr.Index] = true
@@ -404,7 +377,7 @@ func prepareQuery(data types.DataGet, indexRelationIds map[int]uuid.UUID,
}
}
- // add expressions for relation tupel IDs after attributes (on main query)
+ // add expressions for relation tuple IDs after attributes (on main query)
if nestingLevel == 0 {
for index, _ := range indexRelationIds {
@@ -418,7 +391,7 @@ func prepareQuery(data types.DataGet, indexRelationIds map[int]uuid.UUID,
inSelect = append(inSelect, fmt.Sprintf(`"%s"."%s" AS %s`,
getRelationCode(index, nestingLevel),
schema.PkName,
- getTupelIdCode(index, nestingLevel)))
+ getTupleIdCode(index, nestingLevel)))
}
}
@@ -433,9 +406,9 @@ func prepareQuery(data types.DataGet, indexRelationIds map[int]uuid.UUID,
// group by record ID if record must be kept during aggregation
if expr.Aggregator.String == "record" {
- relId := getTupelIdCode(expr.Index, nestingLevel)
+ relId := getTupleIdCode(expr.Index, nestingLevel)
- if !tools.StringInSlice(relId, groupByItems) {
+ if !slices.Contains(groupByItems, relId) {
groupByItems = append(groupByItems, relId)
}
}
@@ -450,7 +423,7 @@ func prepareQuery(data types.DataGet, indexRelationIds map[int]uuid.UUID,
}
// build ORDER BY
- queryOrder, err := addOrderBy(data, nestingLevel)
+ queryOrder, err := getQueryLineOrderBy(data, nestingLevel)
if err != nil {
return "", "", err
}
@@ -464,6 +437,10 @@ func prepareQuery(data types.DataGet, indexRelationIds map[int]uuid.UUID,
queryOffset = fmt.Sprintf("\nOFFSET %d", data.Offset)
}
+ // define relation code for source relation
+ // source relation might have index != 0 (for GET from joined relation)
+ relCode := getRelationCode(data.IndexSource, nestingLevel)
+
// build final data retrieval SQL query
query := fmt.Sprintf(
`SELECT %s`+"\n"+
@@ -480,16 +457,14 @@ func prepareQuery(data types.DataGet, indexRelationIds map[int]uuid.UUID,
// build final total count SQL query (not relevant for sub queries)
queryCount := ""
if nestingLevel == 0 {
-
- // distinct to keep count for source relation records correct independent of joins
queryCount = fmt.Sprintf(
- `SELECT COUNT(DISTINCT "%s"."%s")`+"\n"+
- `FROM "%s"."%s" AS "%s" %s%s`,
- getRelationCode(data.IndexSource, nestingLevel), schema.PkName, // SELECT
- mod.Name, rel.Name, relCode, // FROM
+ `SELECT COUNT(*) FROM (SELECT %s`+"\n"+
+ `FROM "%s"."%s" AS "%s" %s%s%s) AS q`,
+ strings.Join(inSelect, `, `), // SELECT
+ mod.Name, rel.Name, relCode, // FROM
strings.Join(inJoin, ""), // JOINS
- queryWhere) // WHERE
-
+ queryWhere, // WHERE
+ queryGroup) // GROUP BY
}
	// add indentation for nested sub queries
@@ -501,23 +476,20 @@ func prepareQuery(data types.DataGet, indexRelationIds map[int]uuid.UUID,
}
// add SELECT for attribute in given relation index
-// if attribute is from another relation than given index (relationship), attribute value = tupel IDs in relation with given index via given attribute
+// if attribute is from another relation than given index (relationship), attribute value = tuple IDs in relation with given index via given attribute
// 'outside in' is important in cases of self reference, where direction cannot be ascertained by attribute
-func addSelect(exprPos int, expr types.DataGetExpression,
- indexRelationIds map[int]uuid.UUID, inSelect *[]string, nestingLevel int) error {
+func getQuerySelect(exprPos int, expr types.DataGetExpression, nestingLevel int) (string, error) {
relCode := getRelationCode(expr.Index, nestingLevel)
atr, exists := cache.AttributeIdMap[expr.AttributeId.Bytes]
if !exists {
- return handler.ErrSchemaUnknownAttribute(expr.AttributeId.Bytes)
+ return "", handler.ErrSchemaUnknownAttribute(expr.AttributeId.Bytes)
}
- alias := data_sql.GetExpressionAlias(exprPos)
-
if schema.IsContentFiles(atr.Content) {
// attribute is files attribute
- *inSelect = append(*inSelect, fmt.Sprintf(`(
+ return fmt.Sprintf(`(
SELECT ARRAY_TO_JSON(ARRAY_AGG(ROW_TO_JSON(t)))
FROM (
SELECT r.file_id AS id, r.name, COALESCE(v.hash,'') AS hash,
@@ -532,32 +504,24 @@ func addSelect(exprPos int, expr types.DataGetExpression,
)
WHERE r.record_id = "%s"."%s"
AND r.date_delete IS NULL
- ) AS t)`, schema.GetFilesTableName(atr.Id), relCode, schema.PkName))
- return nil
+ ) AS t)`, schema.GetFilesTableName(atr.Id), relCode, schema.PkName), nil
}
+ alias := data_sql.GetExpressionAlias(exprPos)
+
if !expr.OutsideIn {
// attribute is from index relation
- code, err := getAttributeCode(expr.AttributeId.Bytes, relCode)
- if err != nil {
- return err
- }
- *inSelect = append(*inSelect, data_sql.GetExpression(expr, code, alias))
- return nil
+ return data_sql.GetExpression(expr, getAttributeCode(relCode, atr.Name), alias), nil
}
// attribute comes via relationship from other relation (or self reference from same relation)
shipRel, exists := cache.RelationIdMap[atr.RelationId]
if !exists {
- return handler.ErrSchemaUnknownRelation(atr.RelationId)
- }
-
- shipMod, exists := cache.ModuleIdMap[shipRel.ModuleId]
- if !exists {
- return handler.ErrSchemaUnknownModule(shipRel.ModuleId)
+ return "", handler.ErrSchemaUnknownRelation(atr.RelationId)
}
+ shipMod := cache.ModuleIdMap[shipRel.ModuleId]
- // get tupel IDs from other relation
+ // get tuple IDs from other relation
if !expr.AttributeIdNm.Valid {
var selectExpr string
@@ -568,8 +532,8 @@ func addSelect(exprPos int, expr types.DataGetExpression,
selectExpr = fmt.Sprintf(`JSON_AGG("%s")`, schema.PkName)
}
- // from other relation, collect tupel IDs in relationship with given index tupel
- *inSelect = append(*inSelect, fmt.Sprintf(`(
+ // from other relation, collect tuple IDs in relationship with given index tuple
+ return fmt.Sprintf(`(
SELECT %s
FROM "%s"."%s"
WHERE "%s"."%s" = "%s"."%s"
@@ -577,39 +541,37 @@ func addSelect(exprPos int, expr types.DataGetExpression,
selectExpr,
shipMod.Name, shipRel.Name,
shipRel.Name, atr.Name, relCode, schema.PkName,
- alias))
+ alias), nil
- } else {
- shipAtrNm, exists := cache.AttributeIdMap[expr.AttributeIdNm.Bytes]
- if !exists {
- return errors.New("attribute does not exist")
- }
-
- // from other relation, collect tupel IDs from n:m relationship attribute
- *inSelect = append(*inSelect, fmt.Sprintf(`(
- SELECT JSON_AGG("%s")
- FROM "%s"."%s"
- WHERE "%s"."%s" = "%s"."%s"
- ) AS %s`,
- shipAtrNm.Name,
- shipMod.Name, shipRel.Name,
- shipRel.Name, atr.Name, relCode, schema.PkName,
- alias))
}
- return nil
+
+ shipAtrNm, exists := cache.AttributeIdMap[expr.AttributeIdNm.Bytes]
+ if !exists {
+ return "", errors.New("attribute does not exist")
+ }
+
+ // from other relation, collect tuple IDs from n:m relationship attribute
+ return fmt.Sprintf(`(
+ SELECT JSON_AGG("%s")
+ FROM "%s"."%s"
+ WHERE "%s"."%s" = "%s"."%s"
+ ) AS %s`,
+ shipAtrNm.Name,
+ shipMod.Name, shipRel.Name,
+ shipRel.Name, atr.Name, relCode, schema.PkName,
+ alias), nil
}
-func addJoin(indexRelationIds map[int]uuid.UUID, join types.DataGetJoin,
- inJoin *[]string, loginId int64, nestingLevel int) error {
+func getQueryJoin(indexRelationIds map[int]uuid.UUID, join types.DataGetJoin, filters []types.DataGetFilter,
+ queryArgs *[]interface{}, loginId int64, nestingLevel int) (string, error) {
// check join attribute
atr, exists := cache.AttributeIdMap[join.AttributeId]
if !exists {
- return errors.New("join attribute does not exist")
+ return "", errors.New("join attribute does not exist")
}
-
if !atr.RelationshipId.Valid {
- return errors.New("relationship of attribute is invalid")
+ return "", errors.New("relationship of attribute is invalid")
}
// is join attribute on source relation? (direction of relationship)
@@ -637,17 +599,13 @@ func addJoin(indexRelationIds map[int]uuid.UUID, join types.DataGetJoin,
// check other relation and corresponding module
relTarget, exists := cache.RelationIdMap[relIdTarget]
if !exists {
- return handler.ErrSchemaUnknownRelation(relIdTarget)
- }
-
- modTarget, exists := cache.ModuleIdMap[relTarget.ModuleId]
- if !exists {
- return handler.ErrSchemaUnknownModule(relTarget.ModuleId)
+ return "", handler.ErrSchemaUnknownRelation(relIdTarget)
}
+ modTarget := cache.ModuleIdMap[relTarget.ModuleId]
// define JOIN type
- if !tools.StringInSlice(join.Connector, types.QueryJoinConnectors) {
- return errors.New("invalid join type")
+ if !slices.Contains(types.QueryJoinConnectors, join.Connector) {
+ return "", errors.New("invalid join type")
}
// apply filter policy to JOIN if applicable
@@ -655,44 +613,49 @@ func addJoin(indexRelationIds map[int]uuid.UUID, join types.DataGetJoin,
getRelationCode(join.Index, nestingLevel), relTarget.Policies)
if err != nil {
- return err
+ return "", err
}
- *inJoin = append(*inJoin, fmt.Sprintf("\n"+`%s JOIN "%s"."%s" AS "%s" ON "%s"."%s" = "%s"."%s" %s`,
+ // parse join filters
+ inWhere := make([]string, 0)
+ for _, filter := range filters {
+ line, err := getQueryWhere(filter, queryArgs, loginId, nestingLevel)
+ if err != nil {
+ return "", err
+ }
+ inWhere = append(inWhere, line)
+ }
+
+ return fmt.Sprintf("\n"+`%s JOIN "%s"."%s" AS "%s" ON "%s"."%s" = "%s"."%s" %s%s`,
join.Connector, modTarget.Name, relTarget.Name, relCodeTarget,
relCodeFrom, atr.Name,
relCodeTo, schema.PkName,
- policyFilter))
-
- return nil
+ policyFilter, strings.Join(inWhere, "")), nil
}
// parses filters to generate query lines and arguments
-func addWhere(filter types.DataGetFilter, queryArgs *[]interface{},
- queryCountArgs *[]interface{}, loginId int64, inWhere *[]string,
- nestingLevel int) error {
+func getQueryWhere(filter types.DataGetFilter, queryArgs *[]interface{}, loginId int64, nestingLevel int) (string, error) {
- if !tools.StringInSlice(filter.Connector, types.QueryFilterConnectors) {
- return errors.New("bad filter connector")
+ if !slices.Contains(types.QueryFilterConnectors, filter.Connector) {
+ return "", errors.New("bad filter connector")
}
- if !tools.StringInSlice(filter.Operator, types.QueryFilterOperators) {
- return errors.New("bad filter operator")
+ if !slices.Contains(types.QueryFilterOperators, filter.Operator) {
+ return "", errors.New("bad filter operator")
}
+ // check for full text search
+ ftsActive := filter.Side0.FtsDict.Valid || filter.Side1.FtsDict.Valid
isNullOp := isNullOperator(filter.Operator)
// define comparisons
var getComp = func(s types.DataGetFilterSide, comp *string) error {
- var err error
var isQuery = s.Query.RelationId != uuid.Nil
// sub query filter
if isQuery {
indexRelationIdsSub := make(map[int]uuid.UUID)
- subQuery, _, err := prepareQuery(s.Query, indexRelationIdsSub,
- queryArgs, queryCountArgs, loginId, nestingLevel+1)
-
+ subQuery, _, err := prepareQuery(s.Query, indexRelationIdsSub, queryArgs, loginId, nestingLevel+1)
if err != nil {
return err
}
@@ -702,23 +665,47 @@ func addWhere(filter types.DataGetFilter, queryArgs *[]interface{},
// attribute filter
if s.AttributeId.Valid {
- *comp, err = getAttributeCode(s.AttributeId.Bytes,
- getRelationCode(s.AttributeIndex, s.AttributeNested))
- if err != nil {
- return err
+ atr, exists := cache.AttributeIdMap[s.AttributeId.Bytes]
+ if !exists {
+ return handler.ErrSchemaUnknownAttribute(s.AttributeId.Bytes)
}
+ *comp = getAttributeCode(getRelationCode(s.AttributeIndex, s.AttributeNested), atr.Name)
+
+ if ftsActive {
+ ftsDict := "'simple'"
+
+ // use dictionary attribute on corresponding text index, if there is one
+ rel := cache.RelationIdMap[atr.RelationId]
+ for _, ind := range rel.Indexes {
+ if ind.Method == "GIN" &&
+ len(ind.Attributes) == 1 &&
+ ind.Attributes[0].AttributeId == s.AttributeId.Bytes &&
+ ind.AttributeIdDict.Valid {
+
+ // use the dictionary attribute name without quotes as it's a column
+ atrDict := cache.AttributeIdMap[ind.AttributeIdDict.Bytes]
+ ftsDict = atrDict.Name
+ break
+ }
+ }
- // special case: (I)LIKE comparison needs attribute cast as TEXT
- // this is relevant for integers/floats/etc.
- if isLikeOperator(filter.Operator) {
- *comp = fmt.Sprintf("%s::TEXT", *comp)
+ // apply ts_vector operation with or without dictionary definition
+ *comp = fmt.Sprintf("to_tsvector(CASE WHEN %s IS NULL THEN 'simple'::REGCONFIG ELSE %s END,%s)",
+ ftsDict, ftsDict, *comp)
+ } else {
+ // special cases
+ // (I)LIKE comparison needs attribute cast as TEXT (relevant for integers/floats/etc.)
+ // REGCONFIG attributes must be cast as TEXT
+ if isLikeOperator(filter.Operator) || atr.Content == "regconfig" {
+ *comp = fmt.Sprintf("%s::TEXT", *comp)
+ }
}
return nil
}
- // user value filter
- // can be anything, text, numbers, floats, boolean, NULL values
+ // fixed value filter
+ // can be anything, text, floats, boolean, NULL values
// create placeholders and add to query arguments
if isNullOp {
@@ -737,59 +724,72 @@ func addWhere(filter types.DataGetFilter, queryArgs *[]interface{},
}
if isLikeOperator(filter.Operator) {
- if v, ok := s.Value.(pgtype.Text); ok {
- s.Value = v.String
- }
- // special syntax for (I)LIKE comparison (add wildcard characters)
- s.Value = fmt.Sprintf("%%%s%%", s.Value)
- }
-
- // PGX fix: cannot use proper true/false values in SQL parameters
- // no good solution found so far, error: 'cannot convert (true|false) to Text'
- if fmt.Sprintf("%T", s.Value) == "bool" {
- if s.Value.(bool) == true {
- s.Value = "true"
+ // add wildcard characters before/after for (I)LIKE comparison unless input includes them
+ v := fmt.Sprintf("%s", s.Value)
+ if strings.Contains(v, "%") {
+ s.Value = v
} else {
- s.Value = "false"
+ s.Value = fmt.Sprintf("%%%s%%", v)
}
}
*queryArgs = append(*queryArgs, s.Value)
- if queryCountArgs != nil {
- *queryCountArgs = append(*queryCountArgs, s.Value)
- }
- *comp = fmt.Sprintf("$%d", len(*queryArgs))
+ if s.FtsDict.Valid {
+ if !cache.GetSearchDictionaryIsValid(s.FtsDict.String) {
+ s.FtsDict.String = "simple"
+ }
+
+ // websearch_to_tsquery supports
+ // AND logic: 'coffee tea' results in: 'coffe' & 'tea'
+ // OR logic: 'coffee or tea' results in: 'coffe' | 'tea'
+ // negation: 'coffee -tea' results in: 'coffe' & !'tea'
+ // followed: '"coffee tea"' results in: 'coffe' <-> 'tea'
+ // https://www.postgresql.org/docs/current/textsearch-controls.html
+ *comp = fmt.Sprintf("websearch_to_tsquery('%s',$%d)", s.FtsDict.String, len(*queryArgs))
+ } else {
+ // cast args for certain data types, known issues:
+ // * uncast bool args cannot be compared to another uncast bool arg via equal operator (=)
+ // * uncast real/double args cannot be compared to another uncast real/double arg via equal operator (=)
+ argCast := ""
+ if s.Value != nil {
+ switch fmt.Sprintf("%T", s.Value) {
+ case "bool":
+ argCast = "::BOOL"
+ case "float64":
+ argCast = "::FLOAT8" // short alias for DOUBLE PRECISION; float64 is the default type from JSON decoding of JS number values
+ }
+ }
+ *comp = fmt.Sprintf("$%d%s", len(*queryArgs), argCast)
+ }
return nil
}
// build left/right comparison sides (ignore right side, if NULL operator)
comp0, comp1 := "", ""
if err := getComp(filter.Side0, &comp0); err != nil {
- return err
+ return "", err
}
if !isNullOp {
if err := getComp(filter.Side1, &comp1); err != nil {
- return err
+ return "", err
}
- // array operator, add round brackets to right side comparison
+ // array operator, add round brackets around right side comparison
if isArrayOperator(filter.Operator) {
comp1 = fmt.Sprintf("(%s)", comp1)
}
}
// generate WHERE line from parsed filter definition
- *inWhere = append(*inWhere, fmt.Sprintf("\n%s %s%s %s %s%s",
+ return fmt.Sprintf("\n%s %s%s %s %s%s",
filter.Connector,
getBrackets(filter.Side0.Brackets, false),
comp0, filter.Operator, comp1,
- getBrackets(filter.Side1.Brackets, true)))
-
- return nil
+ getBrackets(filter.Side1.Brackets, true)), nil
}
-func addOrderBy(data types.DataGet, nestingLevel int) (string, error) {
+func getQueryLineOrderBy(data types.DataGet, nestingLevel int) (string, error) {
if len(data.Orders) == 0 {
return "", nil
@@ -797,7 +797,6 @@ func addOrderBy(data types.DataGet, nestingLevel int) (string, error) {
orderItems := make([]string, len(data.Orders))
var alias string
- var err error
for i, ord := range data.Orders {
@@ -819,12 +818,12 @@ func addOrderBy(data types.DataGet, nestingLevel int) (string, error) {
if expressionPosAlias != -1 {
alias = data_sql.GetExpressionAlias(expressionPosAlias)
} else {
- alias, err = getAttributeCode(ord.AttributeId.Bytes,
- getRelationCode(int(ord.Index.Int32), nestingLevel))
-
- if err != nil {
- return "", err
+ atr, exists := cache.AttributeIdMap[ord.AttributeId.Bytes]
+ if !exists {
+ return "", handler.ErrSchemaUnknownAttribute(ord.AttributeId.Bytes)
}
+
+ alias = getAttributeCode(getRelationCode(int(ord.Index.Int32), nestingLevel), atr.Name)
}
} else if ord.ExpressionPos.Valid {
@@ -856,20 +855,16 @@ func getRelationCode(relationIndex int, nestingLevel int) string {
return fmt.Sprintf("_r%d_l%d", relationIndex, nestingLevel)
}
-// tupel IDs are uniquely identified by the relation code + the fixed string 'id'
-func getTupelIdCode(relationIndex int, nestingLevel int) string {
+// tuple IDs are uniquely identified by the relation code + the fixed string 'id'
+func getTupleIdCode(relationIndex int, nestingLevel int) string {
return fmt.Sprintf("%sid", getRelationCode(relationIndex, nestingLevel))
}
// an attribute is referenced by the relation code + the attribute name
// due to the relation code, this will always uniquely identify an attribute from a specific index
// example: _r3.surname maps to person.surname from index 3
-func getAttributeCode(attributeId uuid.UUID, relCode string) (string, error) {
- atr, exists := cache.AttributeIdMap[attributeId]
- if !exists {
- return "", handler.ErrSchemaUnknownAttribute(attributeId)
- }
- return fmt.Sprintf(`"%s"."%s"`, relCode, atr.Name), nil
+func getAttributeCode(relationCode string, attributeName string) string {
+ return fmt.Sprintf(`"%s"."%s"`, relationCode, attributeName)
}
func getBrackets(count int, right bool) string {
@@ -883,20 +878,38 @@ func getBrackets(count int, right bool) string {
}
out := ""
- for count > 0 {
+ for ; count > 0; count-- {
out += bracketChar
- count--
}
return fmt.Sprintf("%s", out)
}
+func getFiltersByIndex(filters []types.DataGetFilter, index int) []types.DataGetFilter {
+ out := make([]types.DataGetFilter, 0)
+
+ for _, filter := range filters {
+ if filter.Index == index {
+ out = append(out, filter)
+ }
+ }
+
+ // overwrite first filter connector and add brackets in first and last filter line
+ // so that query filters do not interfere with other filters
+ if len(out) != 0 {
+ out[0].Connector = "AND"
+ out[0].Side0.Brackets++
+ out[len(out)-1].Side1.Brackets++
+ }
+ return out
+}
+
// operator types
func isArrayOperator(operator string) bool {
- return tools.StringInSlice(operator, []string{"= ANY", "<> ALL"})
+ return slices.Contains([]string{"= ANY", "<> ALL"}, operator)
}
func isLikeOperator(operator string) bool {
- return tools.StringInSlice(operator, []string{"LIKE", "ILIKE", "NOT LIKE", "NOT ILIKE"})
+ return slices.Contains([]string{"LIKE", "ILIKE", "NOT LIKE", "NOT ILIKE"}, operator)
}
func isNullOperator(operator string) bool {
- return tools.StringInSlice(operator, []string{"IS NULL", "IS NOT NULL"})
+ return slices.Contains([]string{"IS NULL", "IS NOT NULL"}, operator)
}
diff --git a/image/image.go b/data/data_image/data_image.go
similarity index 92%
rename from image/image.go
rename to data/data_image/data_image.go
index 58069b4c..4701491a 100644
--- a/image/image.go
+++ b/data/data_image/data_image.go
@@ -1,9 +1,10 @@
-package image
+package data_image
import (
"bufio"
"errors"
"fmt"
+ "net/http"
"os"
"os/exec"
"r3/config"
@@ -192,3 +193,19 @@ func processFile(fileId uuid.UUID, ext string, src string, dst string) {
return
}
}
+
+func detectType(filePath string) (string, error) {
+ file, err := os.Open(filePath)
+ if err != nil {
+ return "", err
+ }
+ defer file.Close()
+
+ // read first 512 bytes to detect content type
+ // http://golang.org/pkg/net/http/#DetectContentType
+ fileBytes := make([]byte, 512)
+ if _, err := file.Read(fileBytes); err != nil {
+ return "", err
+ }
+ return http.DetectContentType(fileBytes), nil
+}
diff --git a/image/image_linux.go b/data/data_image/data_image_linux.go
similarity index 92%
rename from image/image_linux.go
rename to data/data_image/data_image_linux.go
index 530ddd8f..daa3da36 100644
--- a/image/image_linux.go
+++ b/data/data_image/data_image_linux.go
@@ -1,6 +1,6 @@
//go:build !windows
-package image
+package data_image
import "os/exec"
diff --git a/image/image_windows.go b/data/data_image/data_image_windows.go
similarity index 94%
rename from image/image_windows.go
rename to data/data_image/data_image_windows.go
index e00163ec..7ea584d5 100644
--- a/image/image_windows.go
+++ b/data/data_image/data_image_windows.go
@@ -1,6 +1,6 @@
//go:build windows
-package image
+package data_image
import (
"r3/tools"
diff --git a/data/data_import/data_import.go b/data/data_import/data_import.go
index 4ccdb5e1..3f9fa018 100644
--- a/data/data_import/data_import.go
+++ b/data/data_import/data_import.go
@@ -7,9 +7,10 @@ import (
"r3/cache"
"r3/data"
"r3/handler"
+ "r3/log"
"r3/schema"
- "r3/tools"
"r3/types"
+ "slices"
"strings"
"github.com/gofrs/uuid"
@@ -115,7 +116,7 @@ func FromInterfaceValues_tx(ctx context.Context, tx pgx.Tx, loginId int64,
for i := 0; i < attempts; i++ {
for _, join := range joins {
- dataSet, _ := dataSetsByIndex[join.Index]
+ dataSet := dataSetsByIndex[join.Index]
if dataSet.RecordId != 0 {
continue // record already looked up
@@ -126,7 +127,7 @@ func FromInterfaceValues_tx(ctx context.Context, tx pgx.Tx, loginId int64,
continue // no unique PG index defined, nothing to do
}
- if tools.IntInSlice(join.Index, indexesResolved) {
+ if slices.Contains(indexesResolved, join.Index) {
continue // lookup already done
}
@@ -135,7 +136,7 @@ func FromInterfaceValues_tx(ctx context.Context, tx pgx.Tx, loginId int64,
for _, pgIndexAtrId := range pgIndexAtrIds {
- pgIndexAtr, _ := cache.AttributeIdMap[pgIndexAtrId]
+ pgIndexAtr := cache.AttributeIdMap[pgIndexAtrId]
if !schema.IsContentRelationship(pgIndexAtr.Content) {
// PG index attribute is non-relationship, can directly be used
@@ -214,25 +215,39 @@ func FromInterfaceValues_tx(ctx context.Context, tx pgx.Tx, loginId int64,
}
}
- // apply join create/update restrictions after resolving unique indexes
+ // go through to be created/updated records after resolving unique indexes
for _, join := range joins {
+ dataSet := dataSetsByIndex[join.Index]
+ newRecord := dataSet.RecordId == 0
+ badNulls := false
+
+ // check for non-nullable attributes whose values are set to NULL
+ // only for joined relations (indexFrom != -1), as the primary record should throw an error if it cannot be imported
+ if join.IndexFrom != -1 {
+ for _, setAtr := range dataSet.Attributes {
+ atr := cache.AttributeIdMap[setAtr.AttributeId]
+
+ if !atr.Nullable && setAtr.Value == nil {
+ rel := cache.RelationIdMap[atr.RelationId]
+ log.Info("csv", fmt.Sprintf("skipping record on relation '%s', no value set for required attribute '%s'",
+ rel.Name, atr.Name))
+
+ badNulls = true
+ break
+ }
+ }
+ }
- if !join.ApplyUpdate && dataSetsByIndex[join.Index].RecordId != 0 {
-
- // existing record but must not update
+ if newRecord && (badNulls || !join.ApplyCreate) {
+ // new record cannot or must not be created (required attribute values are NULL or creation is disabled for this join)
+ // remove entire data SET - if it does not exist and won't be created, it cannot be used as a relationship either
+ delete(dataSetsByIndex, join.Index)
+ }
+ if !newRecord && (badNulls || !join.ApplyUpdate) {
+ // existing record, but cannot or must not be updated (required attribute values are NULL or updates are disabled for this join)
// remove attribute values - still keep record itself for updating relationship attributes where allowed
- dataSet := dataSetsByIndex[join.Index]
dataSet.Attributes = make([]types.DataSetAttribute, 0)
dataSetsByIndex[join.Index] = dataSet
- continue
- }
-
- if !join.ApplyCreate && dataSetsByIndex[join.Index].RecordId == 0 {
-
- // new record but must not create
- // remove entire data SET - if it does not exist and must not be created, it cannot be used as relationship either
- delete(dataSetsByIndex, join.Index)
- continue
}
}
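
The import change above adds the NULL check for required attributes and folds it into the existing create/update restrictions. The resulting branching can be read as one small decision function; this sketch only restates the logic from the change above, the names are illustrative:

package main

import "fmt"

// importAction mirrors the branching above:
// "drop"   - new record that cannot or must not be created: remove the whole data SET
// "values" - existing record that cannot or must not be updated: keep the record, clear its values
// "keep"   - import as-is
func importAction(newRecord, badNulls, applyCreate, applyUpdate bool) string {
	if newRecord && (badNulls || !applyCreate) {
		return "drop"
	}
	if !newRecord && (badNulls || !applyUpdate) {
		return "values"
	}
	return "keep"
}

func main() {
	fmt.Println(importAction(true, true, true, true))    // drop: required value missing on a new record
	fmt.Println(importAction(false, false, true, false)) // values: updates disabled for this join
	fmt.Println(importAction(false, false, true, true))  // keep
}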
diff --git a/data/data_log.go b/data/data_log.go
index 91da1172..0d435069 100644
--- a/data/data_log.go
+++ b/data/data_log.go
@@ -9,6 +9,7 @@ import (
"r3/handler"
"r3/tools"
"r3/types"
+ "slices"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
@@ -17,6 +18,14 @@ import (
// delete data change logs according to retention settings
func DelLogsBackground() error {
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutDbTask)
+ defer ctxCanc()
+
+ tx, err := db.Pool.Begin(ctx)
+ if err != nil {
+ return err
+ }
+ defer tx.Rollback(ctx)
cache.Schema_mx.RLock()
defer cache.Schema_mx.RUnlock()
@@ -26,7 +35,7 @@ func DelLogsBackground() error {
// delete logs for relations with no retention
if !r.RetentionCount.Valid && !r.RetentionDays.Valid {
- if _, err := db.Pool.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM instance.data_log
WHERE id IN (
SELECT data_log_id
@@ -46,7 +55,7 @@ func DelLogsBackground() error {
// delete logs according to retention settings
now := tools.GetTimeUnix()
- if _, err := db.Pool.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM instance.data_log AS p
WHERE p.relation_id = $1
@@ -66,7 +75,7 @@ func DelLogsBackground() error {
return err
}
}
- return nil
+ return tx.Commit(ctx)
}
// get data change logs for specified record and attributes
@@ -83,9 +92,10 @@ func GetLogs_tx(ctx context.Context, tx pgx.Tx, recordId int64,
}
rows, err := tx.Query(ctx, `
- SELECT d.id, d.relation_id, l.name, d.date_change
+ SELECT d.id, d.relation_id, d.date_change, l.name, lm.name_display
FROM instance.data_log as d
- LEFT JOIN instance.login AS l ON l.id = d.login_id_wofk
+ LEFT JOIN instance.login AS l ON l.id = d.login_id_wofk
+ LEFT JOIN instance.login_meta AS lm ON lm.login_id = l.id
WHERE d.record_id_wofk = $1
AND d.id IN (
SELECT data_log_id
@@ -101,12 +111,16 @@ func GetLogs_tx(ctx context.Context, tx pgx.Tx, recordId int64,
for rows.Next() {
var l types.DataLog
var name pgtype.Text
+ var nameDisplay pgtype.Text
- if err := rows.Scan(&l.Id, &l.RelationId, &name, &l.DateChange); err != nil {
+ if err := rows.Scan(&l.Id, &l.RelationId, &l.DateChange, &name, &nameDisplay); err != nil {
return logs, err
}
l.RecordId = recordId
l.LoginName = name.String
+ if nameDisplay.Valid && nameDisplay.String != "" {
+ l.LoginName = nameDisplay.String
+ }
logs = append(logs, l)
}
rows.Close()
@@ -178,7 +192,7 @@ func setLog_tx(ctx context.Context, tx pgx.Tx, relationId uuid.UUID,
// special case: file attributes
// new value is always delta (file uploaded/removed/etc.), old value is not needed
- if tools.IntInSlice(i, fileAttributeIndexes) {
+ if slices.Contains(fileAttributeIndexes, i) {
if atr.Value == nil {
continue
}
diff --git a/data/data_query/data_query.go b/data/data_query/data_query.go
index d6281f03..91267d07 100644
--- a/data/data_query/data_query.go
+++ b/data/data_query/data_query.go
@@ -2,14 +2,15 @@ package data_query
import (
"r3/cache"
- "r3/tools"
"r3/types"
+ "slices"
"time"
"github.com/jackc/pgx/v5/pgtype"
)
-func ConvertColumnToExpression(column types.Column, loginId int64, languageCode string) types.DataGetExpression {
+func ConvertColumnToExpression(column types.Column, loginId int64, languageCode string,
+ getterKeyMapValue map[string]string) types.DataGetExpression {
expr := types.DataGetExpression{
AttributeId: pgtype.UUID{Bytes: column.AttributeId, Valid: true},
@@ -28,38 +29,33 @@ func ConvertColumnToExpression(column types.Column, loginId int64, languageCode
RelationId: column.Query.RelationId.Bytes,
Joins: ConvertQueryToDataJoins(column.Query.Joins),
Expressions: []types.DataGetExpression{expr},
- Filters: ConvertQueryToDataFilter(column.Query.Filters, loginId, languageCode),
+ Filters: ConvertQueryToDataFilter(column.Query.Filters, loginId, languageCode, getterKeyMapValue),
Orders: ConvertQueryToDataOrders(column.Query.Orders),
Limit: column.Query.FixedLimit,
},
}
}
-func ConvertSubQueryToDataGet(query types.Query, queryAggregator pgtype.Text,
- attributeId pgtype.UUID, attributeIndex int, loginId int64,
- languageCode string) types.DataGet {
+func ConvertSubQueryToDataGet(query types.Query, queryAggregator pgtype.Text, attributeId pgtype.UUID,
+ attributeIndex int, loginId int64, languageCode string, getterKeyMapValue map[string]string) types.DataGet {
return types.DataGet{
RelationId: query.RelationId.Bytes,
Joins: ConvertQueryToDataJoins(query.Joins),
- Expressions: []types.DataGetExpression{
- types.DataGetExpression{
- Aggregator: queryAggregator,
- AttributeId: attributeId,
- AttributeIdNm: pgtype.UUID{},
- Index: attributeIndex,
- },
- },
- Filters: ConvertQueryToDataFilter(query.Filters, loginId, languageCode),
+ Expressions: []types.DataGetExpression{{
+ Aggregator: queryAggregator,
+ AttributeId: attributeId,
+ AttributeIdNm: pgtype.UUID{},
+ Index: attributeIndex,
+ }},
+ Filters: ConvertQueryToDataFilter(query.Filters, loginId, languageCode, getterKeyMapValue),
Orders: ConvertQueryToDataOrders(query.Orders),
Limit: query.FixedLimit,
}
}
-func ConvertQueryToDataFilter(filters []types.QueryFilter,
- loginId int64, languageCode string) []types.DataGetFilter {
-
- filtersOut := make([]types.DataGetFilter, len(filters))
+func ConvertQueryToDataFilter(filters []types.QueryFilter, loginId int64,
+ languageCode string, getterKeyMapValue map[string]string) []types.DataGetFilter {
var processSide = func(side types.QueryFilterSide) types.DataGetFilterSide {
sideOut := types.DataGetFilterSide{
@@ -72,12 +68,20 @@ func ConvertQueryToDataFilter(filters []types.QueryFilter,
Value: side.Value,
}
switch side.Content {
+ // API
+ case "getter":
+ if value, ok := getterKeyMapValue[side.Value.String]; ok {
+ sideOut.Value = value
+ } else {
+ sideOut.Value = nil
+ }
+
// data
case "preset":
sideOut.Value = cache.GetPresetRecordId(side.PresetId.Bytes)
case "subQuery":
sideOut.Query = ConvertSubQueryToDataGet(side.Query, side.QueryAggregator,
- side.AttributeId, side.AttributeIndex, loginId, languageCode)
+ side.AttributeId, side.AttributeIndex, loginId, languageCode, getterKeyMapValue)
case "true":
sideOut.Value = true
@@ -101,31 +105,43 @@ func ConvertQueryToDataFilter(filters []types.QueryFilter,
case "role":
access, err := cache.GetAccessById(loginId)
if err == nil {
- sideOut.Value = tools.UuidInSlice(side.RoleId.Bytes, access.RoleIds)
+ sideOut.Value = slices.Contains(access.RoleIds, side.RoleId.Bytes)
} else {
sideOut.Value = false
}
+
+ // value
+ case "value":
+ sideOut.Value = side.Value.String
}
return sideOut
}
- for i, filter := range filters {
-
- filterOut := types.DataGetFilter{
- Connector: filter.Connector,
- Operator: filter.Operator,
- Side0: processSide(filter.Side0),
- Side1: processSide(filter.Side1),
- }
- if i == 0 {
- filterOut.Side0.Brackets++
+ // process both base & join filters
+ filtersBase := make([]types.DataGetFilter, 0)
+ filtersJoin := make([]types.DataGetFilter, 0)
+
+ for _, f := range filters {
+ filter := types.DataGetFilter{
+ Connector: f.Connector,
+ Index: f.Index,
+ Operator: f.Operator,
+ Side0: processSide(f.Side0),
+ Side1: processSide(f.Side1),
}
- if i == len(filters)-1 {
- filterOut.Side1.Brackets++
+ if f.Index == 0 {
+ filtersBase = append(filtersBase, filter)
+ } else {
+ filtersJoin = append(filtersJoin, filter)
}
- filtersOut[i] = filterOut
}
- return filtersOut
+
+ // encapsulate base filters
+ if len(filtersBase) > 0 {
+ filtersBase[0].Side0.Brackets++
+ filtersBase[len(filtersBase)-1].Side1.Brackets++
+ }
+ return slices.Concat(filtersBase, filtersJoin)
}
func ConvertQueryToDataJoins(joins []types.QueryJoin) []types.DataGetJoin {
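
The new 'getter' filter side resolves its value from a key/value map handed in by the caller (for example an API request); unknown keys resolve to NULL so the filter still evaluates. A minimal sketch of that lookup, mirroring the switch case above; the map contents are illustrative:

package main

import "fmt"

// resolveGetter mirrors the "getter" case above: known keys return their value,
// missing keys resolve to nil so the comparison runs against NULL
func resolveGetter(getterKeyMapValue map[string]string, key string) interface{} {
	if value, ok := getterKeyMapValue[key]; ok {
		return value
	}
	return nil
}

func main() {
	getters := map[string]string{"city": "Berlin"} // e.g. parsed from an API call
	fmt.Println(resolveGetter(getters, "city"))    // Berlin
	fmt.Println(resolveGetter(getters, "country")) // <nil>
}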
diff --git a/data/data_set.go b/data/data_set.go
index 121c72cc..179503e6 100644
--- a/data/data_set.go
+++ b/data/data_set.go
@@ -9,8 +9,9 @@ import (
"r3/data/data_enc"
"r3/handler"
"r3/schema"
- "r3/tools"
"r3/types"
+ "reflect"
+ "slices"
"sort"
"strings"
@@ -22,8 +23,8 @@ import (
// sets data
// uses indexes (unique integers) to identify specific relations, which can be joined by relationships
// starting with source relation (index:0), joined relations refer to their partner (indexFrom:0, indexFrom:1, ...)
-// if tupel needs to exist for joined relation to refer to, it will be created
-// each index provides tupel ID (0 if new)
+// if tuple needs to exist for joined relation to refer to, it will be created
+// each index provides tuple ID (0 if new)
// each index provides values for its relation attributes or partner relation attributes (relationship attributes from other relation)
func Set_tx(ctx context.Context, tx pgx.Tx, dataSetsByIndex map[int]types.DataSet,
loginId int64) (map[int]int64, error) {
@@ -54,7 +55,7 @@ func Set_tx(ctx context.Context, tx pgx.Tx, dataSetsByIndex map[int]types.DataSe
return indexRecordIds, handler.ErrSchemaUnknownRelation(dataSet.RelationId)
}
- // check write access for tupel creation
+ // check write access for tuple creation
if isNewRecord && !authorizedRelation(loginId, dataSet.RelationId, 2) {
return indexRecordIds, errors.New(handler.ErrUnauthorized)
}
@@ -150,7 +151,7 @@ func Set_tx(ctx context.Context, tx pgx.Tx, dataSetsByIndex map[int]types.DataSe
}
// set data values for specific relation index
-// recursive call, if relationship tupel must be created first
+// recursive call, if relationship tuple must be created first
func setForIndex_tx(ctx context.Context, tx pgx.Tx, index int,
dataSetsByIndex map[int]types.DataSet, indexRecordIds map[int]int64,
indexRecordsCreated map[int]bool, loginId int64) error {
@@ -182,7 +183,7 @@ func setForIndex_tx(ctx context.Context, tx pgx.Tx, index int,
params := make([]string, 0) // value parameters for insert/update statement
values := make([]interface{}, 0) // values for insert/update statements
- // values for relationship tupel IDs are dealt with separately
+ // values for relationship tuple IDs are dealt with separately
type relationshipValue struct {
attributeId uuid.UUID
attributeIdNm pgtype.UUID
@@ -197,10 +198,10 @@ func setForIndex_tx(ctx context.Context, tx pgx.Tx, index int,
}
// process relationship values from other relation
- // (1:n, 1:1 relationships refering to this tupel)
+ // (1:n, 1:1 relationships referring to this tuple)
if attribute.OutsideIn && schema.IsContentRelationship(atr.Content) {
- // store relationship values to apply later (tupel might need to be created first)
+ // store relationship values to apply later (tuple might need to be created first)
shipValues := relationshipValue{
attributeId: attribute.AttributeId,
attributeIdNm: attribute.AttributeIdNm,
@@ -212,6 +213,9 @@ func setForIndex_tx(ctx context.Context, tx pgx.Tx, index int,
shipValues.values = append(shipValues.values, int64(v))
case []interface{}:
for _, v1 := range v {
+ if v1 == nil || reflect.TypeOf(v1).String() != "float64" {
+ return fmt.Errorf("invalid type for relationship value")
+ }
shipValues.values = append(shipValues.values, int64(v1.(float64)))
}
}
@@ -226,7 +230,7 @@ func setForIndex_tx(ctx context.Context, tx pgx.Tx, index int,
continue
}
- // process attribute values for this relation tupel
+ // process attribute values for this relation tuple
values = append(values, attribute.Value)
if isNewRecord {
@@ -278,11 +282,11 @@ func setForIndex_tx(ctx context.Context, tx pgx.Tx, index int,
return handler.ErrSchemaUnknownAttribute(dataSetOther.AttributeId)
}
- // if attribute is on our side, we need to add its value to this tupel
- // if its on the other side, its value will be added when the other tupel is being created
+ // if attribute is on our side, we need to add its value to this tuple
+ // if its on the other side, its value will be added when the other tuple is being created
if relAtrOther.RelationId == dataSet.RelationId {
- // the other relation has a higher index, so its tupel might not exist yet
+ // the other relation has a higher index, so its tuple might not exist yet
if err := setForIndex_tx(ctx, tx, indexOther, dataSetsByIndex,
indexRecordIds, indexRecordsCreated, loginId); err != nil {
@@ -290,7 +294,7 @@ func setForIndex_tx(ctx context.Context, tx pgx.Tx, index int,
}
indexRecordsCreated[indexOther] = true
- // if there is no relationship value available yet, we add it to the tupel
+ // if there is no relationship value available yet, we add it to the tuple
relValueNotSet := true
for _, atr := range dataSet.Attributes {
if atr.AttributeId == relAtrOther.Id {
@@ -302,7 +306,7 @@ func setForIndex_tx(ctx context.Context, tx pgx.Tx, index int,
}
if relValueNotSet {
- // add relationship attribute value for this tupel creation
+ // add relationship attribute value for this tuple creation
values = append(values, indexRecordIds[indexOther])
names = append(names, fmt.Sprintf(`"%s"`, relAtrOther.Name))
params = append(params, fmt.Sprintf(`$%d`, len(values)))
@@ -320,7 +324,7 @@ func setForIndex_tx(ctx context.Context, tx pgx.Tx, index int,
}
// if attribute is on this side, add to this record
- // other relation tupel exists already as its index is lower
+ // other relation tuple exists already as its index is lower
// exclude if both relations are the same, in this case the lower index always wins
if relAtr.RelationId == dataSet.RelationId && dataSet.RelationId != dataSetOther.RelationId {
@@ -373,7 +377,7 @@ func setForIndex_tx(ctx context.Context, tx pgx.Tx, index int,
}
}
- // assign relationship references to this tupel via attributes from partner relations
+ // assign relationship references to this tuple via attributes from partner relations
for _, shipValues := range relationshipValues {
shipAtr, exists := cache.AttributeIdMap[shipValues.attributeId]
@@ -417,7 +421,7 @@ func setForIndex_tx(ctx context.Context, tx pgx.Tx, index int,
if !shipValues.attributeIdNm.Valid {
- // remove old references to this tupel
+ // remove old references to this tuple
if _, err := tx.Exec(ctx, fmt.Sprintf(`
UPDATE "%s"."%s" SET "%s" = NULL
WHERE "%s" = $1
@@ -429,7 +433,7 @@ func setForIndex_tx(ctx context.Context, tx pgx.Tx, index int,
return err
}
- // add new references to this tupel
+ // add new references to this tuple
if _, err := tx.Exec(ctx, fmt.Sprintf(`
UPDATE "%s"."%s" SET "%s" = $1
WHERE "%s" = ANY($2)
@@ -444,7 +448,7 @@ func setForIndex_tx(ctx context.Context, tx pgx.Tx, index int,
return handler.ErrSchemaUnknownAttribute(shipValues.attributeIdNm.Bytes)
}
- // get current references to this tupel
+ // get current references to this tuple
valuesCurr := make([]int64, 0)
if err := tx.QueryRow(ctx, fmt.Sprintf(`
SELECT ARRAY(
@@ -457,9 +461,9 @@ func setForIndex_tx(ctx context.Context, tx pgx.Tx, index int,
return err
}
- // remove old references to this tupel
+ // remove old references to this tuple
for _, value := range valuesCurr {
- if tools.Int64InSlice(value, shipValues.values) {
+ if slices.Contains(shipValues.values, value) {
continue
}
@@ -474,9 +478,9 @@ func setForIndex_tx(ctx context.Context, tx pgx.Tx, index int,
}
}
- // add new references to this tupel
+ // add new references to this tuple
for _, value := range shipValues.values {
- if tools.Int64InSlice(value, valuesCurr) {
+ if slices.Contains(valuesCurr, value) {
continue
}
@@ -509,20 +513,20 @@ func collectCurrentValuesForLog_tx(ctx context.Context, tx pgx.Tx,
dataGet := types.DataGet{
RelationId: relationId,
IndexSource: 0,
- Filters: []types.DataGetFilter{
- types.DataGetFilter{
- Connector: "AND",
- Operator: "=",
- Side0: types.DataGetFilterSide{
- AttributeId: pgtype.UUID{
- Bytes: rel.AttributeIdPk,
- Valid: true,
- },
- },
- Side1: types.DataGetFilterSide{
- Value: recordId,
+ Filters: []types.DataGetFilter{{
+ Connector: "AND",
+ Index: 0,
+ Operator: "=",
+ Side0: types.DataGetFilterSide{
+ AttributeId: pgtype.UUID{
+ Bytes: rel.AttributeIdPk,
+ Valid: true,
},
},
+ Side1: types.DataGetFilterSide{
+ Value: recordId,
+ },
+ },
},
}
@@ -538,7 +542,7 @@ func collectCurrentValuesForLog_tx(ctx context.Context, tx pgx.Tx,
// special case: file attribute
// no need to lookup current values as file attribute values already only include changes
- ReturnNull: tools.IntInSlice(i, fileAttributeIndexes),
+ ReturnNull: slices.Contains(fileAttributeIndexes, i),
})
}
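
The new type guard in data_set.go exists because relationship values arrive via JSON: numbers inside a JSON array decode to float64 when unmarshalled into []interface{}, so anything else must be rejected before the int64 conversion. A small runnable illustration of that behaviour (standard library only; the input values are illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var v interface{}
	if err := json.Unmarshal([]byte(`[12, 13, "x"]`), &v); err != nil {
		panic(err)
	}
	for _, el := range v.([]interface{}) {
		f, ok := el.(float64) // JSON numbers decode to float64 by default
		if !ok {
			fmt.Println("invalid type for relationship value") // "x" is rejected
			continue
		}
		fmt.Println(int64(f)) // 12, 13
	}
}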
diff --git a/db/check/check.go b/db/check/check.go
index 84bbbe73..71f28f71 100644
--- a/db/check/check.go
+++ b/db/check/check.go
@@ -11,7 +11,7 @@ import (
func DbIdentifier(input string) error {
if input == "" {
- return handler.CreateErrCode("APP", handler.ErrCodeAppNameEmpty)
+ return handler.CreateErrCode(handler.ErrContextApp, handler.ErrCodeAppNameEmpty)
}
// must start with [a-z], followed by [a-z0-9\_], max. 60 chars (max. identifier size in PostgreSQL: 63)
@@ -21,7 +21,7 @@ func DbIdentifier(input string) error {
return err
}
if input != rex.FindString(input) {
- return handler.CreateErrCode("APP", handler.ErrCodeAppNameInvalid)
+ return handler.CreateErrCode(handler.ErrContextApp, handler.ErrCodeAppNameInvalid)
}
return nil
}
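
For reference, the identifier rule described in the comment above ('must start with [a-z], followed by [a-z0-9_], max. 60 chars') can be tried out in isolation. The expression below is reconstructed from that comment, not copied from check.go, so treat it as an approximation:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// reconstructed from the comment above: one leading letter plus up to 59 more characters
	rex := regexp.MustCompile(`^[a-z][a-z0-9_]{0,59}$`)

	fmt.Println(rex.MatchString("my_app2"))  // true
	fmt.Println(rex.MatchString("2fast"))    // false: must start with a-z
	fmt.Println(rex.MatchString("Bad-Name")) // false: uppercase and '-' not allowed
}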
diff --git a/db/db.go b/db/db.go
index f9386039..c4b0e338 100644
--- a/db/db.go
+++ b/db/db.go
@@ -7,6 +7,7 @@ import (
"net/url"
"r3/tools"
"r3/types"
+ "strconv"
"time"
pgxuuid "github.com/jackc/pgx-gofrs-uuid"
@@ -14,8 +15,18 @@ import (
"github.com/jackc/pgx/v5/pgxpool"
)
-var Ctx = context.TODO()
-var Pool *pgxpool.Pool
+var (
+ Pool *pgxpool.Pool
+
+ // default context timeouts
+ CtxDefTimeoutDbTask = 300 * time.Second // heavy DB operations (init/upgrade/relation retention cleanup)
+ CtxDefTimeoutLogWrite = 30 * time.Second // writing to database log
+ CtxDefTimeoutPgFunc = 240 * time.Second // executing plsql functions, to be replaced by config option
+ CtxDefTimeoutShutdown = 10 * time.Second // shutting down system
+ CtxDefTimeoutSysTask = 30 * time.Second // executing system tasks
+ CtxDefTimeoutSysStart = 300 * time.Second // executing system startup tasks
+ CtxDefTimeoutTransfer = 600 * time.Second // executing module transfers, to be replaced by config option
+)
// attempts to open a database connection
// repeat attempts until successful or predefined time limit is reached
@@ -31,6 +42,7 @@ func OpenWait(timeoutSeconds int64, config types.FileTypeDb) error {
}
time.Sleep(time.Millisecond * 500)
}
+ Pool = nil
return fmt.Errorf("timeout reached, last error: %s", err)
}
@@ -50,7 +62,12 @@ func Open(config types.FileTypeDb) error {
if err != nil {
return err
}
-
+ if config.ConnsMax != 0 {
+ poolConfig.MaxConns = config.ConnsMax
+ }
+ if config.ConnsMin != 0 {
+ poolConfig.MinConns = config.ConnsMin
+ }
if config.Ssl {
poolConfig.ConnConfig.TLSConfig = &tls.Config{
InsecureSkipVerify: config.SslSkipVerify,
@@ -70,7 +87,13 @@ func Open(config types.FileTypeDb) error {
return Pool.Ping(context.Background())
}
+// set transaction config parameters
+// these are used by system functions, such as instance.get_login_id()
+func SetSessionConfig_tx(ctx context.Context, tx pgx.Tx, loginId int64) error {
+ _, err := tx.Exec(ctx, `SELECT SET_CONFIG('r3.login_id',$1,TRUE)`, strconv.FormatInt(loginId, 10))
+ return err
+}
+
func Close() {
Pool.Close()
- Pool = nil
}
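
SetSessionConfig_tx above stores the login ID as a transaction-scoped setting (SET_CONFIG with is_local = TRUE), which PL/pgSQL helpers such as instance.get_login_id() can read back while the transaction is open. A sketch of that round trip; apart from SetSessionConfig_tx and the standard current_setting() function, the wiring is illustrative:

package example

import (
	"context"
	"fmt"

	"r3/db"
)

func sessionConfigExample(ctx context.Context, loginId int64) error {
	tx, err := db.Pool.Begin(ctx)
	if err != nil {
		return err
	}
	defer tx.Rollback(ctx)

	// store the login ID for this transaction only
	if err := db.SetSessionConfig_tx(ctx, tx, loginId); err != nil {
		return err
	}

	// any statement or trigger in the same transaction can now read it back
	var got string
	if err := tx.QueryRow(ctx, `SELECT CURRENT_SETTING('r3.login_id', TRUE)`).Scan(&got); err != nil {
		return err
	}
	fmt.Println(got) // login ID as text, e.g. "7"

	return tx.Commit(ctx)
}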
diff --git a/db/embedded/embedded.go b/db/embedded/embedded.go
index 0a174fd6..724d5c96 100644
--- a/db/embedded/embedded.go
+++ b/db/embedded/embedded.go
@@ -1,22 +1,12 @@
/*
- controls embedded postgres database via pg_ctl
- sets locale for messages (LC_MESSAGES) for parsing call outputs
+controls embedded postgres database via pg_ctl
+sets locale for messages (LC_MESSAGES) for parsing call outputs
*/
package embedded
import (
- "bufio"
- "context"
- "errors"
- "fmt"
- "os"
- "os/exec"
"path/filepath"
"r3/config"
- "r3/log"
- "r3/tools"
- "strings"
- "time"
)
var (
@@ -40,141 +30,3 @@ func SetPaths() {
dbBinCtl = filepath.Join(dbBin, "pg_ctl")
dbData = config.File.Paths.EmbeddedDbData
}
-
-func Start() error {
-
- // check for existing embedded database path
- exists, err := tools.Exists(dbData)
- if err != nil {
- return err
- }
- if !exists {
-
- // get database from template
- if err := tools.FileMove(strings.Replace(dbData, "database", "database_template", 1),
- dbData, false); err != nil {
-
- return err
- }
- }
-
- // check embedded database state
- state, err := status()
- if err != nil {
- return err
- }
-
- if state {
- return fmt.Errorf("database already running, another instance is likely active")
- }
- _, err = execWaitFor(dbBinCtl, []string{"start", "-D", dbData,
- fmt.Sprintf(`-o "-p %d"`, config.File.Db.Port)}, []string{msgStarted}, 10)
-
- return err
-}
-
-func Stop() error {
-
- state, err := status()
- if err != nil {
- return err
- }
-
- if !state {
- log.Info("server", "embedded database already stopped")
- return nil
- }
-
- _, err = execWaitFor(dbBinCtl, []string{"stop", "-D", dbData}, []string{msgStopped}, 10)
- return err
-}
-
-func status() (bool, error) {
-
- foundLine, err := execWaitFor(dbBinCtl, []string{"status", "-D", dbData},
- []string{msgState0, msgState1}, 5)
-
- if err != nil {
- return false, err
- }
-
- if strings.Contains(foundLine, msgState1) {
- return true, nil
- }
- return false, nil
-}
-
-// executes call and waits for specified lines to return
-// will return automatically after timeout
-func execWaitFor(call string, args []string, waitFor []string, waitTime int) (string, error) {
-
- ctx, _ := context.WithTimeout(context.Background(), time.Duration(waitTime)*time.Second)
-
- cmd := exec.CommandContext(ctx, call, args...)
- tools.CmdAddSysProgAttrs(cmd)
- cmd.Env = append(os.Environ(), fmt.Sprintf("LC_MESSAGES=%s", locale))
-
- stdout, err := cmd.StdoutPipe()
- if err != nil {
- return "", err
- }
-
- done := make(chan bool)
- var doneErr error = nil
- var doneLine string = ""
-
- // react to call timeout
- go func() {
- for {
- <-ctx.Done()
- doneErr = errors.New("timeout reached")
- done <- true
- return
- }
- }()
-
- // react to new lines from standard output
- go func() {
- if err := cmd.Start(); err != nil {
- doneErr = err
- done <- true
- return
- }
-
- log := []string{}
- buf := bufio.NewReader(stdout)
- for {
- line, _, err := buf.ReadLine()
- if err != nil {
- doneErr = err
- break
- }
- log = append(log, string(line))
-
- // success if expected lines turned up
- for _, waitLine := range waitFor {
-
- if strings.Contains(string(line), waitLine) {
-
- doneLine = waitLine
- done <- true
- return
- }
- }
- }
-
- if len(log) == 0 {
- // nothing turned up
- doneErr = errors.New("output is empty")
- done <- true
- return
- }
-
- // expected lines did not turn up
- doneErr = fmt.Errorf("unexpected output, got: %s", strings.Join(log, ","))
- done <- true
- }()
-
- <-done
- return doneLine, doneErr
-}
diff --git a/db/embedded/embedded_linux.go b/db/embedded/embedded_linux.go
new file mode 100644
index 00000000..595c0858
--- /dev/null
+++ b/db/embedded/embedded_linux.go
@@ -0,0 +1,12 @@
+//go:build linux || darwin
+
+package embedded
+
+import "fmt"
+
+func Start() error {
+ return fmt.Errorf("embedded database is only supported on Windows")
+}
+func Stop() error {
+ return fmt.Errorf("embedded database is only supported on Windows")
+}
diff --git a/db/embedded/embedded_windows.go b/db/embedded/embedded_windows.go
new file mode 100644
index 00000000..f5ebfba5
--- /dev/null
+++ b/db/embedded/embedded_windows.go
@@ -0,0 +1,158 @@
+//go:build windows
+
+package embedded
+
+import (
+ "bufio"
+ "context"
+ "errors"
+ "fmt"
+ "io"
+ "os"
+ "os/exec"
+ "r3/config"
+ "r3/log"
+ "r3/tools"
+ "strings"
+ "syscall"
+ "time"
+)
+
+func Start() error {
+
+ // check for existing embedded database path
+ exists, err := tools.Exists(dbData)
+ if err != nil {
+ return err
+ }
+ if !exists {
+
+ // get database from template
+ if err := tools.FileMove(strings.Replace(dbData, "database", "database_template", 1),
+ dbData, false); err != nil {
+
+ return err
+ }
+ }
+
+ // check embedded database state
+ state, err := status()
+ if err != nil {
+ return err
+ }
+
+ if state {
+ return fmt.Errorf("database already running, another instance is likely active")
+ }
+ _, err = execWaitFor(dbBinCtl, []string{"start", "-D", dbData,
+ fmt.Sprintf(`-o "-p %d"`, config.File.Db.Port)}, []string{msgStarted}, 10)
+
+ return err
+}
+
+func Stop() error {
+
+ state, err := status()
+ if err != nil {
+ return err
+ }
+
+ if !state {
+ log.Info("server", "embedded database already stopped")
+ return nil
+ }
+
+ _, err = execWaitFor(dbBinCtl, []string{"stop", "-D", dbData}, []string{msgStopped}, 10)
+ return err
+}
+
+func status() (bool, error) {
+
+ foundLine, err := execWaitFor(dbBinCtl, []string{"status", "-D", dbData},
+ []string{msgState0, msgState1}, 5)
+
+ if err != nil {
+ return false, err
+ }
+ // returns true if DB server is running
+ return strings.Contains(foundLine, msgState1), nil
+}
+
+// executes the given call and waits for one of the specified lines to appear in its output
+// returns automatically once the timeout is reached
+func execWaitFor(call string, args []string, waitFor []string, waitTime int) (string, error) {
+
+ ctx, _ := context.WithTimeout(context.Background(), time.Duration(waitTime)*time.Second)
+ cmd := exec.CommandContext(ctx, call, args...)
+ tools.CmdAddSysProgAttrs(cmd)
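+ // pin the message locale so the expected status lines can be matched regardless of system language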
+ cmd.Env = append(os.Environ(), fmt.Sprintf("LC_MESSAGES=%s", locale))
+
+ // start in a new process group for clean shutdown, otherwise child processes are killed immediately on SIGINT
+ cmd.SysProcAttr = &syscall.SysProcAttr{
+ CreationFlags: syscall.CREATE_NEW_PROCESS_GROUP,
+ }
+
+ stdout, err := cmd.StdoutPipe()
+ if err != nil {
+ return "", err
+ }
+
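+ // single result channel: whichever goroutine finishes first (timeout, read error or matched line) delivers the result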
+ type chanReturnType struct {
+ err error
+ line string
+ }
+ chanReturn := make(chan chanReturnType)
+
+ // react to call timeout
+ go func() {
+ for {
+ <-ctx.Done()
+ chanReturn <- chanReturnType{err: errors.New("timeout reached")}
+ return
+ }
+ }()
+
+ // react to new lines from standard output
+ go func() {
+ if err := cmd.Start(); err != nil {
+ chanReturn <- chanReturnType{err: err}
+ return
+ }
+
+ buf := bufio.NewReader(stdout)
+ bufLines := []string{}
+ for {
+ line, _, err := buf.ReadLine()
+ if err != nil {
+ if err != io.EOF {
+ // log error if not end-of-file
+ log.Error("server", "failed to read from stdout", err)
+ }
+ break
+ }
+ bufLines = append(bufLines, string(line))
+
+ // success if expected lines turned up
+ for _, waitLine := range waitFor {
+ if strings.Contains(string(line), waitLine) {
+ chanReturn <- chanReturnType{
+ err: nil,
+ line: waitLine,
+ }
+ return
+ }
+ }
+ }
+
+ if len(bufLines) == 0 {
+ // nothing turned up
+ chanReturn <- chanReturnType{err: errors.New("output is empty")}
+ } else {
+ // expected lines did not turn up
+ chanReturn <- chanReturnType{err: fmt.Errorf("unexpected output, got: %s", strings.Join(bufLines, ","))}
+ }
+ }()
+
+ res := <-chanReturn
+ return res.line, res.err
+}
diff --git a/db/initialize/initialize.go b/db/initialize/initialize.go
index d42c18e2..f6c6b721 100644
--- a/db/initialize/initialize.go
+++ b/db/initialize/initialize.go
@@ -1,7 +1,9 @@
package initialize
import (
+ "context"
"fmt"
+ "r3/bruteforce"
"r3/config"
"r3/db"
"r3/db/upgrade"
@@ -12,9 +14,11 @@ import (
)
func PrepareDbIfNew() error {
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutDbTask)
+ defer ctxCanc()
var exists bool
- if err := db.Pool.QueryRow(db.Ctx, `
+ if err := db.Pool.QueryRow(ctx, `
SELECT exists(
SELECT FROM pg_tables
WHERE schemaname = 'instance'
@@ -27,29 +31,29 @@ func PrepareDbIfNew() error {
return nil
}
- tx, err := db.Pool.Begin(db.Ctx)
+ tx, err := db.Pool.Begin(ctx)
if err != nil {
return err
}
- defer tx.Rollback(db.Ctx)
+ defer tx.Rollback(ctx)
- if err := initAppSchema_tx(tx); err != nil {
+ if err := initAppSchema_tx(ctx, tx); err != nil {
return err
}
- if err := initInstanceValues_tx(tx); err != nil {
+ if err := initInstanceValues_tx(ctx, tx); err != nil {
return err
}
// replace database password for embedded database
if config.File.Db.Embedded {
- if err := renewDbUserPw_tx(tx); err != nil {
+ if err := renewDbUserPw_tx(ctx, tx); err != nil {
return err
}
}
// commit changes
- if err := tx.Commit(db.Ctx); err != nil {
+ if err := tx.Commit(ctx); err != nil {
return err
}
@@ -65,23 +69,23 @@ func PrepareDbIfNew() error {
if err := config.LoadFromDb(); err != nil {
return err
}
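+	// apply settings that depend on the freshly loaded configuration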
+ bruteforce.SetConfig()
+ config.ActivateLicense()
+ config.SetLogLevels()
// before doing any more work, upgrade DB if necessary
if err := upgrade.RunIfRequired(); err != nil {
return err
}
- // create initial login
- if err := login.CreateAdmin("admin", "admin"); err != nil {
- return err
- }
- return nil
+ // create initial login last, in case database upgrade is required beforehand
+ return login.CreateAdmin("admin", "admin")
}
-func renewDbUserPw_tx(tx pgx.Tx) error {
+func renewDbUserPw_tx(ctx context.Context, tx pgx.Tx) error {
newPass := tools.RandStringRunes(48)
- _, err := tx.Exec(db.Ctx, fmt.Sprintf(`ALTER USER %s WITH PASSWORD '%s'`,
+ _, err := tx.Exec(ctx, fmt.Sprintf(`ALTER USER %s WITH PASSWORD '%s'`,
config.File.Db.User, newPass))
if err != nil {
@@ -96,73 +100,88 @@ func renewDbUserPw_tx(tx pgx.Tx) error {
return nil
}
-// for later inits
-/*
- create default login template
- INSERT INTO instance.login_template (name)
- VALUES ('GLOBAL');
-
- INSERT INTO instance.login_setting (login_template_id, language_code, date_format,
- sunday_first_dow, font_size, borders_all, borders_corner, page_limit,
- header_captions, spacing, dark, compact, hint_update_version,
- mobile_scroll_form, warn_unsaved, menu_colored, pattern, font_family,
- tab_remember, field_clean)
- SELECT id, 'en_us', 'Y-m-d', true, 100, false, 'keep', 2000, true, 3, false,
- true, 0, true, true, false, 'bubbles', 'helvetica', true, true
- FROM instance.login_template
- WHERE name = 'GLOBAL';
-*/
-
-// instance initalized to 3.0
-func initInstanceValues_tx(tx pgx.Tx) error {
+// instance initialized to 3.10
+func initInstanceValues_tx(ctx context.Context, tx pgx.Tx) error {
appName, appNameShort := config.GetAppName()
- _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ _, err := tx.Exec(ctx, fmt.Sprintf(`
+ -- default login template
+ INSERT INTO instance.login_template (name) VALUES ('GLOBAL');
+
+ INSERT INTO instance.login_setting (
+ login_template_id, dark, date_format, header_captions, hint_update_version, language_code, mobile_scroll_form, font_family,
+ font_size, pattern, spacing, sunday_first_dow, warn_unsaved, tab_remember, borders_squared, color_classic_mode, color_header,
+ color_menu, color_header_single, header_modules, list_colored, list_spaced, number_sep_decimal, number_sep_thousand,
+ bool_as_icon, form_actions_align, shadows_inputs)
+ VALUES (
+ (
+ SELECT id
+ FROM instance.login_template
+ WHERE name = 'GLOBAL'
+ ), false, 'Y-m-d', true, 0, 'en_us', true, 'helvetica',
+ 100, 'bubbles', 3, true, true, true, false, false, NULL,
+ NULL, false, true, true, false, '.', ',',
+ true, 'center', true);
+
-- config
INSERT INTO instance.config (name,value) VALUES
+ ('adminMails',''),
('appName','%s'),
('appNameShort','%s'),
- ('backupDir',''),
+ ('backupCountDaily','7'),
+ ('backupCountMonthly','3'),
+ ('backupCountWeekly','4'),
('backupDaily','0'),
+ ('backupDir',''),
('backupMonthly','0'),
('backupWeekly','0'),
- ('backupCountDaily','7'),
- ('backupCountWeekly','4'),
- ('backupCountMonthly','3'),
('bruteforceAttempts','50'),
('bruteforceProtection','1'),
('builderMode','0'),
('clusterNodeMissingAfter','180'),
('companyColorHeader',''),
('companyColorLogin',''),
+ ('companyLoginImage',''),
('companyLogo',''),
('companyLogoUrl',''),
('companyName',''),
('companyWelcome',''),
+ ('css',''),
('dbTimeoutCsv','120'),
('dbTimeoutDataRest','60'),
('dbTimeoutDataWs','300'),
('dbTimeoutIcs','30'),
- ('dbVersionCut','3.0'),
- ('defaultLanguageCode','en_us'),
+ ('dbVersionCut','3.10'),
+ ('filesKeepDaysDeleted','90'),
+ ('fileVersionsKeepCount','30'),
+ ('fileVersionsKeepDays','90'),
+ ('iconPwa1',''),
+ ('iconPwa2',''),
('icsDaysPost','365'),
('icsDaysPre','365'),
('icsDownload','1'),
+ ('imagerThumbWidth','300'),
('instanceId',''),
('licenseFile',''),
- ('logApplication','2'),
+ ('logApi','2'),
('logBackup','2'),
('logCache','2'),
('logCluster','2'),
('logCsv','2'),
+ ('logImager','2'),
+ ('loginBackgrounds','[2,5,6,9,11]'),
('logLdap','2'),
('logMail','2'),
+ ('logModule','2'),
('logScheduler','2'),
('logServer','2'),
- ('logTransfer','2'),
('logsKeepDays','90'),
+ ('logTransfer','2'),
+ ('logWebsocket','2'),
+ ('mailTrafficKeepDays','90'),
('productionMode','0'),
+ ('proxyUrl',''),
('publicHostName','localhost'),
('pwForceDigit','1'),
('pwForceLower','1'),
@@ -176,8 +195,12 @@ func initInstanceValues_tx(tx pgx.Tx) error {
('repoSkipVerify','0'),
('repoUrl','https://store.rei3.de'),
('repoUser','repo_public'),
- ('schemaTimestamp','0'),
+ ('systemMsgDate0','0'),
+ ('systemMsgDate1','0'),
+ ('systemMsgMaintenance','0'),
+ ('systemMsgText',''),
('tokenExpiryHours','168'),
+ ('tokenKeepEnable','1'),
('tokenSecret',''),
('updateCheckUrl','https://rei3.de/version'),
('updateCheckVersion','');
@@ -186,55 +209,65 @@ func initInstanceValues_tx(tx pgx.Tx) error {
INSERT INTO instance.task
(name,interval_seconds,cluster_master_only,embedded_only,active,active_only)
VALUES
+ ('adminMails',86400,true,false,true,false),
+ ('backupRun',3600,true,false,true,false),
('cleanupBruteforce',86400,false,false,true,false),
('cleanupDataLogs',86400,true,false,true,false),
+ ('cleanupFiles',86400,true,false,true,false),
('cleanupLogs',86400,true,false,true,false),
+ ('cleanupMailTraffic',604800,true,false,true,false),
('cleanupTempDir',86400,true,false,true,false),
- ('cleanupFiles',86400,true,false,true,false),
('clusterCheckIn',60,false,false,true,true),
('clusterProcessEvents',5,false,false,true,true),
- ('embeddedBackup',3600,true,true,true,false),
+ ('dbOptimize',2580000,true,false,true,false),
('httpCertRenew',86400,false,false,true,false),
('importLdapLogins',900,true,false,true,false),
('mailAttach',30,true,false,true,false),
('mailRetrieve',60,true,false,true,false),
('mailSend',10,true,false,true,false),
('repoCheck',86400,true,false,true,false),
+ ('restExecute',15,true,false,true,false),
+ ('systemMsgMaintenance',30,true,false,true,true),
('updateCheck',86400,true,false,true,false);
INSERT INTO instance.schedule
(task_name,date_attempt,date_success)
VALUES
+ ('adminMails',0,0),
+ ('backupRun',0,0),
('cleanupBruteforce',0,0),
('cleanupDataLogs',0,0),
+ ('cleanupFiles',0,0),
('cleanupLogs',0,0),
+ ('cleanupMailTraffic',0,0),
('cleanupTempDir',0,0),
- ('cleanupFiles',0,0),
('clusterCheckIn',0,0),
('clusterProcessEvents',0,0),
- ('embeddedBackup',0,0),
+ ('dbOptimize',0,0),
('httpCertRenew',0,0),
('importLdapLogins',0,0),
('mailAttach',0,0),
('mailRetrieve',0,0),
('mailSend',0,0),
('repoCheck',0,0),
+ ('restExecute',0,0),
+ ('systemMsgMaintenance',0,0),
('updateCheck',0,0);
`, appName, appNameShort))
return err
}
-// app initalized to 3.0
-func initAppSchema_tx(tx pgx.Tx) error {
- _, err := tx.Exec(db.Ctx, `
+// app initialized to 3.10
+func initAppSchema_tx(ctx context.Context, tx pgx.Tx) error {
+ _, err := tx.Exec(ctx, `
--
-- PostgreSQL database dump
--
-- Dumped from database version 13.7
--- Dumped by pg_dump version 14.3
+-- Dumped by pg_dump version 17.1
--- Started on 2022-07-12 12:35:48
+-- Started on 2025-02-05 11:27:36
SET statement_timeout = 0;
SET lock_timeout = 0;
@@ -256,7 +289,7 @@ CREATE SCHEMA app;
--
--- TOC entry 5 (class 2615 OID 16388)
+-- TOC entry 7 (class 2615 OID 16388)
-- Name: instance; Type: SCHEMA; Schema: -; Owner: -
--
@@ -264,7 +297,7 @@ CREATE SCHEMA instance;
--
--- TOC entry 9 (class 2615 OID 18448)
+-- TOC entry 8 (class 2615 OID 16389)
-- Name: instance_cluster; Type: SCHEMA; Schema: -; Owner: -
--
@@ -272,7 +305,7 @@ CREATE SCHEMA instance_cluster;
--
--- TOC entry 4 (class 2615 OID 18338)
+-- TOC entry 9 (class 2615 OID 16390)
-- Name: instance_e2ee; Type: SCHEMA; Schema: -; Owner: -
--
@@ -280,430 +313,667 @@ CREATE SCHEMA instance_e2ee;
--
--- TOC entry 726 (class 1247 OID 16390)
+-- TOC entry 10 (class 2615 OID 18390)
+-- Name: instance_file; Type: SCHEMA; Schema: -; Owner: -
+--
+
+CREATE SCHEMA instance_file;
+
+
+--
+-- TOC entry 4 (class 2615 OID 2200)
+-- Name: public; Type: SCHEMA; Schema: -; Owner: -
+--
+
+-- *not* creating schema, since initdb creates it
+
+
+--
+-- TOC entry 775 (class 1247 OID 16392)
-- Name: aggregator; Type: TYPE; Schema: app; Owner: -
--
CREATE TYPE app.aggregator AS ENUM (
- 'avg',
- 'count',
- 'list',
- 'max',
- 'min',
- 'sum',
- 'record',
- 'array',
- 'json'
+ 'avg',
+ 'count',
+ 'list',
+ 'max',
+ 'min',
+ 'sum',
+ 'record',
+ 'array',
+ 'json'
);
--
--- TOC entry 729 (class 1247 OID 16406)
+-- TOC entry 778 (class 1247 OID 16412)
-- Name: attribute_content; Type: TYPE; Schema: app; Owner: -
--
CREATE TYPE app.attribute_content AS ENUM (
- 'integer',
- 'bigint',
- 'numeric',
- 'real',
- 'double precision',
- 'varchar',
- 'text',
- 'boolean',
- '1:1',
- 'n:1',
- 'files'
+ 'integer',
+ 'bigint',
+ 'numeric',
+ 'real',
+ 'double precision',
+ 'varchar',
+ 'text',
+ 'boolean',
+ '1:1',
+ 'n:1',
+ 'files',
+ 'uuid',
+ 'regconfig',
+ '1:n'
+);
+
+
+--
+-- TOC entry 1129 (class 1247 OID 18590)
+-- Name: attribute_content_use; Type: TYPE; Schema: app; Owner: -
+--
+
+CREATE TYPE app.attribute_content_use AS ENUM (
+ 'default',
+ 'textarea',
+ 'richtext',
+ 'date',
+ 'datetime',
+ 'time',
+ 'color',
+ 'iframe',
+ 'drawing',
+ 'barcode'
);
--
--- TOC entry 732 (class 1247 OID 16430)
+-- TOC entry 781 (class 1247 OID 16436)
-- Name: attribute_fk_actions; Type: TYPE; Schema: app; Owner: -
--
CREATE TYPE app.attribute_fk_actions AS ENUM (
- 'NO ACTION',
- 'RESTRICT',
- 'CASCADE',
- 'SET NULL',
- 'SET DEFAULT'
+ 'NO ACTION',
+ 'RESTRICT',
+ 'CASCADE',
+ 'SET NULL',
+ 'SET DEFAULT'
);
--
--- TOC entry 735 (class 1247 OID 16442)
+-- TOC entry 1121 (class 1247 OID 18507)
-- Name: caption_content; Type: TYPE; Schema: app; Owner: -
--
CREATE TYPE app.caption_content AS ENUM (
- 'attributeTitle',
- 'columnTitle',
- 'fieldHelp',
- 'fieldTitle',
- 'formHelp',
- 'formTitle',
- 'menuTitle',
- 'moduleHelp',
- 'moduleTitle',
- 'queryChoiceTitle',
- 'roleDesc',
- 'roleTitle',
- 'pgFunctionTitle',
- 'pgFunctionDesc',
- 'loginFormTitle',
- 'jsFunctionTitle',
- 'jsFunctionDesc'
-);
-
-
---
--- TOC entry 1046 (class 1247 OID 18374)
+ 'articleBody',
+ 'articleTitle',
+ 'attributeTitle',
+ 'columnTitle',
+ 'fieldHelp',
+ 'fieldTitle',
+ 'formTitle',
+ 'menuTitle',
+ 'moduleTitle',
+ 'queryChoiceTitle',
+ 'roleDesc',
+ 'roleTitle',
+ 'pgFunctionTitle',
+ 'pgFunctionDesc',
+ 'loginFormTitle',
+ 'jsFunctionTitle',
+ 'jsFunctionDesc',
+ 'tabTitle',
+ 'widgetTitle',
+ 'formActionTitle',
+ 'clientEventTitle',
+ 'menuTabTitle'
+);
+
+
+--
+-- TOC entry 1216 (class 1247 OID 19252)
+-- Name: client_event_action; Type: TYPE; Schema: app; Owner: -
+--
+
+CREATE TYPE app.client_event_action AS ENUM (
+ 'callJsFunction',
+ 'callPgFunction'
+);
+
+
+--
+-- TOC entry 1219 (class 1247 OID 19258)
+-- Name: client_event_argument; Type: TYPE; Schema: app; Owner: -
+--
+
+CREATE TYPE app.client_event_argument AS ENUM (
+ 'clipboard',
+ 'hostname',
+ 'username',
+ 'windowTitle'
+);
+
+
+--
+-- TOC entry 1222 (class 1247 OID 19268)
+-- Name: client_event_event; Type: TYPE; Schema: app; Owner: -
+--
+
+CREATE TYPE app.client_event_event AS ENUM (
+ 'onConnect',
+ 'onDisconnect',
+ 'onHotkey'
+);
+
+
+--
+-- TOC entry 1225 (class 1247 OID 19276)
+-- Name: client_event_hotkey_modifier; Type: TYPE; Schema: app; Owner: -
+--
+
+CREATE TYPE app.client_event_hotkey_modifier AS ENUM (
+ 'ALT',
+ 'CMD',
+ 'CTRL',
+ 'SHIFT'
+);
+
+
+--
+-- TOC entry 784 (class 1247 OID 16484)
-- Name: collection_consumer_content; Type: TYPE; Schema: app; Owner: -
--
CREATE TYPE app.collection_consumer_content AS ENUM (
- 'fieldDataDefault',
- 'fieldFilterSelector',
- 'headerDisplay',
- 'menuDisplay'
+ 'fieldDataDefault',
+ 'fieldFilterSelector',
+ 'headerDisplay',
+ 'menuDisplay',
+ 'widgetDisplay'
+);
+
+
+--
+-- TOC entry 1269 (class 1247 OID 19548)
+-- Name: collection_consumer_flag; Type: TYPE; Schema: app; Owner: -
+--
+
+CREATE TYPE app.collection_consumer_flag AS ENUM (
+ 'multiValue',
+ 'noDisplayEmpty',
+ 'showRowCount'
+);
+
+
+--
+-- TOC entry 1164 (class 1247 OID 18828)
+-- Name: column_style; Type: TYPE; Schema: app; Owner: -
+--
+
+CREATE TYPE app.column_style AS ENUM (
+ 'bold',
+ 'italic',
+ 'alignEnd',
+ 'alignMid',
+ 'clipboard',
+ 'vertical',
+ 'wrap',
+ 'monospace',
+ 'previewLarge',
+ 'boolAtrIcon'
);
--
--- TOC entry 738 (class 1247 OID 16468)
+-- TOC entry 787 (class 1247 OID 16494)
-- Name: condition_connector; Type: TYPE; Schema: app; Owner: -
--
CREATE TYPE app.condition_connector AS ENUM (
- 'AND',
- 'OR'
+ 'AND',
+ 'OR'
);
--
--- TOC entry 741 (class 1247 OID 16474)
+-- TOC entry 790 (class 1247 OID 16500)
-- Name: condition_operator; Type: TYPE; Schema: app; Owner: -
--
CREATE TYPE app.condition_operator AS ENUM (
- '=',
- '<>',
- '<',
- '>',
- '<=',
- '>=',
- 'IS NULL',
- 'IS NOT NULL',
- 'LIKE',
- 'ILIKE',
- 'NOT LIKE',
- 'NOT ILIKE',
- '= ANY',
- '<> ALL',
- '@>',
- '<@',
- '&&'
-);
-
-
---
--- TOC entry 744 (class 1247 OID 16504)
+ '=',
+ '<>',
+ '<',
+ '>',
+ '<=',
+ '>=',
+ 'IS NULL',
+ 'IS NOT NULL',
+ 'LIKE',
+ 'ILIKE',
+ 'NOT LIKE',
+ 'NOT ILIKE',
+ '= ANY',
+ '<> ALL',
+ '@>',
+ '<@',
+ '&&',
+ '~',
+ '~*',
+ '!~',
+ '!~*'
+);
+
+
+--
+-- TOC entry 1132 (class 1247 OID 18607)
-- Name: data_display; Type: TYPE; Schema: app; Owner: -
--
CREATE TYPE app.data_display AS ENUM (
- 'color',
- 'date',
- 'datetime',
- 'default',
- 'email',
- 'gallery',
- 'hidden',
- 'login',
- 'phone',
- 'richtext',
- 'slider',
- 'textarea',
- 'time',
- 'url',
- 'password'
-);
-
-
---
--- TOC entry 747 (class 1247 OID 16534)
+ 'default',
+ 'email',
+ 'gallery',
+ 'hidden',
+ 'login',
+ 'password',
+ 'phone',
+ 'slider',
+ 'url',
+ 'rating'
+);
+
+
+--
+-- TOC entry 793 (class 1247 OID 16568)
-- Name: field_calendar_gantt_steps; Type: TYPE; Schema: app; Owner: -
--
CREATE TYPE app.field_calendar_gantt_steps AS ENUM (
- 'days',
- 'hours'
+ 'days',
+ 'hours'
);
--
--- TOC entry 750 (class 1247 OID 16540)
+-- TOC entry 796 (class 1247 OID 16574)
-- Name: field_container_align_content; Type: TYPE; Schema: app; Owner: -
--
CREATE TYPE app.field_container_align_content AS ENUM (
- 'center',
- 'flex-end',
- 'flex-start',
- 'space-between',
- 'space-around',
- 'stretch'
+ 'center',
+ 'flex-end',
+ 'flex-start',
+ 'space-between',
+ 'space-around',
+ 'stretch',
+ 'space-evenly'
);
--
--- TOC entry 753 (class 1247 OID 16554)
+-- TOC entry 799 (class 1247 OID 16588)
-- Name: field_container_align_items; Type: TYPE; Schema: app; Owner: -
--
CREATE TYPE app.field_container_align_items AS ENUM (
- 'baseline',
- 'center',
- 'flex-end',
- 'flex-start',
- 'stretch'
+ 'baseline',
+ 'center',
+ 'flex-end',
+ 'flex-start',
+ 'stretch'
);
--
--- TOC entry 756 (class 1247 OID 16566)
+-- TOC entry 802 (class 1247 OID 16600)
-- Name: field_container_direction; Type: TYPE; Schema: app; Owner: -
--
CREATE TYPE app.field_container_direction AS ENUM (
- 'column',
- 'row'
+ 'column',
+ 'row'
);
--
--- TOC entry 759 (class 1247 OID 16572)
+-- TOC entry 805 (class 1247 OID 16606)
-- Name: field_container_justify_content; Type: TYPE; Schema: app; Owner: -
--
CREATE TYPE app.field_container_justify_content AS ENUM (
- 'flex-start',
- 'flex-end',
- 'center',
- 'space-between',
- 'space-around',
- 'space-evenly'
+ 'flex-start',
+ 'flex-end',
+ 'center',
+ 'space-between',
+ 'space-around',
+ 'space-evenly'
);
--
--- TOC entry 762 (class 1247 OID 16586)
+-- TOC entry 808 (class 1247 OID 16620)
-- Name: field_content; Type: TYPE; Schema: app; Owner: -
--
CREATE TYPE app.field_content AS ENUM (
- 'button',
- 'calendar',
- 'container',
- 'data',
- 'header',
- 'list',
- 'chart'
+ 'button',
+ 'calendar',
+ 'container',
+ 'data',
+ 'header',
+ 'list',
+ 'chart',
+ 'tabs',
+ 'kanban',
+ 'variable'
);
--
--- TOC entry 765 (class 1247 OID 16600)
--- Name: field_list_layout; Type: TYPE; Schema: app; Owner: -
+-- TOC entry 1265 (class 1247 OID 19537)
+-- Name: field_flag; Type: TYPE; Schema: app; Owner: -
--
-CREATE TYPE app.field_list_layout AS ENUM (
- 'cards',
- 'table'
+CREATE TYPE app.field_flag AS ENUM (
+ 'alignEnd',
+ 'hideInputs',
+ 'monospace'
);
--
--- TOC entry 768 (class 1247 OID 16606)
--- Name: field_state; Type: TYPE; Schema: app; Owner: -
+-- TOC entry 811 (class 1247 OID 16636)
+-- Name: field_list_layout; Type: TYPE; Schema: app; Owner: -
--
-CREATE TYPE app.field_state AS ENUM (
- 'default',
- 'hidden',
- 'readonly',
- 'required',
- 'optional'
+CREATE TYPE app.field_list_layout AS ENUM (
+ 'cards',
+ 'table'
);
--
--- TOC entry 1035 (class 1247 OID 16638)
+-- TOC entry 814 (class 1247 OID 16654)
-- Name: filter_side_content; Type: TYPE; Schema: app; Owner: -
--
CREATE TYPE app.filter_side_content AS ENUM (
- 'attribute',
- 'field',
- 'javascript',
- 'languageCode',
- 'login',
- 'record',
- 'recordNew',
- 'role',
- 'subQuery',
- 'true',
- 'value',
- 'preset',
- 'collection',
- 'fieldChanged'
+ 'attribute',
+ 'field',
+ 'javascript',
+ 'languageCode',
+ 'login',
+ 'record',
+ 'recordNew',
+ 'role',
+ 'subQuery',
+ 'true',
+ 'value',
+ 'preset',
+ 'collection',
+ 'fieldChanged',
+ 'nowDate',
+ 'nowDatetime',
+ 'nowTime',
+ 'fieldValid',
+ 'formChanged',
+ 'variable',
+ 'getter',
+ 'formState'
);
--
--- TOC entry 1023 (class 1247 OID 18159)
+-- TOC entry 817 (class 1247 OID 16684)
-- Name: form_function_event; Type: TYPE; Schema: app; Owner: -
--
CREATE TYPE app.form_function_event AS ENUM (
- 'open',
- 'save',
- 'delete'
+ 'open',
+ 'save',
+ 'delete'
+);
+
+
+--
+-- TOC entry 1167 (class 1247 OID 18838)
+-- Name: open_form_context; Type: TYPE; Schema: app; Owner: -
+--
+
+CREATE TYPE app.open_form_context AS ENUM (
+ 'bulk'
+);
+
+
+--
+-- TOC entry 1170 (class 1247 OID 18842)
+-- Name: open_form_pop_up_type; Type: TYPE; Schema: app; Owner: -
+--
+
+CREATE TYPE app.open_form_pop_up_type AS ENUM (
+ 'float',
+ 'inline'
);
--
--- TOC entry 771 (class 1247 OID 16616)
+-- TOC entry 820 (class 1247 OID 16692)
-- Name: pg_function_schedule_interval; Type: TYPE; Schema: app; Owner: -
--
CREATE TYPE app.pg_function_schedule_interval AS ENUM (
- 'seconds',
- 'minutes',
- 'hours',
- 'days',
- 'weeks',
- 'months',
- 'years',
- 'once'
+ 'seconds',
+ 'minutes',
+ 'hours',
+ 'days',
+ 'weeks',
+ 'months',
+ 'years',
+ 'once'
+);
+
+
+--
+-- TOC entry 1254 (class 1247 OID 19434)
+-- Name: pg_function_volatility; Type: TYPE; Schema: app; Owner: -
+--
+
+CREATE TYPE app.pg_function_volatility AS ENUM (
+ 'VOLATILE',
+ 'STABLE',
+ 'IMMUTABLE'
);
--
--- TOC entry 774 (class 1247 OID 16632)
+-- TOC entry 1147 (class 1247 OID 18742)
+-- Name: pg_index_method; Type: TYPE; Schema: app; Owner: -
+--
+
+CREATE TYPE app.pg_index_method AS ENUM (
+ 'BTREE',
+ 'GIN'
+);
+
+
+--
+-- TOC entry 823 (class 1247 OID 16710)
-- Name: pg_trigger_fires; Type: TYPE; Schema: app; Owner: -
--
CREATE TYPE app.pg_trigger_fires AS ENUM (
- 'AFTER',
- 'BEFORE'
+ 'AFTER',
+ 'BEFORE'
);
--
--- TOC entry 777 (class 1247 OID 16662)
+-- TOC entry 826 (class 1247 OID 16716)
-- Name: query_join_connector; Type: TYPE; Schema: app; Owner: -
--
CREATE TYPE app.query_join_connector AS ENUM (
- 'INNER',
- 'LEFT',
- 'RIGHT',
- 'FULL',
- 'CROSS'
+ 'INNER',
+ 'LEFT',
+ 'RIGHT',
+ 'FULL',
+ 'CROSS'
);
--
--- TOC entry 780 (class 1247 OID 16674)
+-- TOC entry 829 (class 1247 OID 16728)
-- Name: role_access_content; Type: TYPE; Schema: app; Owner: -
--
CREATE TYPE app.role_access_content AS ENUM (
- 'none',
- 'read',
- 'write'
+ 'none',
+ 'read',
+ 'write'
);
--
--- TOC entry 1043 (class 1247 OID 18358)
+-- TOC entry 832 (class 1247 OID 16736)
-- Name: role_content; Type: TYPE; Schema: app; Owner: -
--
CREATE TYPE app.role_content AS ENUM (
- 'admin',
- 'everyone',
- 'other',
- 'user'
+ 'admin',
+ 'everyone',
+ 'other',
+ 'user'
+);
+
+
+--
+-- TOC entry 1106 (class 1247 OID 16642)
+-- Name: state_effect; Type: TYPE; Schema: app; Owner: -
+--
+
+CREATE TYPE app.state_effect AS ENUM (
+ 'default',
+ 'hidden',
+ 'readonly',
+ 'required',
+ 'optional'
+);
+
+
+--
+-- TOC entry 1209 (class 1247 OID 19164)
+-- Name: admin_mail_reason; Type: TYPE; Schema: instance; Owner: -
+--
+
+CREATE TYPE instance.admin_mail_reason AS ENUM (
+ 'licenseExpiration',
+ 'oauthClientExpiration'
+);
+
+
+--
+-- TOC entry 1283 (class 1247 OID 19655)
+-- Name: align_horizontal; Type: TYPE; Schema: instance; Owner: -
+--
+
+CREATE TYPE instance.align_horizontal AS ENUM (
+ 'left',
+ 'center',
+ 'right'
);
--
--- TOC entry 783 (class 1247 OID 16682)
+-- TOC entry 1103 (class 1247 OID 18393)
+-- Name: file_meta; Type: TYPE; Schema: instance; Owner: -
+--
+
+CREATE TYPE instance.file_meta AS (
+ id uuid,
+ login_id_creator integer,
+ hash text,
+ name text,
+ size_kb integer,
+ version integer,
+ date_change bigint,
+ date_delete bigint,
+ user_id_creator integer
+);
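+
+-- metadata record returned (as an array) by instance.files_get()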
+
+
+--
+-- TOC entry 835 (class 1247 OID 16746)
-- Name: log_context; Type: TYPE; Schema: instance; Owner: -
--
CREATE TYPE instance.log_context AS ENUM (
- 'backup',
- 'cache',
- 'csv',
- 'ldap',
- 'mail',
- 'scheduler',
- 'server',
- 'transfer',
- 'module',
- 'cluster'
+ 'backup',
+ 'cache',
+ 'csv',
+ 'ldap',
+ 'mail',
+ 'scheduler',
+ 'server',
+ 'transfer',
+ 'module',
+ 'cluster',
+ 'imager',
+ 'websocket',
+ 'api'
);
--
--- TOC entry 786 (class 1247 OID 16700)
--- Name: login_setting_border_corner; Type: TYPE; Schema: instance; Owner: -
+-- TOC entry 1236 (class 1247 OID 19365)
+-- Name: login_session_device; Type: TYPE; Schema: instance; Owner: -
--
-CREATE TYPE instance.login_setting_border_corner AS ENUM (
- 'keep',
- 'rounded',
- 'squared'
+CREATE TYPE instance.login_session_device AS ENUM (
+ 'browser',
+ 'fatClient'
);
--
--- TOC entry 1051 (class 1247 OID 18418)
+-- TOC entry 838 (class 1247 OID 16776)
-- Name: login_setting_font_family; Type: TYPE; Schema: instance; Owner: -
--
CREATE TYPE instance.login_setting_font_family AS ENUM (
- 'calibri',
- 'comic_sans_ms',
- 'consolas',
- 'georgia',
- 'helvetica',
- 'lucida_console',
- 'segoe_script',
- 'segoe_ui',
- 'times_new_roman',
- 'trebuchet_ms',
- 'verdana'
+ 'calibri',
+ 'comic_sans_ms',
+ 'consolas',
+ 'georgia',
+ 'helvetica',
+ 'lucida_console',
+ 'segoe_script',
+ 'segoe_ui',
+ 'times_new_roman',
+ 'trebuchet_ms',
+ 'verdana'
);
--
--- TOC entry 1054 (class 1247 OID 18442)
+-- TOC entry 841 (class 1247 OID 16800)
-- Name: login_setting_pattern; Type: TYPE; Schema: instance; Owner: -
--
CREATE TYPE instance.login_setting_pattern AS ENUM (
- 'bubbles',
- 'waves'
+ 'bubbles',
+ 'waves',
+ 'circuits',
+ 'cubes',
+ 'triangles'
);
--
--- TOC entry 989 (class 1247 OID 17832)
+-- TOC entry 844 (class 1247 OID 16807)
-- Name: mail; Type: TYPE; Schema: instance; Owner: -
--
@@ -718,53 +988,164 @@ CREATE TYPE instance.mail AS (
--
--- TOC entry 992 (class 1247 OID 17844)
+-- TOC entry 1173 (class 1247 OID 18852)
+-- Name: mail_account_auth_method; Type: TYPE; Schema: instance; Owner: -
+--
+
+CREATE TYPE instance.mail_account_auth_method AS ENUM (
+ 'plain',
+ 'login',
+ 'xoauth2'
+);
+
+
+--
+-- TOC entry 847 (class 1247 OID 16809)
-- Name: mail_account_mode; Type: TYPE; Schema: instance; Owner: -
--
CREATE TYPE instance.mail_account_mode AS ENUM (
- 'imap',
- 'smtp'
+ 'imap',
+ 'smtp'
+);
+
+
+--
+-- TOC entry 1153 (class 1247 OID 18771)
+-- Name: rest_method; Type: TYPE; Schema: instance; Owner: -
+--
+
+CREATE TYPE instance.rest_method AS ENUM (
+ 'DELETE',
+ 'GET',
+ 'PATCH',
+ 'POST',
+ 'PUT'
);
--
--- TOC entry 789 (class 1247 OID 16711)
+-- TOC entry 850 (class 1247 OID 16814)
-- Name: token_fixed_context; Type: TYPE; Schema: instance; Owner: -
--
CREATE TYPE instance.token_fixed_context AS ENUM (
- 'ics'
+ 'ics',
+ 'client',
+ 'totp'
+);
+
+
+--
+-- TOC entry 1247 (class 1247 OID 19405)
+-- Name: user_data; Type: TYPE; Schema: instance; Owner: -
+--
+
+CREATE TYPE instance.user_data AS (
+ id integer,
+ is_active boolean,
+ is_admin boolean,
+ is_limited boolean,
+ is_public boolean,
+ username character varying(128),
+ department character varying(512),
+ email character varying(512),
+ location character varying(512),
+ name_display character varying(512),
+ name_fore character varying(512),
+ name_sur character varying(512),
+ notes character varying(8196),
+ organization character varying(512),
+ phone_fax character varying(512),
+ phone_landline character varying(512),
+ phone_mobile character varying(512)
+);
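+
+-- presumably the payload handed to a module's login-sync handler by instance.user_sync, together with the event type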
+
+
+--
+-- TOC entry 1193 (class 1247 OID 19018)
+-- Name: widget_content; Type: TYPE; Schema: instance; Owner: -
+--
+
+CREATE TYPE instance.widget_content AS ENUM (
+ 'moduleWidget',
+ 'systemModuleMenu'
);
--
--- TOC entry 1057 (class 1247 OID 18450)
+-- TOC entry 853 (class 1247 OID 16818)
-- Name: node_event_content; Type: TYPE; Schema: instance_cluster; Owner: -
--
CREATE TYPE instance_cluster.node_event_content AS ENUM (
- 'collectionUpdated',
- 'configChanged',
- 'loginDisabled',
- 'loginReauthorized',
- 'loginReauthorizedAll',
- 'masterAssigned',
- 'schemaChanged',
- 'shutdownTriggered',
- 'tasksChanged',
- 'taskTriggered'
+ 'collectionUpdated',
+ 'configChanged',
+ 'loginDisabled',
+ 'loginReauthorized',
+ 'loginReauthorizedAll',
+ 'masterAssigned',
+ 'schemaChanged',
+ 'shutdownTriggered',
+ 'tasksChanged',
+ 'taskTriggered',
+ 'filesCopied',
+ 'fileRequested',
+ 'jsFunctionCalled',
+ 'clientEventsChanged',
+ 'keystrokesRequested'
);
--
--- TOC entry 307 (class 1255 OID 17881)
+-- TOC entry 347 (class 1255 OID 18727)
+-- Name: get_preset_ids_inside_queries(uuid[]); Type: FUNCTION; Schema: app; Owner: -
+--
+
+CREATE FUNCTION app.get_preset_ids_inside_queries(query_ids_in uuid[]) RETURNS uuid[]
+ LANGUAGE plpgsql IMMUTABLE
+ AS $$
+ DECLARE
+ preset_ids UUID[];
+ query_ids_sub UUID[];
+ BEGIN
+ IF array_length(query_ids_in,1) = 0 THEN
+ RETURN preset_ids;
+ END IF;
+
+ -- collect preset directly
+ SELECT ARRAY_AGG(preset_id) INTO preset_ids
+ FROM app.query_filter_side
+ WHERE query_id = ANY(query_ids_in)
+ AND content = 'preset';
+
+ -- collect presets from filters inside sub queries
+ SELECT ARRAY_AGG(q.id) INTO query_ids_sub
+ FROM app.query_filter_side AS s
+ JOIN app.query AS q
+ ON q.query_filter_query_id = s.query_id
+ AND q.query_filter_position = s.query_filter_position
+ AND q.query_filter_side = s.side
+ WHERE s.query_id = ANY(query_ids_in)
+ AND s.content = 'subQuery';
+
+ IF array_length(query_ids_sub,1) <> 0 THEN
+ preset_ids := array_cat(preset_ids, app.get_preset_ids_inside_queries(query_ids_sub));
+ END IF;
+
+ RETURN preset_ids;
+ END;
+ $$;
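+
+-- usage sketch (hypothetical IDs): resolve preset references for the given queries, sub queries included
+-- SELECT app.get_preset_ids_inside_queries(ARRAY['33333333-3333-3333-3333-333333333333']::uuid[]);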
+
+
+--
+-- TOC entry 319 (class 1255 OID 16839)
-- Name: abort_show_message(text); Type: FUNCTION; Schema: instance; Owner: -
--
CREATE FUNCTION instance.abort_show_message(message text) RETURNS void
- LANGUAGE plpgsql
- AS $$
+ LANGUAGE plpgsql
+ AS $$
DECLARE
BEGIN
RAISE EXCEPTION 'R3_MSG: %', message;
@@ -773,13 +1154,13 @@ CREATE FUNCTION instance.abort_show_message(message text) RETURNS void
--
--- TOC entry 313 (class 1255 OID 18339)
+-- TOC entry 320 (class 1255 OID 16840)
-- Name: clean_up_e2ee_keys(integer, uuid, integer[]); Type: FUNCTION; Schema: instance; Owner: -
--
CREATE FUNCTION instance.clean_up_e2ee_keys(login_id integer, relation_id uuid, record_ids_access integer[]) RETURNS void
- LANGUAGE plpgsql
- AS $_$
+ LANGUAGE plpgsql
+ AS $_$
DECLARE
BEGIN
EXECUTE '
@@ -795,13 +1176,137 @@ CREATE FUNCTION instance.clean_up_e2ee_keys(login_id integer, relation_id uuid,
--
--- TOC entry 283 (class 1255 OID 16713)
+-- TOC entry 345 (class 1255 OID 18395)
+-- Name: file_link(uuid, text, uuid, bigint); Type: FUNCTION; Schema: instance; Owner: -
+--
+
+CREATE FUNCTION instance.file_link(file_id uuid, file_name text, attribute_id uuid, record_id bigint) RETURNS void
+ LANGUAGE plpgsql
+ AS $_$
+ DECLARE
+ BEGIN
+ EXECUTE FORMAT(
+ 'INSERT INTO instance_file.%I (record_id, file_id, name) VALUES ($1, $2, $3)',
+ CONCAT(attribute_id::TEXT, '_record')
+ ) USING record_id, file_id, file_name;
+ END;
+ $_$;
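+
+-- usage sketch (hypothetical IDs): link an existing file as 'report.pdf' to record 42 of a files attribute
+-- SELECT instance.file_link('11111111-1111-1111-1111-111111111111', 'report.pdf',
+--     '22222222-2222-2222-2222-222222222222', 42);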
+
+
+--
+-- TOC entry 363 (class 1255 OID 19672)
+-- Name: file_unlink(uuid, uuid, bigint); Type: FUNCTION; Schema: instance; Owner: -
+--
+
+CREATE FUNCTION instance.file_unlink(file_id uuid, attribute_id uuid, record_id bigint) RETURNS void
+ LANGUAGE plpgsql
+ AS $_$
+ DECLARE
+ BEGIN
+ EXECUTE FORMAT(
+ 'DELETE FROM instance_file.%I
+ WHERE file_id = $1
+ AND record_id = $2',
+ CONCAT(attribute_id::TEXT, '_record')
+ ) USING file_id, record_id;
+ END;
+ $_$;
+
+
+--
+-- TOC entry 346 (class 1255 OID 18394)
+-- Name: files_get(uuid, bigint, boolean); Type: FUNCTION; Schema: instance; Owner: -
+--
+
+CREATE FUNCTION instance.files_get(attribute_id uuid, record_id bigint, include_deleted boolean DEFAULT false) RETURNS instance.file_meta[]
+ LANGUAGE plpgsql STABLE
+ AS $_$
+ DECLARE
+ file instance.file_meta;
+ files instance.file_meta[];
+ rec RECORD;
+ BEGIN
+ FOR rec IN
+ EXECUTE FORMAT('
+ SELECT r.file_id, r.name, v.login_id, v.hash, v.version, v.size_kb, v.date_change, r.date_delete
+ FROM instance_file.%I AS r
+ JOIN instance.file_version AS v
+ ON v.file_id = r.file_id
+ AND v.version = (
+ SELECT MAX(s.version)
+ FROM instance.file_version AS s
+ WHERE s.file_id = r.file_id
+ )
+ WHERE r.record_id = $1
+ AND ($2 OR r.date_delete IS NULL)
+ ', CONCAT(attribute_id::TEXT,'_record')) USING record_id, include_deleted
+ LOOP
+ file.id := rec.file_id;
+ file.login_id_creator := rec.login_id; -- for calls 'DELETED' AND _event <> 'UPDATED' THEN
+ RETURN;
+ END IF;
+
+ _sql := FORMAT('SELECT "%s"."%s"($1,$2)', _module_name, _pg_function_name);
+
+ FOR _rec IN (
+ SELECT
+ l.id,
+ l.name,
+ l.active,
+ l.admin,
+ l.limited,
+ l.no_auth,
+ m.department,
+ m.email,
+ m.location,
+ m.name_display,
+ m.name_fore,
+ m.name_sur,
+ m.notes,
+ m.organization,
+ m.phone_fax,
+ m.phone_mobile,
+ m.phone_landline
+ FROM instance.login AS l
+ LEFT JOIN instance.login_meta AS m ON m.login_id = l.id
+ WHERE _login_id IS NULL
+ OR _login_id = l.id
+ ) LOOP
+ -- login
+ _d.id := _rec.id;
+ _d.username := _rec.name;
+ _d.is_active := _rec.active;
+ _d.is_admin := _rec.admin;
+ _d.is_limited := _rec.limited;
+ _d.is_public := _rec.no_auth;
+
+ -- meta
+ _d.department := COALESCE(_rec.department, '');
+ _d.email := COALESCE(_rec.email, '');
+ _d.location := COALESCE(_rec.location, '');
+ _d.name_display := COALESCE(_rec.name_display, '');
+ _d.name_fore := COALESCE(_rec.name_fore, '');
+ _d.name_sur := COALESCE(_rec.name_sur, '');
+ _d.notes := COALESCE(_rec.notes, '');
+ _d.organization := COALESCE(_rec.organization, '');
+ _d.phone_fax := COALESCE(_rec.phone_fax, '');
+ _d.phone_mobile := COALESCE(_rec.phone_mobile, '');
+ _d.phone_landline := COALESCE(_rec.phone_landline, '');
+
+ EXECUTE _sql USING _event, _d;
+ END LOOP;
+ END;
+ $_$;
+
+
+--
+-- TOC entry 359 (class 1255 OID 19427)
+-- Name: user_sync_all(uuid); Type: FUNCTION; Schema: instance; Owner: -
+--
+
+CREATE FUNCTION instance.user_sync_all(_module_id uuid) RETURNS integer
+ LANGUAGE plpgsql
+ AS $$
+ DECLARE
+ _module_name TEXT;
+ _pg_function_name TEXT;
+ BEGIN
+ -- resolve entity names
+ SELECT
+ m.name, (
+ SELECT name
+ FROM app.pg_function
+ WHERE module_id = m.id
+ AND id = m.pg_function_id_login_sync
+ )
+ INTO
+ _module_name,
+ _pg_function_name
+ FROM app.module AS m
+ WHERE m.id = _module_id;
+
+ IF _module_name IS NULL OR _pg_function_name IS NULL THEN
+ RETURN 1;
+ END IF;
+
+ PERFORM instance.user_sync(
+ _module_name,
+ _pg_function_name,
+ NULL,
+ 'UPDATED'
+ );
+ RETURN 0;
+ END;
+ $$;
+
+
+--
+-- TOC entry 342 (class 1255 OID 16857)
+-- Name: master_role_request(uuid); Type: FUNCTION; Schema: instance_cluster; Owner: -
+--
+
+CREATE FUNCTION instance_cluster.master_role_request(node_id_requested uuid) RETURNS integer
+ LANGUAGE plpgsql
+ AS $$
+ DECLARE
+ master_missing_after INT;
+ unix_master_check_in BIGINT;
+ BEGIN
+ SELECT value::INT INTO master_missing_after
+ FROM instance.config
+ WHERE name = 'clusterNodeMissingAfter';
+
+ SELECT date_check_in INTO unix_master_check_in
+ FROM instance_cluster.node
+ WHERE cluster_master;
+
+ IF EXTRACT(EPOCH FROM NOW()) < unix_master_check_in + master_missing_after THEN
+ -- current master is not missing
+ RETURN 0;
+ END IF;
+
+ -- new master accepted, switch over
+ UPDATE instance_cluster.node
+ SET cluster_master = FALSE;
- SELECT date_check_in INTO unix_master_check_in
- FROM instance_cluster.node
- WHERE cluster_master;
-
- IF EXTRACT(EPOCH FROM NOW()) < unix_master_check_in + master_missing_after THEN
- -- current master is not missing
- RETURN 0;
- END IF;
-
- -- new master accepted, switch over
- UPDATE instance_cluster.node
- SET cluster_master = FALSE;
-
- UPDATE instance_cluster.node
- SET cluster_master = TRUE
- WHERE id = node_id_requested;
-
- -- assign master switch over tasks to all nodes
- INSERT INTO instance_cluster.node_event (node_id,content,payload)
- SELECT id, 'masterAssigned', '{"state":false}'
- FROM instance_cluster.node
- WHERE cluster_master = FALSE;
-
- INSERT INTO instance_cluster.node_event (node_id,content,payload)
- VALUES (node_id_requested, 'masterAssigned', '{"state":true}');
+ UPDATE instance_cluster.node
+ SET cluster_master = TRUE
+ WHERE id = node_id_requested;
+
+ -- assign master switch over tasks to all nodes
+ INSERT INTO instance_cluster.node_event (node_id,content,payload)
+ SELECT id, 'masterAssigned', '{"state":false}'
+ FROM instance_cluster.node
+ WHERE cluster_master = FALSE;
+
+ INSERT INTO instance_cluster.node_event (node_id,content,payload)
+ VALUES (node_id_requested, 'masterAssigned', '{"state":true}');
RETURN 0;
END;
@@ -1238,13 +2004,13 @@ CREATE FUNCTION instance_cluster.master_role_request(node_id_requested uuid) RET
--
--- TOC entry 289 (class 1255 OID 18529)
+-- TOC entry 343 (class 1255 OID 16858)
-- Name: run_task(text, uuid, uuid); Type: FUNCTION; Schema: instance_cluster; Owner: -
--
CREATE FUNCTION instance_cluster.run_task(task_name text, pg_function_id uuid, pg_function_schedule_id uuid) RETURNS integer
- LANGUAGE plpgsql
- AS $$
+ LANGUAGE plpgsql
+ AS $$
DECLARE
needs_master BOOLEAN;
BEGIN
@@ -1279,26 +2045,26 @@ CREATE FUNCTION instance_cluster.run_task(task_name text, pg_function_id uuid, p
--
--- TOC entry 291 (class 1255 OID 16721)
+-- TOC entry 344 (class 1255 OID 16859)
-- Name: first_agg(anyelement, anyelement); Type: FUNCTION; Schema: public; Owner: -
--
CREATE FUNCTION public.first_agg(anyelement, anyelement) RETURNS anyelement
- LANGUAGE sql IMMUTABLE STRICT PARALLEL SAFE
- AS $_$
- SELECT $1;
+ LANGUAGE sql IMMUTABLE STRICT PARALLEL SAFE
+ AS $_$
+ SELECT $1;
$_$;
--
--- TOC entry 1073 (class 1255 OID 16722)
+-- TOC entry 1288 (class 1255 OID 16860)
-- Name: first(anyelement); Type: AGGREGATE; Schema: public; Owner: -
--
CREATE AGGREGATE public.first(anyelement) (
- SFUNC = public.first_agg,
- STYPE = anyelement,
- PARALLEL = safe
+ SFUNC = public.first_agg,
+ STYPE = anyelement,
+ PARALLEL = safe
);
@@ -1307,1291 +2073,1921 @@ SET default_tablespace = '';
SET default_table_access_method = heap;
--
--- TOC entry 204 (class 1259 OID 16723)
+-- TOC entry 292 (class 1259 OID 18651)
+-- Name: api; Type: TABLE; Schema: app; Owner: -
+--
+
+CREATE TABLE app.api (
+ id uuid NOT NULL,
+ module_id uuid NOT NULL,
+ name character varying(64) NOT NULL,
+ comment text,
+ has_delete boolean NOT NULL,
+ has_get boolean NOT NULL,
+ has_post boolean NOT NULL,
+ limit_def integer NOT NULL,
+ limit_max integer NOT NULL,
+ verbose_def boolean NOT NULL,
+ version integer NOT NULL
+);
+
+
+--
+-- TOC entry 288 (class 1259 OID 18435)
+-- Name: article; Type: TABLE; Schema: app; Owner: -
+--
+
+CREATE TABLE app.article (
+ id uuid NOT NULL,
+ module_id uuid NOT NULL,
+ name character varying(64) NOT NULL
+);
+
+
+--
+-- TOC entry 289 (class 1259 OID 18449)
+-- Name: article_form; Type: TABLE; Schema: app; Owner: -
+--
+
+CREATE TABLE app.article_form (
+ article_id uuid NOT NULL,
+ form_id uuid NOT NULL,
+ "position" smallint NOT NULL
+);
+
+
+--
+-- TOC entry 290 (class 1259 OID 18464)
+-- Name: article_help; Type: TABLE; Schema: app; Owner: -
+--
+
+CREATE TABLE app.article_help (
+ article_id uuid NOT NULL,
+ module_id uuid NOT NULL,
+ "position" smallint NOT NULL
+);
+
+
+--
+-- TOC entry 206 (class 1259 OID 16861)
-- Name: attribute; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.attribute (
- id uuid NOT NULL,
- relation_id uuid NOT NULL,
- relationship_id uuid,
- icon_id uuid,
- name character varying(32) NOT NULL,
- length integer,
- content app.attribute_content NOT NULL,
- encrypted boolean NOT NULL,
- def text NOT NULL,
- nullable boolean NOT NULL,
- on_update app.attribute_fk_actions,
- on_delete app.attribute_fk_actions
+ id uuid NOT NULL,
+ relation_id uuid NOT NULL,
+ relationship_id uuid,
+ icon_id uuid,
+ name character varying(60) NOT NULL,
+ length integer,
+ content app.attribute_content NOT NULL,
+ encrypted boolean NOT NULL,
+ def text NOT NULL,
+ nullable boolean NOT NULL,
+ on_update app.attribute_fk_actions,
+ on_delete app.attribute_fk_actions,
+ content_use app.attribute_content_use NOT NULL,
+ length_fract integer DEFAULT 0
);
--
--- TOC entry 205 (class 1259 OID 16729)
+-- TOC entry 207 (class 1259 OID 16867)
-- Name: caption; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.caption (
- module_id uuid,
- attribute_id uuid,
- form_id uuid,
- field_id uuid,
- column_id uuid,
- role_id uuid,
- menu_id uuid,
- query_choice_id uuid,
- pg_function_id uuid,
- js_function_id uuid,
- login_form_id uuid,
- language_code character(5) NOT NULL,
- content app.caption_content NOT NULL,
- value text NOT NULL
+ module_id uuid,
+ attribute_id uuid,
+ form_id uuid,
+ field_id uuid,
+ column_id uuid,
+ role_id uuid,
+ menu_id uuid,
+ query_choice_id uuid,
+ pg_function_id uuid,
+ js_function_id uuid,
+ login_form_id uuid,
+ language_code character(5) NOT NULL,
+ content app.caption_content NOT NULL,
+ value text NOT NULL,
+ tab_id uuid,
+ article_id uuid,
+ widget_id uuid,
+ form_action_id uuid,
+ client_event_id uuid,
+ menu_tab_id uuid
+);
+
+
+--
+-- TOC entry 308 (class 1259 OID 19285)
+-- Name: client_event; Type: TABLE; Schema: app; Owner: -
+--
+
+CREATE TABLE app.client_event (
+ id uuid NOT NULL,
+ module_id uuid NOT NULL,
+ action app.client_event_action NOT NULL,
+ arguments app.client_event_argument[],
+ event app.client_event_event NOT NULL,
+ hotkey_modifier1 app.client_event_hotkey_modifier NOT NULL,
+ hotkey_modifier2 app.client_event_hotkey_modifier,
+ hotkey_char character(1) NOT NULL,
+ js_function_id uuid,
+ pg_function_id uuid
);
--
--- TOC entry 276 (class 1259 OID 18182)
+-- TOC entry 208 (class 1259 OID 16873)
-- Name: collection; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.collection (
- id uuid NOT NULL,
- module_id uuid NOT NULL,
- icon_id uuid,
- name character varying(64) NOT NULL
+ id uuid NOT NULL,
+ module_id uuid NOT NULL,
+ icon_id uuid,
+ name character varying(64) NOT NULL
);
--
--- TOC entry 277 (class 1259 OID 18227)
+-- TOC entry 209 (class 1259 OID 16876)
-- Name: collection_consumer; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.collection_consumer (
- id uuid NOT NULL,
- collection_id uuid NOT NULL,
- column_id_display uuid,
- field_id uuid,
- menu_id uuid,
- content text NOT NULL,
- multi_value boolean NOT NULL,
- no_display_empty boolean NOT NULL,
- on_mobile boolean NOT NULL
+ id uuid NOT NULL,
+ collection_id uuid NOT NULL,
+ column_id_display uuid,
+ field_id uuid,
+ menu_id uuid,
+ content app.collection_consumer_content NOT NULL,
+ multi_value boolean,
+ no_display_empty boolean,
+ on_mobile boolean NOT NULL,
+ widget_id uuid,
+ flags text[] NOT NULL
);
--
--- TOC entry 206 (class 1259 OID 16735)
+-- TOC entry 210 (class 1259 OID 16882)
-- Name: column; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app."column" (
- id uuid NOT NULL,
- collection_id uuid,
- field_id uuid,
- attribute_id uuid NOT NULL,
- aggregator app.aggregator,
- basis smallint NOT NULL,
- batch integer,
- display app.data_display NOT NULL,
- length smallint NOT NULL,
- "position" smallint NOT NULL,
- clipboard boolean NOT NULL,
- distincted boolean NOT NULL,
- index smallint NOT NULL,
- group_by boolean NOT NULL,
- on_mobile boolean NOT NULL,
- sub_query boolean NOT NULL,
- wrap boolean NOT NULL,
- CONSTRAINT column_single_parent CHECK (((field_id IS NULL) <> (collection_id IS NULL)))
-);
-
-
---
--- TOC entry 207 (class 1259 OID 16738)
+ id uuid NOT NULL,
+ collection_id uuid,
+ field_id uuid,
+ attribute_id uuid NOT NULL,
+ aggregator app.aggregator,
+ basis smallint NOT NULL,
+ batch integer,
+ display app.data_display NOT NULL,
+ length smallint NOT NULL,
+ "position" smallint NOT NULL,
+ distincted boolean NOT NULL,
+ index smallint NOT NULL,
+ group_by boolean NOT NULL,
+ on_mobile boolean NOT NULL,
+ sub_query boolean NOT NULL,
+ api_id uuid,
+ styles app.column_style[] NOT NULL,
+ hidden boolean NOT NULL,
+ CONSTRAINT column_single_parent CHECK ((1 = ((
+CASE
+ WHEN (api_id IS NULL) THEN 0
+ ELSE 1
+END +
+CASE
+ WHEN (collection_id IS NULL) THEN 0
+ ELSE 1
+END) +
+CASE
+ WHEN (field_id IS NULL) THEN 0
+ ELSE 1
+END)))
+);
+
+
+--
+-- TOC entry 211 (class 1259 OID 16886)
-- Name: field; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.field (
- id uuid NOT NULL,
- parent_id uuid,
- form_id uuid NOT NULL,
- icon_id uuid,
- content app.field_content NOT NULL,
- "position" smallint NOT NULL,
- on_mobile boolean NOT NULL,
- state app.field_state NOT NULL
+ id uuid NOT NULL,
+ parent_id uuid,
+ form_id uuid NOT NULL,
+ icon_id uuid,
+ content app.field_content NOT NULL,
+ "position" smallint NOT NULL,
+ on_mobile boolean NOT NULL,
+ state app.state_effect NOT NULL,
+ tab_id uuid,
+ flags text[] NOT NULL
);
--
--- TOC entry 208 (class 1259 OID 16741)
+-- TOC entry 212 (class 1259 OID 16889)
-- Name: field_button; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.field_button (
- field_id uuid NOT NULL,
- js_function_id uuid
+ field_id uuid NOT NULL,
+ js_function_id uuid
);
--
--- TOC entry 209 (class 1259 OID 16744)
+-- TOC entry 213 (class 1259 OID 16892)
-- Name: field_calendar; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.field_calendar (
- field_id uuid NOT NULL,
- attribute_id_color uuid,
- attribute_id_date0 uuid NOT NULL,
- attribute_id_date1 uuid NOT NULL,
- index_color integer,
- index_date0 smallint NOT NULL,
- index_date1 smallint NOT NULL,
- date_range0 integer NOT NULL,
- date_range1 integer NOT NULL,
- ics boolean NOT NULL,
- gantt boolean NOT NULL,
- gantt_steps app.field_calendar_gantt_steps,
- gantt_steps_toggle boolean NOT NULL
+ field_id uuid NOT NULL,
+ attribute_id_color uuid,
+ attribute_id_date0 uuid NOT NULL,
+ attribute_id_date1 uuid NOT NULL,
+ index_color integer,
+ index_date0 smallint NOT NULL,
+ index_date1 smallint NOT NULL,
+ date_range0 integer NOT NULL,
+ date_range1 integer NOT NULL,
+ ics boolean NOT NULL,
+ gantt boolean NOT NULL,
+ gantt_steps app.field_calendar_gantt_steps,
+ gantt_steps_toggle boolean NOT NULL,
+ days smallint NOT NULL,
+ days_toggle boolean NOT NULL
);
--
--- TOC entry 269 (class 1259 OID 17931)
+-- TOC entry 214 (class 1259 OID 16895)
-- Name: field_chart; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.field_chart (
- field_id uuid NOT NULL,
- chart_option text NOT NULL
+ field_id uuid NOT NULL,
+ chart_option text NOT NULL
);
--
--- TOC entry 210 (class 1259 OID 16747)
+-- TOC entry 215 (class 1259 OID 16901)
-- Name: field_container; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.field_container (
- field_id uuid NOT NULL,
- direction app.field_container_direction NOT NULL,
- grow smallint NOT NULL,
- shrink smallint NOT NULL,
- basis smallint NOT NULL,
- per_min smallint NOT NULL,
- per_max smallint NOT NULL,
- justify_content app.field_container_justify_content NOT NULL,
- align_items app.field_container_align_items NOT NULL,
- align_content app.field_container_align_content NOT NULL,
- wrap boolean NOT NULL
+ field_id uuid NOT NULL,
+ direction app.field_container_direction NOT NULL,
+ grow smallint NOT NULL,
+ shrink smallint NOT NULL,
+ basis smallint NOT NULL,
+ per_min smallint NOT NULL,
+ per_max smallint NOT NULL,
+ justify_content app.field_container_justify_content NOT NULL,
+ align_items app.field_container_align_items NOT NULL,
+ align_content app.field_container_align_content NOT NULL,
+ wrap boolean NOT NULL
);
--
--- TOC entry 211 (class 1259 OID 16750)
+-- TOC entry 216 (class 1259 OID 16904)
-- Name: field_data; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.field_data (
- field_id uuid NOT NULL,
- attribute_id uuid NOT NULL,
- attribute_id_alt uuid,
- js_function_id uuid,
- def text NOT NULL,
- display app.data_display NOT NULL,
- index smallint NOT NULL,
- min integer,
- max integer,
- regex_check text,
- clipboard boolean NOT NULL
+ field_id uuid NOT NULL,
+ attribute_id uuid NOT NULL,
+ attribute_id_alt uuid,
+ js_function_id uuid,
+ def text NOT NULL,
+ display app.data_display NOT NULL,
+ index smallint NOT NULL,
+ min integer,
+ max integer,
+ regex_check text,
+ clipboard boolean NOT NULL
);
--
--- TOC entry 212 (class 1259 OID 16756)
+-- TOC entry 217 (class 1259 OID 16910)
-- Name: field_data_relationship; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.field_data_relationship (
- field_id uuid NOT NULL,
- attribute_id_nm uuid,
- auto_select smallint NOT NULL,
- category boolean NOT NULL,
- filter_quick boolean NOT NULL,
- outside_in boolean NOT NULL
+ field_id uuid NOT NULL,
+ attribute_id_nm uuid,
+ auto_select smallint NOT NULL,
+ category boolean NOT NULL,
+ filter_quick boolean NOT NULL,
+ outside_in boolean NOT NULL
);
--
--- TOC entry 213 (class 1259 OID 16759)
+-- TOC entry 218 (class 1259 OID 16913)
-- Name: field_data_relationship_preset; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.field_data_relationship_preset (
- field_id uuid NOT NULL,
- preset_id uuid NOT NULL
+ field_id uuid NOT NULL,
+ preset_id uuid NOT NULL
);
--
--- TOC entry 214 (class 1259 OID 16762)
+-- TOC entry 219 (class 1259 OID 16916)
-- Name: field_header; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.field_header (
- field_id uuid NOT NULL,
- size smallint NOT NULL
+ field_id uuid NOT NULL,
+ size smallint NOT NULL,
+ richtext boolean NOT NULL
+);
+
+
+--
+-- TOC entry 298 (class 1259 OID 18875)
+-- Name: field_kanban; Type: TABLE; Schema: app; Owner: -
+--
+
+CREATE TABLE app.field_kanban (
+ field_id uuid NOT NULL,
+ relation_index_data smallint NOT NULL,
+ relation_index_axis_x smallint NOT NULL,
+ relation_index_axis_y smallint,
+ attribute_id_sort uuid
);
--
--- TOC entry 215 (class 1259 OID 16765)
+-- TOC entry 220 (class 1259 OID 16919)
-- Name: field_list; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.field_list (
- field_id uuid NOT NULL,
- auto_renew integer,
- csv_import boolean NOT NULL,
- csv_export boolean NOT NULL,
- filter_quick boolean NOT NULL,
- layout app.field_list_layout NOT NULL,
- result_limit smallint NOT NULL
+ field_id uuid NOT NULL,
+ auto_renew integer,
+ csv_import boolean NOT NULL,
+ csv_export boolean NOT NULL,
+ filter_quick boolean NOT NULL,
+ layout app.field_list_layout NOT NULL,
+ result_limit smallint NOT NULL
+);
+
+
+--
+-- TOC entry 315 (class 1259 OID 19487)
+-- Name: field_variable; Type: TABLE; Schema: app; Owner: -
+--
+
+CREATE TABLE app.field_variable (
+ field_id uuid NOT NULL,
+ variable_id uuid,
+ js_function_id uuid,
+ clipboard boolean NOT NULL
);
--
--- TOC entry 216 (class 1259 OID 16768)
+-- TOC entry 221 (class 1259 OID 16922)
-- Name: form; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.form (
- id uuid NOT NULL,
- module_id uuid NOT NULL,
- icon_id uuid,
- preset_id_open uuid,
- name character varying(64) NOT NULL,
- no_data_actions boolean NOT NULL
+ id uuid NOT NULL,
+ module_id uuid NOT NULL,
+ icon_id uuid,
+ preset_id_open uuid,
+ name character varying(64) NOT NULL,
+ no_data_actions boolean NOT NULL,
+ field_id_focus uuid
+);
+
+
+--
+-- TOC entry 307 (class 1259 OID 19203)
+-- Name: form_action; Type: TABLE; Schema: app; Owner: -
+--
+
+CREATE TABLE app.form_action (
+ id uuid NOT NULL,
+ form_id uuid NOT NULL,
+ js_function_id uuid NOT NULL,
+ icon_id uuid,
+ "position" integer NOT NULL,
+ state app.state_effect NOT NULL,
+ color character(6)
);
--
--- TOC entry 275 (class 1259 OID 18165)
+-- TOC entry 222 (class 1259 OID 16925)
-- Name: form_function; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.form_function (
- form_id uuid NOT NULL,
- "position" integer NOT NULL,
- js_function_id uuid NOT NULL,
- event app.form_function_event NOT NULL,
- event_before boolean NOT NULL
+ form_id uuid NOT NULL,
+ "position" integer NOT NULL,
+ js_function_id uuid NOT NULL,
+ event app.form_function_event NOT NULL,
+ event_before boolean NOT NULL
);
--
--- TOC entry 217 (class 1259 OID 16771)
+-- TOC entry 223 (class 1259 OID 16928)
-- Name: form_state; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.form_state (
- id uuid NOT NULL,
- form_id uuid NOT NULL,
- description text NOT NULL
+ id uuid NOT NULL,
+ form_id uuid NOT NULL,
+ description text NOT NULL
);
--
--- TOC entry 218 (class 1259 OID 16777)
+-- TOC entry 224 (class 1259 OID 16934)
-- Name: form_state_condition; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.form_state_condition (
- form_state_id uuid NOT NULL,
- "position" smallint NOT NULL,
- connector app.condition_connector NOT NULL,
- operator app.condition_operator NOT NULL
+ form_state_id uuid NOT NULL,
+ "position" smallint NOT NULL,
+ connector app.condition_connector NOT NULL,
+ operator app.condition_operator NOT NULL
);
--
--- TOC entry 278 (class 1259 OID 18261)
+-- TOC entry 225 (class 1259 OID 16937)
-- Name: form_state_condition_side; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.form_state_condition_side (
- form_state_id uuid NOT NULL,
- form_state_condition_position smallint NOT NULL,
- collection_id uuid,
- column_id uuid,
- field_id uuid,
- preset_id uuid,
- role_id uuid,
- side smallint NOT NULL,
- brackets smallint NOT NULL,
- content app.filter_side_content NOT NULL,
- value text
+ form_state_id uuid NOT NULL,
+ form_state_condition_position smallint NOT NULL,
+ collection_id uuid,
+ column_id uuid,
+ field_id uuid,
+ preset_id uuid,
+ role_id uuid,
+ side smallint NOT NULL,
+ brackets smallint NOT NULL,
+ content app.filter_side_content NOT NULL,
+ value text,
+ variable_id uuid,
+ form_state_id_result uuid
);
--
--- TOC entry 219 (class 1259 OID 16783)
+-- TOC entry 226 (class 1259 OID 16943)
-- Name: form_state_effect; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.form_state_effect (
- form_state_id uuid NOT NULL,
- field_id uuid NOT NULL,
- new_state app.field_state NOT NULL
+ form_state_id uuid NOT NULL,
+ field_id uuid,
+ new_state app.state_effect NOT NULL,
+ tab_id uuid,
+ form_action_id uuid,
+ new_data smallint NOT NULL
);
--
--- TOC entry 220 (class 1259 OID 16786)
+-- TOC entry 227 (class 1259 OID 16946)
-- Name: icon; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.icon (
- id uuid NOT NULL,
- module_id uuid NOT NULL,
- file bytea NOT NULL
+ id uuid NOT NULL,
+ module_id uuid NOT NULL,
+ file bytea NOT NULL,
+ name character varying(64) NOT NULL
);
--
--- TOC entry 273 (class 1259 OID 18075)
+-- TOC entry 228 (class 1259 OID 16952)
-- Name: js_function; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.js_function (
- id uuid NOT NULL,
- module_id uuid NOT NULL,
- form_id uuid,
- name character varying(64) NOT NULL,
- code_function text NOT NULL,
- code_args text NOT NULL,
- code_returns text NOT NULL
+ id uuid NOT NULL,
+ module_id uuid NOT NULL,
+ form_id uuid,
+ name character varying(64) NOT NULL,
+ code_function text NOT NULL,
+ code_args text NOT NULL,
+ code_returns text NOT NULL,
+ is_client_event_exec boolean DEFAULT false
);
--
--- TOC entry 274 (class 1259 OID 18098)
+-- TOC entry 229 (class 1259 OID 16958)
-- Name: js_function_depends; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.js_function_depends (
- js_function_id uuid NOT NULL,
- js_function_id_on uuid,
- pg_function_id_on uuid,
- field_id_on uuid,
- form_id_on uuid,
- role_id_on uuid,
- collection_id_on uuid
+ js_function_id uuid NOT NULL,
+ js_function_id_on uuid,
+ pg_function_id_on uuid,
+ field_id_on uuid,
+ form_id_on uuid,
+ role_id_on uuid,
+ collection_id_on uuid,
+ variable_id_on uuid
);
--
--- TOC entry 268 (class 1259 OID 17892)
+-- TOC entry 230 (class 1259 OID 16961)
-- Name: login_form; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.login_form (
- id uuid NOT NULL,
- module_id uuid NOT NULL,
- attribute_id_login uuid NOT NULL,
- attribute_id_lookup uuid NOT NULL,
- form_id uuid NOT NULL,
- name character varying(64) NOT NULL
+ id uuid NOT NULL,
+ module_id uuid NOT NULL,
+ attribute_id_login uuid NOT NULL,
+ attribute_id_lookup uuid NOT NULL,
+ form_id uuid NOT NULL,
+ name character varying(64) NOT NULL
);
--
--- TOC entry 221 (class 1259 OID 16792)
+-- TOC entry 231 (class 1259 OID 16964)
-- Name: menu; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.menu (
- id uuid NOT NULL,
- parent_id uuid,
- module_id uuid NOT NULL,
- form_id uuid,
- icon_id uuid,
- "position" smallint NOT NULL,
- show_children boolean NOT NULL
+ id uuid NOT NULL,
+ parent_id uuid,
+ module_id uuid,
+ form_id uuid,
+ icon_id uuid,
+ "position" smallint NOT NULL,
+ show_children boolean NOT NULL,
+ color character(6),
+ menu_tab_id uuid
+);
+
+
+--
+-- TOC entry 316 (class 1259 OID 19563)
+-- Name: menu_tab; Type: TABLE; Schema: app; Owner: -
+--
+
+CREATE TABLE app.menu_tab (
+ id uuid NOT NULL,
+ module_id uuid NOT NULL,
+ icon_id uuid,
+ "position" integer NOT NULL
);
--
--- TOC entry 222 (class 1259 OID 16795)
+-- TOC entry 232 (class 1259 OID 16967)
-- Name: module; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.module (
- id uuid NOT NULL,
- form_id uuid,
- icon_id uuid,
- parent_id uuid,
- name character varying(32) NOT NULL,
- color1 character(6) NOT NULL,
- release_date bigint NOT NULL,
- release_build integer NOT NULL,
- release_build_app integer NOT NULL,
- "position" integer,
- language_main character(5) NOT NULL
+ id uuid NOT NULL,
+ form_id uuid,
+ icon_id uuid,
+ parent_id uuid,
+ name character varying(60) NOT NULL,
+ color1 character(6),
+ release_date bigint NOT NULL,
+ release_build integer NOT NULL,
+ release_build_app integer NOT NULL,
+ "position" integer,
+ language_main character(5) NOT NULL,
+ name_pwa character varying(60),
+ name_pwa_short character varying(12),
+ icon_id_pwa1 uuid,
+ icon_id_pwa2 uuid,
+ pg_function_id_login_sync uuid,
+ js_function_id_on_login uuid
);
--
--- TOC entry 223 (class 1259 OID 16798)
+-- TOC entry 233 (class 1259 OID 16970)
-- Name: module_depends; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.module_depends (
- module_id uuid NOT NULL,
- module_id_on uuid NOT NULL
+ module_id uuid NOT NULL,
+ module_id_on uuid NOT NULL
);
--
--- TOC entry 224 (class 1259 OID 16801)
+-- TOC entry 234 (class 1259 OID 16973)
-- Name: module_language; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.module_language (
- module_id uuid NOT NULL,
- language_code character(5) NOT NULL
+ module_id uuid NOT NULL,
+ language_code character(5) NOT NULL
);
--
--- TOC entry 271 (class 1259 OID 18014)
+-- TOC entry 235 (class 1259 OID 16976)
-- Name: module_start_form; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.module_start_form (
- module_id uuid NOT NULL,
- "position" integer NOT NULL,
- role_id uuid NOT NULL,
- form_id uuid NOT NULL
+ module_id uuid NOT NULL,
+ "position" integer NOT NULL,
+ role_id uuid NOT NULL,
+ form_id uuid NOT NULL
);
--
--- TOC entry 272 (class 1259 OID 18046)
+-- TOC entry 236 (class 1259 OID 16979)
-- Name: open_form; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.open_form (
- field_id uuid,
- column_id uuid,
- form_id_open uuid NOT NULL,
- attribute_id_apply uuid,
- collection_consumer_id uuid,
- max_height integer NOT NULL,
- max_width integer NOT NULL,
- pop_up boolean NOT NULL,
- relation_index integer NOT NULL
+ field_id uuid,
+ column_id uuid,
+ form_id_open uuid NOT NULL,
+ attribute_id_apply uuid,
+ collection_consumer_id uuid,
+ max_height integer NOT NULL,
+ max_width integer NOT NULL,
+ relation_index_apply integer NOT NULL,
+ context app.open_form_context,
+ pop_up_type app.open_form_pop_up_type,
+ relation_index_open integer NOT NULL
);
--
--- TOC entry 225 (class 1259 OID 16804)
+-- TOC entry 237 (class 1259 OID 16982)
-- Name: pg_function; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.pg_function (
- id uuid NOT NULL,
- module_id uuid NOT NULL,
- name character varying(32) NOT NULL,
- code_function text NOT NULL,
- code_args text NOT NULL,
- code_returns text NOT NULL,
- is_frontend_exec boolean NOT NULL,
- is_trigger boolean NOT NULL
+ id uuid NOT NULL,
+ module_id uuid NOT NULL,
+ name character varying(60) NOT NULL,
+ code_function text NOT NULL,
+ code_args text NOT NULL,
+ code_returns text NOT NULL,
+ is_frontend_exec boolean NOT NULL,
+ is_trigger boolean NOT NULL,
+ is_login_sync boolean NOT NULL,
+ volatility app.pg_function_volatility
);
--
--- TOC entry 226 (class 1259 OID 16810)
+-- TOC entry 238 (class 1259 OID 16988)
-- Name: pg_function_depends; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.pg_function_depends (
- pg_function_id uuid NOT NULL,
- pg_function_id_on uuid,
- module_id_on uuid,
- relation_id_on uuid,
- attribute_id_on uuid
+ pg_function_id uuid NOT NULL,
+ pg_function_id_on uuid,
+ module_id_on uuid,
+ relation_id_on uuid,
+ attribute_id_on uuid
);
--
--- TOC entry 227 (class 1259 OID 16813)
+-- TOC entry 239 (class 1259 OID 16991)
-- Name: pg_function_schedule; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.pg_function_schedule (
- id uuid NOT NULL,
- pg_function_id uuid NOT NULL,
- at_hour smallint,
- at_minute smallint,
- at_second smallint,
- at_day smallint,
- interval_type app.pg_function_schedule_interval NOT NULL,
- interval_value integer NOT NULL
+ id uuid NOT NULL,
+ pg_function_id uuid NOT NULL,
+ at_hour smallint,
+ at_minute smallint,
+ at_second smallint,
+ at_day smallint,
+ interval_type app.pg_function_schedule_interval NOT NULL,
+ interval_value integer NOT NULL
);
--
--- TOC entry 228 (class 1259 OID 16816)
+-- TOC entry 240 (class 1259 OID 16994)
-- Name: pg_index; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.pg_index (
- id uuid NOT NULL,
- relation_id uuid NOT NULL,
- auto_fki boolean NOT NULL,
- no_duplicates boolean NOT NULL
+ id uuid NOT NULL,
+ relation_id uuid NOT NULL,
+ auto_fki boolean NOT NULL,
+ no_duplicates boolean NOT NULL,
+ primary_key boolean NOT NULL,
+ method app.pg_index_method NOT NULL,
+ attribute_id_dict uuid
);
--
--- TOC entry 229 (class 1259 OID 16819)
+-- TOC entry 241 (class 1259 OID 16997)
-- Name: pg_index_attribute; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.pg_index_attribute (
- pg_index_id uuid NOT NULL,
- attribute_id uuid NOT NULL,
- "position" smallint NOT NULL,
- order_asc boolean NOT NULL
+ pg_index_id uuid NOT NULL,
+ attribute_id uuid NOT NULL,
+ "position" smallint NOT NULL,
+ order_asc boolean NOT NULL
);
--
--- TOC entry 230 (class 1259 OID 16822)
+-- TOC entry 242 (class 1259 OID 17000)
-- Name: pg_trigger; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.pg_trigger (
- id uuid NOT NULL,
- relation_id uuid NOT NULL,
- pg_function_id uuid NOT NULL,
- code_condition text,
- fires app.pg_trigger_fires NOT NULL,
- is_constraint boolean NOT NULL,
- is_deferrable boolean NOT NULL,
- is_deferred boolean NOT NULL,
- on_insert boolean NOT NULL,
- on_update boolean NOT NULL,
- on_delete boolean NOT NULL,
- per_row boolean NOT NULL
+ id uuid NOT NULL,
+ relation_id uuid NOT NULL,
+ pg_function_id uuid NOT NULL,
+ code_condition text,
+ fires app.pg_trigger_fires NOT NULL,
+ is_constraint boolean NOT NULL,
+ is_deferrable boolean NOT NULL,
+ is_deferred boolean NOT NULL,
+ on_insert boolean NOT NULL,
+ on_update boolean NOT NULL,
+ on_delete boolean NOT NULL,
+ per_row boolean NOT NULL,
+ module_id uuid NOT NULL
);
--
--- TOC entry 231 (class 1259 OID 16828)
+-- TOC entry 243 (class 1259 OID 17006)
-- Name: preset; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.preset (
- id uuid NOT NULL,
- relation_id uuid NOT NULL,
- protected boolean NOT NULL,
- name character varying(32) NOT NULL
+ id uuid NOT NULL,
+ relation_id uuid NOT NULL,
+ protected boolean NOT NULL,
+ name character varying(64) NOT NULL
);
--
--- TOC entry 232 (class 1259 OID 16831)
+-- TOC entry 244 (class 1259 OID 17009)
-- Name: preset_value; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.preset_value (
- id uuid NOT NULL,
- preset_id uuid NOT NULL,
- preset_id_refer uuid,
- attribute_id uuid NOT NULL,
- value text NOT NULL,
- protected boolean NOT NULL
+ id uuid NOT NULL,
+ preset_id uuid NOT NULL,
+ preset_id_refer uuid,
+ attribute_id uuid NOT NULL,
+ value text,
+ protected boolean NOT NULL
);
--
--- TOC entry 233 (class 1259 OID 16837)
+-- TOC entry 245 (class 1259 OID 17015)
-- Name: query; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.query (
- id uuid NOT NULL,
- field_id uuid,
- form_id uuid,
- relation_id uuid NOT NULL,
- column_id uuid,
- collection_id uuid,
- query_filter_query_id uuid,
- query_filter_position smallint,
- query_filter_side smallint,
- fixed_limit integer NOT NULL,
- CONSTRAINT query_single_parent CHECK ((1 = ((((
+ id uuid NOT NULL,
+ field_id uuid,
+ form_id uuid,
+ relation_id uuid NOT NULL,
+ column_id uuid,
+ collection_id uuid,
+ query_filter_query_id uuid,
+ query_filter_position smallint,
+ query_filter_side smallint,
+ fixed_limit integer NOT NULL,
+ api_id uuid,
+ query_filter_index smallint,
+ CONSTRAINT query_single_parent CHECK ((1 = (((((
CASE
- WHEN (collection_id IS NULL) THEN 0
- ELSE 1
+ WHEN (api_id IS NULL) THEN 0
+ ELSE 1
END +
CASE
- WHEN (column_id IS NULL) THEN 0
- ELSE 1
+ WHEN (collection_id IS NULL) THEN 0
+ ELSE 1
+END) +
+CASE
+ WHEN (column_id IS NULL) THEN 0
+ ELSE 1
END) +
CASE
- WHEN (field_id IS NULL) THEN 0
- ELSE 1
+ WHEN (field_id IS NULL) THEN 0
+ ELSE 1
END) +
CASE
- WHEN (form_id IS NULL) THEN 0
- ELSE 1
+ WHEN (form_id IS NULL) THEN 0
+ ELSE 1
END) +
CASE
- WHEN (query_filter_query_id IS NULL) THEN 0
- ELSE 1
+ WHEN (query_filter_query_id IS NULL) THEN 0
+ ELSE 1
END)))
);
--
--- TOC entry 234 (class 1259 OID 16840)
+-- TOC entry 246 (class 1259 OID 17019)
-- Name: query_choice; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.query_choice (
- id uuid NOT NULL,
- query_id uuid NOT NULL,
- name character varying(32) NOT NULL,
- "position" integer
+ id uuid NOT NULL,
+ query_id uuid NOT NULL,
+ name character varying(32) NOT NULL,
+ "position" integer
);
--
--- TOC entry 235 (class 1259 OID 16843)
+-- TOC entry 247 (class 1259 OID 17022)
-- Name: query_filter; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.query_filter (
- query_id uuid NOT NULL,
- "position" smallint NOT NULL,
- query_choice_id uuid,
- connector app.condition_connector NOT NULL,
- operator app.condition_operator NOT NULL
+ query_id uuid NOT NULL,
+ "position" smallint NOT NULL,
+ query_choice_id uuid,
+ connector app.condition_connector NOT NULL,
+ operator app.condition_operator NOT NULL,
+ index smallint NOT NULL
);
--
--- TOC entry 236 (class 1259 OID 16846)
+-- TOC entry 248 (class 1259 OID 17025)
-- Name: query_filter_side; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.query_filter_side (
- query_id uuid NOT NULL,
- query_filter_position smallint NOT NULL,
- role_id uuid,
- attribute_id uuid,
- attribute_index smallint NOT NULL,
- attribute_nested smallint NOT NULL,
- field_id uuid,
- preset_id uuid,
- collection_id uuid,
- column_id uuid,
- brackets smallint NOT NULL,
- content app.filter_side_content NOT NULL,
- query_aggregator app.aggregator,
- side smallint NOT NULL,
- value text
-);
-
-
---
--- TOC entry 237 (class 1259 OID 16852)
+ query_id uuid NOT NULL,
+ query_filter_position smallint NOT NULL,
+ role_id uuid,
+ attribute_id uuid,
+ attribute_index smallint NOT NULL,
+ attribute_nested smallint NOT NULL,
+ field_id uuid,
+ preset_id uuid,
+ collection_id uuid,
+ column_id uuid,
+ brackets smallint NOT NULL,
+ content app.filter_side_content NOT NULL,
+ query_aggregator app.aggregator,
+ side smallint NOT NULL,
+ value text,
+ now_offset integer,
+ variable_id uuid,
+ query_filter_index smallint NOT NULL
+);
+
+
+--
+-- TOC entry 249 (class 1259 OID 17031)
-- Name: query_join; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.query_join (
- query_id uuid NOT NULL,
- relation_id uuid NOT NULL,
- attribute_id uuid,
- apply_create boolean NOT NULL,
- apply_update boolean NOT NULL,
- apply_delete boolean NOT NULL,
- connector app.query_join_connector NOT NULL,
- index_from smallint NOT NULL,
- index smallint NOT NULL,
- "position" smallint NOT NULL
+ query_id uuid NOT NULL,
+ relation_id uuid NOT NULL,
+ attribute_id uuid,
+ apply_create boolean NOT NULL,
+ apply_update boolean NOT NULL,
+ apply_delete boolean NOT NULL,
+ connector app.query_join_connector NOT NULL,
+ index_from smallint NOT NULL,
+ index smallint NOT NULL,
+ "position" smallint NOT NULL
);
--
--- TOC entry 238 (class 1259 OID 16855)
+-- TOC entry 250 (class 1259 OID 17034)
-- Name: query_lookup; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.query_lookup (
- query_id uuid NOT NULL,
- pg_index_id uuid NOT NULL,
- index smallint NOT NULL
+ query_id uuid NOT NULL,
+ pg_index_id uuid NOT NULL,
+ index smallint NOT NULL
);
--
--- TOC entry 239 (class 1259 OID 16858)
+-- TOC entry 251 (class 1259 OID 17037)
-- Name: query_order; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.query_order (
- query_id uuid NOT NULL,
- attribute_id uuid NOT NULL,
- "position" smallint NOT NULL,
- ascending boolean NOT NULL,
- index smallint NOT NULL
+ query_id uuid NOT NULL,
+ attribute_id uuid NOT NULL,
+ "position" smallint NOT NULL,
+ ascending boolean NOT NULL,
+ index smallint NOT NULL
);
--
--- TOC entry 240 (class 1259 OID 16861)
+-- TOC entry 252 (class 1259 OID 17040)
-- Name: relation; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.relation (
- id uuid NOT NULL,
- module_id uuid NOT NULL,
- name character varying(32) NOT NULL,
- encryption boolean NOT NULL,
- retention_count integer,
- retention_days integer
+ id uuid NOT NULL,
+ module_id uuid NOT NULL,
+ name character varying(60) NOT NULL,
+ encryption boolean NOT NULL,
+ retention_count integer,
+ retention_days integer,
+ comment text
);
--
--- TOC entry 270 (class 1259 OID 17980)
+-- TOC entry 253 (class 1259 OID 17043)
-- Name: relation_policy; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.relation_policy (
- relation_id uuid NOT NULL,
- "position" smallint NOT NULL,
- role_id uuid NOT NULL,
- pg_function_id_excl uuid,
- pg_function_id_incl uuid,
- action_delete boolean NOT NULL,
- action_select boolean NOT NULL,
- action_update boolean NOT NULL
+ relation_id uuid NOT NULL,
+ "position" smallint NOT NULL,
+ role_id uuid NOT NULL,
+ pg_function_id_excl uuid,
+ pg_function_id_incl uuid,
+ action_delete boolean NOT NULL,
+ action_select boolean NOT NULL,
+ action_update boolean NOT NULL
);
--
--- TOC entry 241 (class 1259 OID 16864)
+-- TOC entry 254 (class 1259 OID 17046)
-- Name: role; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.role (
- id uuid NOT NULL,
- module_id uuid NOT NULL,
- name character varying(64) NOT NULL,
- content text NOT NULL,
- assignable boolean NOT NULL
+ id uuid NOT NULL,
+ module_id uuid NOT NULL,
+ name character varying(64) NOT NULL,
+ content app.role_content NOT NULL,
+ assignable boolean NOT NULL
);
--
--- TOC entry 242 (class 1259 OID 16867)
+-- TOC entry 255 (class 1259 OID 17052)
-- Name: role_access; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.role_access (
- role_id uuid NOT NULL,
- relation_id uuid,
- attribute_id uuid,
- collection_id uuid,
- menu_id uuid,
- access smallint
+ role_id uuid NOT NULL,
+ relation_id uuid,
+ attribute_id uuid,
+ collection_id uuid,
+ menu_id uuid,
+ access smallint,
+ api_id uuid,
+ widget_id uuid,
+ client_event_id uuid
);
--
--- TOC entry 243 (class 1259 OID 16870)
+-- TOC entry 256 (class 1259 OID 17055)
-- Name: role_child; Type: TABLE; Schema: app; Owner: -
--
CREATE TABLE app.role_child (
- role_id uuid NOT NULL,
- role_id_child uuid NOT NULL
+ role_id uuid NOT NULL,
+ role_id_child uuid NOT NULL
);
--
--- TOC entry 244 (class 1259 OID 16873)
--- Name: config; Type: TABLE; Schema: instance; Owner: -
+-- TOC entry 287 (class 1259 OID 18403)
+-- Name: tab; Type: TABLE; Schema: app; Owner: -
--
-CREATE TABLE instance.config (
- name character varying(32) NOT NULL,
- value text NOT NULL
+CREATE TABLE app.tab (
+ id uuid NOT NULL,
+ field_id uuid NOT NULL,
+ "position" smallint NOT NULL,
+ state app.state_effect NOT NULL,
+ content_counter boolean NOT NULL
);
--
--- TOC entry 245 (class 1259 OID 16879)
--- Name: data_log; Type: TABLE; Schema: instance; Owner: -
+-- TOC entry 314 (class 1259 OID 19442)
+-- Name: variable; Type: TABLE; Schema: app; Owner: -
--
-CREATE TABLE instance.data_log (
- id uuid NOT NULL,
- relation_id uuid NOT NULL,
- login_id_wofk integer NOT NULL,
- record_id_wofk bigint NOT NULL,
- date_change bigint NOT NULL
+CREATE TABLE app.variable (
+ id uuid NOT NULL,
+ module_id uuid NOT NULL,
+ form_id uuid,
+ name character varying(64) NOT NULL,
+ comment text,
+ content app.attribute_content NOT NULL,
+ content_use app.attribute_content_use NOT NULL,
+ def text
);
--
--- TOC entry 246 (class 1259 OID 16882)
--- Name: data_log_value; Type: TABLE; Schema: instance; Owner: -
+-- TOC entry 300 (class 1259 OID 18940)
+-- Name: widget; Type: TABLE; Schema: app; Owner: -
--
-CREATE TABLE instance.data_log_value (
- data_log_id uuid NOT NULL,
- attribute_id uuid NOT NULL,
- attribute_id_nm uuid,
- outside_in boolean NOT NULL,
- value text
+CREATE TABLE app.widget (
+ id uuid NOT NULL,
+ module_id uuid NOT NULL,
+ form_id uuid,
+ name character varying(64) NOT NULL,
+ size smallint NOT NULL
);
--
--- TOC entry 247 (class 1259 OID 16888)
--- Name: ldap; Type: TABLE; Schema: instance; Owner: -
+-- TOC entry 306 (class 1259 OID 19157)
+-- Name: admin_mail; Type: TABLE; Schema: instance; Owner: -
--
-CREATE TABLE instance.ldap (
- id integer NOT NULL,
- name character varying(32) NOT NULL,
- host text NOT NULL,
- port integer NOT NULL,
- bind_user_dn text NOT NULL,
- bind_user_pw text NOT NULL,
- search_class text NOT NULL,
- search_dn text NOT NULL,
- login_attribute text NOT NULL,
- assign_roles boolean NOT NULL,
- starttls boolean NOT NULL,
- tls_verify boolean NOT NULL,
- key_attribute text NOT NULL,
- member_attribute text NOT NULL,
- ms_ad_ext boolean NOT NULL,
- tls boolean NOT NULL
-);
-
-
---
--- TOC entry 248 (class 1259 OID 16894)
--- Name: ldap_id_seq; Type: SEQUENCE; Schema: instance; Owner: -
---
+CREATE TABLE instance.admin_mail (
+ reason instance.admin_mail_reason NOT NULL,
+ days_before integer[] NOT NULL,
+ date_last_sent bigint NOT NULL
+);
-CREATE SEQUENCE instance.ldap_id_seq
- AS integer
- START WITH 1
- INCREMENT BY 1
- NO MINVALUE
- NO MAXVALUE
- CACHE 1;
+
+--
+-- TOC entry 305 (class 1259 OID 19067)
+-- Name: caption; Type: TABLE; Schema: instance; Owner: -
+--
+
+CREATE TABLE instance.caption (
+ module_id uuid,
+ attribute_id uuid,
+ form_id uuid,
+ field_id uuid,
+ column_id uuid,
+ role_id uuid,
+ menu_id uuid,
+ query_choice_id uuid,
+ pg_function_id uuid,
+ js_function_id uuid,
+ login_form_id uuid,
+ language_code character(5) NOT NULL,
+ content app.caption_content NOT NULL,
+ value text NOT NULL,
+ article_id uuid,
+ tab_id uuid,
+ widget_id uuid,
+ form_action_id uuid,
+ client_event_id uuid,
+ menu_tab_id uuid
+);
--
--- TOC entry 3908 (class 0 OID 0)
--- Dependencies: 248
--- Name: ldap_id_seq; Type: SEQUENCE OWNED BY; Schema: instance; Owner: -
+-- TOC entry 257 (class 1259 OID 17058)
+-- Name: config; Type: TABLE; Schema: instance; Owner: -
--
-ALTER SEQUENCE instance.ldap_id_seq OWNED BY instance.ldap.id;
+CREATE TABLE instance.config (
+ name character varying(32) NOT NULL,
+ value text NOT NULL
+);
--
--- TOC entry 249 (class 1259 OID 16896)
--- Name: ldap_role; Type: TABLE; Schema: instance; Owner: -
+-- TOC entry 258 (class 1259 OID 17064)
+-- Name: data_log; Type: TABLE; Schema: instance; Owner: -
--
-CREATE TABLE instance.ldap_role (
- ldap_id integer NOT NULL,
- role_id uuid NOT NULL,
- group_dn text NOT NULL
+CREATE TABLE instance.data_log (
+ id uuid NOT NULL,
+ relation_id uuid NOT NULL,
+ login_id_wofk integer NOT NULL,
+ record_id_wofk bigint NOT NULL,
+ date_change bigint NOT NULL
);
--
--- TOC entry 250 (class 1259 OID 16902)
--- Name: log; Type: TABLE; Schema: instance; Owner: -
+-- TOC entry 259 (class 1259 OID 17067)
+-- Name: data_log_value; Type: TABLE; Schema: instance; Owner: -
--
-CREATE TABLE instance.log (
- module_id uuid,
- node_id uuid,
- context instance.log_context NOT NULL,
- date_milli bigint NOT NULL,
- level smallint NOT NULL,
- message text NOT NULL
+CREATE TABLE instance.data_log_value (
+ data_log_id uuid NOT NULL,
+ attribute_id uuid NOT NULL,
+ attribute_id_nm uuid,
+ outside_in boolean NOT NULL,
+ value text
);
--
--- TOC entry 251 (class 1259 OID 16908)
--- Name: login; Type: TABLE; Schema: instance; Owner: -
+-- TOC entry 284 (class 1259 OID 18367)
+-- Name: file; Type: TABLE; Schema: instance; Owner: -
--
-CREATE TABLE instance.login (
- id integer NOT NULL,
- name character varying(128) NOT NULL,
- ldap_id integer,
- ldap_key text,
- salt character(32),
- hash character(64),
- salt_kdf text NOT NULL,
- key_private_enc text,
- key_private_enc_backup text,
- key_public text,
- no_auth boolean NOT NULL,
- admin boolean NOT NULL,
- active boolean NOT NULL
+CREATE TABLE instance.file (
+ id uuid NOT NULL,
+ ref_counter integer NOT NULL
);
--
--- TOC entry 252 (class 1259 OID 16914)
--- Name: login_id_seq; Type: SEQUENCE; Schema: instance; Owner: -
+-- TOC entry 285 (class 1259 OID 18372)
+-- Name: file_version; Type: TABLE; Schema: instance; Owner: -
--
-CREATE SEQUENCE instance.login_id_seq
- AS integer
- START WITH 1
- INCREMENT BY 1
- NO MINVALUE
- NO MAXVALUE
- CACHE 1;
+CREATE TABLE instance.file_version (
+ file_id uuid NOT NULL,
+ version integer NOT NULL,
+ login_id integer,
+ hash character(64),
+ size_kb integer NOT NULL,
+ date_change bigint NOT NULL
+);
--
--- TOC entry 3909 (class 0 OID 0)
--- Dependencies: 252
--- Name: login_id_seq; Type: SEQUENCE OWNED BY; Schema: instance; Owner: -
+-- TOC entry 260 (class 1259 OID 17073)
+-- Name: ldap; Type: TABLE; Schema: instance; Owner: -
--
-ALTER SEQUENCE instance.login_id_seq OWNED BY instance.login.id;
+CREATE TABLE instance.ldap (
+ id integer NOT NULL,
+ name character varying(32) NOT NULL,
+ host text NOT NULL,
+ port integer NOT NULL,
+ bind_user_dn text NOT NULL,
+ bind_user_pw text NOT NULL,
+ search_class text NOT NULL,
+ search_dn text NOT NULL,
+ login_attribute text NOT NULL,
+ assign_roles boolean NOT NULL,
+ starttls boolean NOT NULL,
+ tls_verify boolean NOT NULL,
+ key_attribute text NOT NULL,
+ member_attribute text NOT NULL,
+ ms_ad_ext boolean NOT NULL,
+ tls boolean NOT NULL,
+ login_template_id integer
+);
--
--- TOC entry 253 (class 1259 OID 16916)
--- Name: login_role; Type: TABLE; Schema: instance; Owner: -
+-- TOC entry 313 (class 1259 OID 19413)
+-- Name: ldap_attribute_login_meta; Type: TABLE; Schema: instance; Owner: -
--
-CREATE TABLE instance.login_role (
- login_id integer NOT NULL,
- role_id uuid NOT NULL
+CREATE TABLE instance.ldap_attribute_login_meta (
+ ldap_id integer NOT NULL,
+ department text,
+ email text,
+ location text,
+ name_display text,
+ name_fore text,
+ name_sur text,
+ notes text,
+ organization text,
+ phone_fax text,
+ phone_landline text,
+ phone_mobile text
);
--
--- TOC entry 254 (class 1259 OID 16919)
--- Name: login_setting; Type: TABLE; Schema: instance; Owner: -
+-- TOC entry 261 (class 1259 OID 17079)
+-- Name: ldap_id_seq; Type: SEQUENCE; Schema: instance; Owner: -
--
-CREATE TABLE instance.login_setting (
- login_id integer NOT NULL,
- borders_all boolean NOT NULL,
- borders_corner instance.login_setting_border_corner NOT NULL,
- compact boolean NOT NULL,
- dark boolean NOT NULL,
- date_format character(5) NOT NULL,
- header_captions boolean NOT NULL,
- hint_update_version integer NOT NULL,
- language_code character(5) NOT NULL,
- page_limit integer NOT NULL,
- menu_colored boolean NOT NULL,
- mobile_scroll_form boolean NOT NULL,
- font_family text NOT NULL,
- font_size smallint NOT NULL,
- pattern text,
- spacing integer NOT NULL,
- sunday_first_dow boolean NOT NULL,
- warn_unsaved boolean NOT NULL
-);
-
-
---
--- TOC entry 255 (class 1259 OID 16922)
--- Name: login_token_fixed; Type: TABLE; Schema: instance; Owner: -
+CREATE SEQUENCE instance.ldap_id_seq
+ AS integer
+ START WITH 1
+ INCREMENT BY 1
+ NO MINVALUE
+ NO MAXVALUE
+ CACHE 1;
+
+
+--
+-- TOC entry 4391 (class 0 OID 0)
+-- Dependencies: 261
+-- Name: ldap_id_seq; Type: SEQUENCE OWNED BY; Schema: instance; Owner: -
--
-CREATE TABLE instance.login_token_fixed (
- login_id integer NOT NULL,
- token character varying(48) NOT NULL,
- date_create bigint NOT NULL,
- context instance.token_fixed_context NOT NULL
-);
+ALTER SEQUENCE instance.ldap_id_seq OWNED BY instance.ldap.id;
--
--- TOC entry 256 (class 1259 OID 16925)
--- Name: mail_account; Type: TABLE; Schema: instance; Owner: -
+-- TOC entry 262 (class 1259 OID 17081)
+-- Name: ldap_role; Type: TABLE; Schema: instance; Owner: -
--
-CREATE TABLE instance.mail_account (
- id integer NOT NULL,
- name character varying(64) NOT NULL,
- mode instance.mail_account_mode NOT NULL,
- username text NOT NULL,
- password text NOT NULL,
- start_tls boolean NOT NULL,
- send_as text,
- host_name text NOT NULL,
- host_port integer NOT NULL
+CREATE TABLE instance.ldap_role (
+ ldap_id integer NOT NULL,
+ role_id uuid NOT NULL,
+ group_dn text NOT NULL
);
--
--- TOC entry 257 (class 1259 OID 16931)
--- Name: mail_account_id_seq; Type: SEQUENCE; Schema: instance; Owner: -
+-- TOC entry 263 (class 1259 OID 17087)
+-- Name: log; Type: TABLE; Schema: instance; Owner: -
--
-CREATE SEQUENCE instance.mail_account_id_seq
- AS integer
- START WITH 1
- INCREMENT BY 1
- NO MINVALUE
- NO MAXVALUE
- CACHE 1;
+CREATE TABLE instance.log (
+ module_id uuid,
+ node_id uuid,
+ context instance.log_context NOT NULL,
+ date_milli bigint NOT NULL,
+ level smallint NOT NULL,
+ message text NOT NULL
+);
--
--- TOC entry 3910 (class 0 OID 0)
--- Dependencies: 257
--- Name: mail_account_id_seq; Type: SEQUENCE OWNED BY; Schema: instance; Owner: -
+-- TOC entry 264 (class 1259 OID 17093)
+-- Name: login; Type: TABLE; Schema: instance; Owner: -
--
-ALTER SEQUENCE instance.mail_account_id_seq OWNED BY instance.mail_account.id;
+CREATE TABLE instance.login (
+ id integer NOT NULL,
+ name character varying(128) NOT NULL,
+ ldap_id integer,
+ ldap_key text,
+ salt character(32),
+ hash character(64),
+ salt_kdf text NOT NULL,
+ key_private_enc text,
+ key_private_enc_backup text,
+ key_public text,
+ no_auth boolean NOT NULL,
+ admin boolean NOT NULL,
+ active boolean NOT NULL,
+ token_expiry_hours integer,
+ limited boolean NOT NULL,
+ date_favorites bigint NOT NULL
+);
--
--- TOC entry 258 (class 1259 OID 16933)
--- Name: mail_spool; Type: TABLE; Schema: instance; Owner: -
+-- TOC entry 309 (class 1259 OID 19311)
+-- Name: login_client_event; Type: TABLE; Schema: instance; Owner: -
--
-CREATE TABLE instance.mail_spool (
- id integer NOT NULL,
- mail_account_id integer,
- attribute_id uuid,
- record_id_wofk bigint,
- from_list text DEFAULT ''::text NOT NULL,
- to_list text NOT NULL,
- cc_list text DEFAULT ''::text NOT NULL,
- bcc_list text DEFAULT ''::text NOT NULL,
- subject text NOT NULL,
- body text NOT NULL,
- attempt_count integer DEFAULT 0 NOT NULL,
- attempt_date bigint DEFAULT 0 NOT NULL,
- date bigint NOT NULL,
- outgoing boolean NOT NULL
+CREATE TABLE instance.login_client_event (
+ login_id integer NOT NULL,
+ client_event_id uuid NOT NULL,
+ hotkey_modifier1 app.client_event_hotkey_modifier NOT NULL,
+ hotkey_modifier2 app.client_event_hotkey_modifier,
+ hotkey_char character(1) NOT NULL
);
--
--- TOC entry 259 (class 1259 OID 16944)
--- Name: mail_spool_file; Type: TABLE; Schema: instance; Owner: -
+-- TOC entry 317 (class 1259 OID 19606)
+-- Name: login_favorite; Type: TABLE; Schema: instance; Owner: -
--
-CREATE TABLE instance.mail_spool_file (
- mail_id integer NOT NULL,
- "position" integer NOT NULL,
- file bytea NOT NULL,
- file_name text NOT NULL,
- file_size integer NOT NULL
+CREATE TABLE instance.login_favorite (
+ id uuid NOT NULL,
+ login_id integer NOT NULL,
+ module_id uuid NOT NULL,
+ form_id uuid NOT NULL,
+ record_id bigint,
+ title character varying(128),
+ "position" smallint NOT NULL
);
--
--- TOC entry 260 (class 1259 OID 16950)
--- Name: mail_spool_id_seq; Type: SEQUENCE; Schema: instance; Owner: -
+-- TOC entry 265 (class 1259 OID 17099)
+-- Name: login_id_seq; Type: SEQUENCE; Schema: instance; Owner: -
--
-CREATE SEQUENCE instance.mail_spool_id_seq
- AS integer
- START WITH 1
- INCREMENT BY 1
- NO MINVALUE
- NO MAXVALUE
- CACHE 1;
+CREATE SEQUENCE instance.login_id_seq
+ AS integer
+ START WITH 1
+ INCREMENT BY 1
+ NO MINVALUE
+ NO MAXVALUE
+ CACHE 1;
--
--- TOC entry 3911 (class 0 OID 0)
--- Dependencies: 260
--- Name: mail_spool_id_seq; Type: SEQUENCE OWNED BY; Schema: instance; Owner: -
+-- TOC entry 4392 (class 0 OID 0)
+-- Dependencies: 265
+-- Name: login_id_seq; Type: SEQUENCE OWNED BY; Schema: instance; Owner: -
--
-ALTER SEQUENCE instance.mail_spool_id_seq OWNED BY instance.mail_spool.id;
+ALTER SEQUENCE instance.login_id_seq OWNED BY instance.login.id;
--
--- TOC entry 261 (class 1259 OID 16952)
--- Name: module_option; Type: TABLE; Schema: instance; Owner: -
+-- TOC entry 311 (class 1259 OID 19390)
+-- Name: login_meta; Type: TABLE; Schema: instance; Owner: -
--
-CREATE TABLE instance.module_option (
- module_id uuid NOT NULL,
- hidden boolean NOT NULL,
- hash character(44) DEFAULT '00000000000000000000000000000000000000000000'::bpchar,
- "position" integer,
- owner boolean
+CREATE TABLE instance.login_meta (
+ login_id integer NOT NULL,
+ organization character varying(512),
+ location character varying(512),
+ department character varying(512),
+ email character varying(512),
+ phone_mobile character varying(512),
+ phone_landline character varying(512),
+ phone_fax character varying(512),
+ notes character varying(8196),
+ name_fore character varying(512),
+ name_sur character varying(512),
+ name_display character varying(512)
);
--
--- TOC entry 262 (class 1259 OID 16956)
--- Name: preset_record; Type: TABLE; Schema: instance; Owner: -
+-- TOC entry 318 (class 1259 OID 19629)
+-- Name: login_options; Type: TABLE; Schema: instance; Owner: -
--
-CREATE TABLE instance.preset_record (
- preset_id uuid NOT NULL,
- record_id_wofk bigint NOT NULL
+CREATE TABLE instance.login_options (
+ login_id integer NOT NULL,
+ login_favorite_id uuid,
+ field_id uuid NOT NULL,
+ is_mobile boolean NOT NULL,
+ date_change bigint NOT NULL,
+ options text NOT NULL
);
--
--- TOC entry 263 (class 1259 OID 16959)
--- Name: repo_module; Type: TABLE; Schema: instance; Owner: -
+-- TOC entry 266 (class 1259 OID 17101)
+-- Name: login_role; Type: TABLE; Schema: instance; Owner: -
--
-CREATE TABLE instance.repo_module (
- module_id_wofk uuid NOT NULL,
- name character varying(32) NOT NULL,
- author character varying(256) NOT NULL,
- release_build integer NOT NULL,
- release_build_app integer NOT NULL,
- release_date bigint NOT NULL,
- file uuid NOT NULL,
- in_store boolean,
- change_log text
+CREATE TABLE instance.login_role (
+ login_id integer NOT NULL,
+ role_id uuid NOT NULL
);
--
--- TOC entry 264 (class 1259 OID 16962)
--- Name: repo_module_meta; Type: TABLE; Schema: instance; Owner: -
+-- TOC entry 295 (class 1259 OID 18754)
+-- Name: login_search_dict; Type: TABLE; Schema: instance; Owner: -
--
-CREATE TABLE instance.repo_module_meta (
- module_id_wofk uuid NOT NULL,
- language_code character(5) NOT NULL,
- title character varying(256) NOT NULL,
- description character varying(512) NOT NULL,
- support_page text
+CREATE TABLE instance.login_search_dict (
+ login_id integer,
+ login_template_id integer,
+ "position" integer NOT NULL,
+ name regconfig NOT NULL
);
--
--- TOC entry 265 (class 1259 OID 16968)
--- Name: schedule; Type: TABLE; Schema: instance; Owner: -
+-- TOC entry 310 (class 1259 OID 19369)
+-- Name: login_session; Type: TABLE; Schema: instance; Owner: -
--
-CREATE TABLE instance.schedule (
- id integer NOT NULL,
- pg_function_schedule_id uuid,
- task_name character varying(32),
- date_attempt bigint NOT NULL,
- date_success bigint NOT NULL
+CREATE TABLE instance.login_session (
+ id uuid NOT NULL,
+ device instance.login_session_device NOT NULL,
+ login_id integer NOT NULL,
+ node_id uuid NOT NULL,
+ date bigint NOT NULL,
+ address text NOT NULL
);
--
--- TOC entry 281 (class 1259 OID 18501)
--- Name: schedule_id_seq; Type: SEQUENCE; Schema: instance; Owner: -
+-- TOC entry 267 (class 1259 OID 17104)
+-- Name: login_setting; Type: TABLE; Schema: instance; Owner: -
--
-CREATE SEQUENCE instance.schedule_id_seq
- AS integer
- START WITH 1
- INCREMENT BY 1
- NO MINVALUE
- NO MAXVALUE
- CACHE 1;
+CREATE TABLE instance.login_setting (
+ login_id integer,
+ dark boolean NOT NULL,
+ date_format character(5) NOT NULL,
+ header_captions boolean NOT NULL,
+ hint_update_version integer NOT NULL,
+ language_code character(5) NOT NULL,
+ mobile_scroll_form boolean NOT NULL,
+ font_family instance.login_setting_font_family NOT NULL,
+ font_size smallint NOT NULL,
+ pattern instance.login_setting_pattern,
+ spacing integer NOT NULL,
+ sunday_first_dow boolean NOT NULL,
+ warn_unsaved boolean NOT NULL,
+ tab_remember boolean NOT NULL,
+ login_template_id integer,
+ borders_squared boolean NOT NULL,
+ color_classic_mode boolean NOT NULL,
+ color_header character(6),
+ color_menu character(6),
+ color_header_single boolean NOT NULL,
+ header_modules boolean NOT NULL,
+ list_colored boolean NOT NULL,
+ list_spaced boolean NOT NULL,
+ number_sep_decimal character(1) NOT NULL,
+ number_sep_thousand character(1) NOT NULL,
+ bool_as_icon boolean NOT NULL,
+ form_actions_align text NOT NULL,
+ shadows_inputs boolean NOT NULL,
+ CONSTRAINT login_setting_single_parent CHECK ((1 = (
+CASE
+ WHEN (login_id IS NULL) THEN 0
+ ELSE 1
+END +
+CASE
+ WHEN (login_template_id IS NULL) THEN 0
+ ELSE 1
+END)))
+);
--
--- TOC entry 3912 (class 0 OID 0)
--- Dependencies: 281
--- Name: schedule_id_seq; Type: SEQUENCE OWNED BY; Schema: instance; Owner: -
+-- TOC entry 294 (class 1259 OID 18692)
+-- Name: login_template; Type: TABLE; Schema: instance; Owner: -
--
-ALTER SEQUENCE instance.schedule_id_seq OWNED BY instance.schedule.id;
+CREATE TABLE instance.login_template (
+ id integer NOT NULL,
+ name character varying(64) NOT NULL,
+ comment text
+);
--
--- TOC entry 266 (class 1259 OID 16971)
--- Name: task; Type: TABLE; Schema: instance; Owner: -
+-- TOC entry 293 (class 1259 OID 18690)
+-- Name: login_template_id_seq; Type: SEQUENCE; Schema: instance; Owner: -
+--
+
+CREATE SEQUENCE instance.login_template_id_seq
+ AS integer
+ START WITH 1
+ INCREMENT BY 1
+ NO MINVALUE
+ NO MAXVALUE
+ CACHE 1;
+
+
+--
+-- TOC entry 4393 (class 0 OID 0)
+-- Dependencies: 293
+-- Name: login_template_id_seq; Type: SEQUENCE OWNED BY; Schema: instance; Owner: -
+--
+
+ALTER SEQUENCE instance.login_template_id_seq OWNED BY instance.login_template.id;
+
+
+--
+-- TOC entry 268 (class 1259 OID 17110)
+-- Name: login_token_fixed; Type: TABLE; Schema: instance; Owner: -
+--
+
+CREATE TABLE instance.login_token_fixed (
+ login_id integer NOT NULL,
+ token character varying(48) NOT NULL,
+ date_create bigint NOT NULL,
+ context instance.token_fixed_context NOT NULL,
+ name character varying(64),
+ id integer NOT NULL
+);
+
+
+--
+-- TOC entry 291 (class 1259 OID 18551)
+-- Name: login_token_fixed_id_seq; Type: SEQUENCE; Schema: instance; Owner: -
+--
+
+CREATE SEQUENCE instance.login_token_fixed_id_seq
+ AS integer
+ START WITH 1
+ INCREMENT BY 1
+ NO MINVALUE
+ NO MAXVALUE
+ CACHE 1;
+
+
+--
+-- TOC entry 4394 (class 0 OID 0)
+-- Dependencies: 291
+-- Name: login_token_fixed_id_seq; Type: SEQUENCE OWNED BY; Schema: instance; Owner: -
+--
+
+ALTER SEQUENCE instance.login_token_fixed_id_seq OWNED BY instance.login_token_fixed.id;
+
+
+--
+-- TOC entry 301 (class 1259 OID 18977)
+-- Name: login_widget_group; Type: TABLE; Schema: instance; Owner: -
+--
+
+CREATE TABLE instance.login_widget_group (
+ id uuid DEFAULT gen_random_uuid() NOT NULL,
+ login_id integer NOT NULL,
+ title character varying(64) NOT NULL,
+ "position" smallint NOT NULL
+);
+
+
+--
+-- TOC entry 302 (class 1259 OID 18990)
+-- Name: login_widget_group_item; Type: TABLE; Schema: instance; Owner: -
+--
+
+CREATE TABLE instance.login_widget_group_item (
+ login_widget_group_id uuid NOT NULL,
+ "position" smallint NOT NULL,
+ widget_id uuid,
+ module_id uuid,
+ content instance.widget_content NOT NULL
+);
+
+
+--
+-- TOC entry 269 (class 1259 OID 17113)
+-- Name: mail_account; Type: TABLE; Schema: instance; Owner: -
+--
+
+CREATE TABLE instance.mail_account (
+ id integer NOT NULL,
+ name character varying(64) NOT NULL,
+ mode instance.mail_account_mode NOT NULL,
+ username text NOT NULL,
+ password text NOT NULL,
+ start_tls boolean NOT NULL,
+ send_as text,
+ host_name text NOT NULL,
+ host_port integer NOT NULL,
+ auth_method instance.mail_account_auth_method NOT NULL,
+ oauth_client_id integer,
+ comment text
+);
+
+
+--
+-- TOC entry 270 (class 1259 OID 17119)
+-- Name: mail_account_id_seq; Type: SEQUENCE; Schema: instance; Owner: -
+--
+
+CREATE SEQUENCE instance.mail_account_id_seq
+ AS integer
+ START WITH 1
+ INCREMENT BY 1
+ NO MINVALUE
+ NO MAXVALUE
+ CACHE 1;
+
+
+--
+-- TOC entry 4395 (class 0 OID 0)
+-- Dependencies: 270
+-- Name: mail_account_id_seq; Type: SEQUENCE OWNED BY; Schema: instance; Owner: -
+--
+
+ALTER SEQUENCE instance.mail_account_id_seq OWNED BY instance.mail_account.id;
+
+
+--
+-- TOC entry 271 (class 1259 OID 17121)
+-- Name: mail_spool; Type: TABLE; Schema: instance; Owner: -
+--
+
+CREATE TABLE instance.mail_spool (
+ id integer NOT NULL,
+ mail_account_id integer,
+ attribute_id uuid,
+ record_id_wofk bigint,
+ from_list text DEFAULT ''::text NOT NULL,
+ to_list text NOT NULL,
+ cc_list text DEFAULT ''::text NOT NULL,
+ bcc_list text DEFAULT ''::text NOT NULL,
+ subject text NOT NULL,
+ body text NOT NULL,
+ attempt_count integer DEFAULT 0 NOT NULL,
+ attempt_date bigint DEFAULT 0 NOT NULL,
+ date bigint NOT NULL,
+ outgoing boolean NOT NULL
+);
+
+
+--
+-- TOC entry 272 (class 1259 OID 17132)
+-- Name: mail_spool_file; Type: TABLE; Schema: instance; Owner: -
+--
+
+CREATE TABLE instance.mail_spool_file (
+ mail_id integer NOT NULL,
+ "position" integer NOT NULL,
+ file bytea NOT NULL,
+ file_name text NOT NULL,
+ file_size integer NOT NULL
+);
+
+
+--
+-- TOC entry 273 (class 1259 OID 17138)
+-- Name: mail_spool_id_seq; Type: SEQUENCE; Schema: instance; Owner: -
+--
+
+CREATE SEQUENCE instance.mail_spool_id_seq
+ AS integer
+ START WITH 1
+ INCREMENT BY 1
+ NO MINVALUE
+ NO MAXVALUE
+ CACHE 1;
+
+
+--
+-- TOC entry 4396 (class 0 OID 0)
+-- Dependencies: 273
+-- Name: mail_spool_id_seq; Type: SEQUENCE OWNED BY; Schema: instance; Owner: -
+--
+
+ALTER SEQUENCE instance.mail_spool_id_seq OWNED BY instance.mail_spool.id;
+
+
+--
+-- TOC entry 299 (class 1259 OID 18913)
+-- Name: mail_traffic; Type: TABLE; Schema: instance; Owner: -
+--
+
+CREATE TABLE instance.mail_traffic (
+ mail_account_id integer,
+ from_list text DEFAULT ''::text NOT NULL,
+ to_list text NOT NULL,
+ cc_list text DEFAULT ''::text NOT NULL,
+ bcc_list text DEFAULT ''::text NOT NULL,
+ subject text NOT NULL,
+ date bigint NOT NULL,
+ outgoing boolean NOT NULL,
+ files text[]
+);
+
+
+--
+-- TOC entry 274 (class 1259 OID 17140)
+-- Name: module_meta; Type: TABLE; Schema: instance; Owner: -
+--
+
+CREATE TABLE instance.module_meta (
+ module_id uuid NOT NULL,
+ hidden boolean NOT NULL,
+ "position" integer,
+ owner boolean,
+ hash character(44) NOT NULL,
+ date_change bigint DEFAULT date_part('epoch'::text, now()) NOT NULL,
+ languages_custom character(5)[]
+);
+
+
+--
+-- TOC entry 304 (class 1259 OID 19049)
+-- Name: oauth_client; Type: TABLE; Schema: instance; Owner: -
+--
+
+CREATE TABLE instance.oauth_client (
+ id integer NOT NULL,
+ name character varying(64) NOT NULL,
+ tenant text NOT NULL,
+ client_id text NOT NULL,
+ client_secret text NOT NULL,
+ date_expiry bigint NOT NULL,
+ scopes text[] NOT NULL,
+ token_url text NOT NULL
+);
+
+
+--
+-- TOC entry 303 (class 1259 OID 19047)
+-- Name: oauth_client_id_seq; Type: SEQUENCE; Schema: instance; Owner: -
+--
+
+CREATE SEQUENCE instance.oauth_client_id_seq
+ AS integer
+ START WITH 1
+ INCREMENT BY 1
+ NO MINVALUE
+ NO MAXVALUE
+ CACHE 1;
+
+
+--
+-- TOC entry 4397 (class 0 OID 0)
+-- Dependencies: 303
+-- Name: oauth_client_id_seq; Type: SEQUENCE OWNED BY; Schema: instance; Owner: -
+--
+
+ALTER SEQUENCE instance.oauth_client_id_seq OWNED BY instance.oauth_client.id;
+
+
+--
+-- TOC entry 275 (class 1259 OID 17144)
+-- Name: preset_record; Type: TABLE; Schema: instance; Owner: -
+--
+
+CREATE TABLE instance.preset_record (
+ preset_id uuid NOT NULL,
+ record_id_wofk bigint NOT NULL
+);
+
+
+--
+-- TOC entry 297 (class 1259 OID 18811)
+-- Name: pwa_domain; Type: TABLE; Schema: instance; Owner: -
+--
+
+CREATE TABLE instance.pwa_domain (
+ module_id uuid NOT NULL,
+ domain text NOT NULL
+);
+
+
+--
+-- TOC entry 276 (class 1259 OID 17147)
+-- Name: repo_module; Type: TABLE; Schema: instance; Owner: -
+--
+
+CREATE TABLE instance.repo_module (
+ module_id_wofk uuid NOT NULL,
+ name character varying(32) NOT NULL,
+ author character varying(256) NOT NULL,
+ release_build integer NOT NULL,
+ release_build_app integer NOT NULL,
+ release_date bigint NOT NULL,
+ file uuid NOT NULL,
+ in_store boolean,
+ change_log text
+);
+
+
+--
+-- TOC entry 277 (class 1259 OID 17153)
+-- Name: repo_module_meta; Type: TABLE; Schema: instance; Owner: -
+--
+
+CREATE TABLE instance.repo_module_meta (
+ module_id_wofk uuid NOT NULL,
+ language_code character(5) NOT NULL,
+ title character varying(256) NOT NULL,
+ description character varying(512) NOT NULL,
+ support_page text
+);
+
+
+--
+-- TOC entry 296 (class 1259 OID 18781)
+-- Name: rest_spool; Type: TABLE; Schema: instance; Owner: -
+--
+
+CREATE TABLE instance.rest_spool (
+ id uuid DEFAULT gen_random_uuid() NOT NULL,
+ pg_function_id_callback uuid,
+ method instance.rest_method NOT NULL,
+ headers jsonb,
+ url text NOT NULL,
+ body text,
+ callback_value text,
+ skip_verify boolean NOT NULL,
+ date_added bigint NOT NULL,
+ attempt_count integer DEFAULT 0 NOT NULL
+);
+
+
+--
+-- TOC entry 278 (class 1259 OID 17159)
+-- Name: schedule; Type: TABLE; Schema: instance; Owner: -
+--
+
+CREATE TABLE instance.schedule (
+ id integer NOT NULL,
+ pg_function_schedule_id uuid,
+ task_name character varying(32),
+ date_attempt bigint NOT NULL,
+ date_success bigint NOT NULL
+);
+
+
+--
+-- TOC entry 279 (class 1259 OID 17162)
+-- Name: schedule_id_seq; Type: SEQUENCE; Schema: instance; Owner: -
+--
+
+CREATE SEQUENCE instance.schedule_id_seq
+ AS integer
+ START WITH 1
+ INCREMENT BY 1
+ NO MINVALUE
+ NO MAXVALUE
+ CACHE 1;
+
+
+--
+-- TOC entry 4398 (class 0 OID 0)
+-- Dependencies: 279
+-- Name: schedule_id_seq; Type: SEQUENCE OWNED BY; Schema: instance; Owner: -
+--
+
+ALTER SEQUENCE instance.schedule_id_seq OWNED BY instance.schedule.id;
+
+
+--
+-- TOC entry 280 (class 1259 OID 17164)
+-- Name: task; Type: TABLE; Schema: instance; Owner: -
--
CREATE TABLE instance.task (
- name character varying(32) NOT NULL,
- cluster_master_only boolean NOT NULL,
- embedded_only boolean NOT NULL,
- interval_seconds integer NOT NULL,
- active boolean NOT NULL,
- active_only boolean NOT NULL
+ name character varying(32) NOT NULL,
+ cluster_master_only boolean NOT NULL,
+ embedded_only boolean NOT NULL,
+ interval_seconds integer NOT NULL,
+ active boolean NOT NULL,
+ active_only boolean NOT NULL
);
--
--- TOC entry 279 (class 1259 OID 18471)
+-- TOC entry 281 (class 1259 OID 17167)
-- Name: node; Type: TABLE; Schema: instance_cluster; Owner: -
--
CREATE TABLE instance_cluster.node (
- id uuid NOT NULL,
- name text NOT NULL,
- hostname text NOT NULL,
- cluster_master boolean NOT NULL,
- date_check_in bigint NOT NULL,
- date_started bigint NOT NULL,
- stat_sessions integer NOT NULL,
- stat_memory integer NOT NULL,
- running boolean NOT NULL
+ id uuid NOT NULL,
+ name text NOT NULL,
+ hostname text NOT NULL,
+ cluster_master boolean NOT NULL,
+ date_check_in bigint NOT NULL,
+ date_started bigint NOT NULL,
+ stat_memory integer NOT NULL,
+ running boolean NOT NULL
);
--
--- TOC entry 280 (class 1259 OID 18479)
+-- TOC entry 282 (class 1259 OID 17173)
-- Name: node_event; Type: TABLE; Schema: instance_cluster; Owner: -
--
CREATE TABLE instance_cluster.node_event (
- node_id uuid NOT NULL,
- content instance_cluster.node_event_content NOT NULL,
- payload text NOT NULL
+ node_id uuid NOT NULL,
+ content instance_cluster.node_event_content NOT NULL,
+ payload text NOT NULL,
+ target_address text,
+ target_device smallint,
+ target_login_id integer
);
--
--- TOC entry 282 (class 1259 OID 18511)
+-- TOC entry 283 (class 1259 OID 17179)
-- Name: node_schedule; Type: TABLE; Schema: instance_cluster; Owner: -
--
CREATE TABLE instance_cluster.node_schedule (
- node_id uuid NOT NULL,
- schedule_id integer NOT NULL,
- date_attempt bigint NOT NULL,
- date_success bigint NOT NULL
+ node_id uuid NOT NULL,
+ schedule_id integer NOT NULL,
+ date_attempt bigint NOT NULL,
+ date_success bigint NOT NULL
);
--
--- TOC entry 3297 (class 2604 OID 16974)
+-- TOC entry 3512 (class 2604 OID 17182)
-- Name: ldap id; Type: DEFAULT; Schema: instance; Owner: -
--
@@ -2599,7 +3995,7 @@ ALTER TABLE ONLY instance.ldap ALTER COLUMN id SET DEFAULT nextval('instance.lda
--
--- TOC entry 3298 (class 2604 OID 16975)
+-- TOC entry 3513 (class 2604 OID 17183)
-- Name: login id; Type: DEFAULT; Schema: instance; Owner: -
--
@@ -2607,7 +4003,23 @@ ALTER TABLE ONLY instance.login ALTER COLUMN id SET DEFAULT nextval('instance.lo
--
--- TOC entry 3299 (class 2604 OID 16976)
+-- TOC entry 3524 (class 2604 OID 18695)
+-- Name: login_template id; Type: DEFAULT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.login_template ALTER COLUMN id SET DEFAULT nextval('instance.login_template_id_seq'::regclass);
+
+
+--
+-- TOC entry 3514 (class 2604 OID 18553)
+-- Name: login_token_fixed id; Type: DEFAULT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.login_token_fixed ALTER COLUMN id SET DEFAULT nextval('instance.login_token_fixed_id_seq'::regclass);
+
+
+--
+-- TOC entry 3515 (class 2604 OID 17184)
-- Name: mail_account id; Type: DEFAULT; Schema: instance; Owner: -
--
@@ -2615,7 +4027,7 @@ ALTER TABLE ONLY instance.mail_account ALTER COLUMN id SET DEFAULT nextval('inst
--
--- TOC entry 3305 (class 2604 OID 16977)
+-- TOC entry 3516 (class 2604 OID 17185)
-- Name: mail_spool id; Type: DEFAULT; Schema: instance; Owner: -
--
@@ -2623,7 +4035,15 @@ ALTER TABLE ONLY instance.mail_spool ALTER COLUMN id SET DEFAULT nextval('instan
--
--- TOC entry 3307 (class 2604 OID 18503)
+-- TOC entry 3531 (class 2604 OID 19052)
+-- Name: oauth_client id; Type: DEFAULT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.oauth_client ALTER COLUMN id SET DEFAULT nextval('instance.oauth_client_id_seq'::regclass);
+
+
+--
+-- TOC entry 3523 (class 2604 OID 17186)
-- Name: schedule id; Type: DEFAULT; Schema: instance; Owner: -
--
@@ -2631,646 +4051,937 @@ ALTER TABLE ONLY instance.schedule ALTER COLUMN id SET DEFAULT nextval('instance
--
--- TOC entry 3309 (class 2606 OID 16979)
+-- TOC entry 3892 (class 2606 OID 18660)
+-- Name: api api_name_version_key; Type: CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.api
+ ADD CONSTRAINT api_name_version_key UNIQUE (module_id, name, version) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 3894 (class 2606 OID 18658)
+-- Name: api api_pkey; Type: CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.api
+ ADD CONSTRAINT api_pkey PRIMARY KEY (id);
+
+
+--
+-- TOC entry 3883 (class 2606 OID 18441)
+-- Name: article article_name_unique; Type: CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.article
+ ADD CONSTRAINT article_name_unique UNIQUE (module_id, name) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 3885 (class 2606 OID 18439)
+-- Name: article article_pkey; Type: CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.article
+ ADD CONSTRAINT article_pkey PRIMARY KEY (id);
+
+
+--
+-- TOC entry 3536 (class 2606 OID 17188)
-- Name: attribute attribute_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.attribute
- ADD CONSTRAINT attribute_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT attribute_pkey PRIMARY KEY (id);
+
+
+--
+-- TOC entry 3953 (class 2606 OID 19292)
+-- Name: client_event client_event_pkey; Type: CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.client_event
+ ADD CONSTRAINT client_event_pkey PRIMARY KEY (id);
--
--- TOC entry 3585 (class 2606 OID 18391)
+-- TOC entry 3563 (class 2606 OID 17190)
-- Name: collection_consumer collection_consumer_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.collection_consumer
- ADD CONSTRAINT collection_consumer_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT collection_consumer_pkey PRIMARY KEY (id);
--
--- TOC entry 3581 (class 2606 OID 18186)
+-- TOC entry 3559 (class 2606 OID 17192)
-- Name: collection collection_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.collection
- ADD CONSTRAINT collection_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT collection_pkey PRIMARY KEY (id);
--
--- TOC entry 3324 (class 2606 OID 16981)
+-- TOC entry 3570 (class 2606 OID 17194)
-- Name: column column_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app."column"
- ADD CONSTRAINT column_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT column_pkey PRIMARY KEY (id);
--
--- TOC entry 3336 (class 2606 OID 16983)
+-- TOC entry 3584 (class 2606 OID 17196)
-- Name: field_button field_button_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.field_button
- ADD CONSTRAINT field_button_pkey PRIMARY KEY (field_id);
+ ADD CONSTRAINT field_button_pkey PRIMARY KEY (field_id);
--
--- TOC entry 3339 (class 2606 OID 16985)
+-- TOC entry 3587 (class 2606 OID 17198)
-- Name: field_calendar field_calendar_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.field_calendar
- ADD CONSTRAINT field_calendar_pkey PRIMARY KEY (field_id);
+ ADD CONSTRAINT field_calendar_pkey PRIMARY KEY (field_id);
--
--- TOC entry 3547 (class 2606 OID 17938)
+-- TOC entry 3593 (class 2606 OID 17200)
-- Name: field_chart field_chart_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.field_chart
- ADD CONSTRAINT field_chart_pkey PRIMARY KEY (field_id);
+ ADD CONSTRAINT field_chart_pkey PRIMARY KEY (field_id);
--
--- TOC entry 3345 (class 2606 OID 16987)
+-- TOC entry 3595 (class 2606 OID 17202)
-- Name: field_container field_container_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.field_container
- ADD CONSTRAINT field_container_pkey PRIMARY KEY (field_id);
+ ADD CONSTRAINT field_container_pkey PRIMARY KEY (field_id);
--
--- TOC entry 3347 (class 2606 OID 16989)
+-- TOC entry 3597 (class 2606 OID 17204)
-- Name: field_data field_data_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.field_data
- ADD CONSTRAINT field_data_pkey PRIMARY KEY (field_id);
+ ADD CONSTRAINT field_data_pkey PRIMARY KEY (field_id);
--
--- TOC entry 3352 (class 2606 OID 16991)
+-- TOC entry 3602 (class 2606 OID 17206)
-- Name: field_data_relationship field_data_relationship_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.field_data_relationship
- ADD CONSTRAINT field_data_relationship_pkey PRIMARY KEY (field_id);
+ ADD CONSTRAINT field_data_relationship_pkey PRIMARY KEY (field_id);
--
--- TOC entry 3355 (class 2606 OID 16993)
+-- TOC entry 3605 (class 2606 OID 17208)
-- Name: field_data_relationship_preset field_data_relationship_preset_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.field_data_relationship_preset
- ADD CONSTRAINT field_data_relationship_preset_pkey PRIMARY KEY (field_id, preset_id) DEFERRABLE INITIALLY DEFERRED;
+ ADD CONSTRAINT field_data_relationship_preset_pkey PRIMARY KEY (field_id, preset_id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3359 (class 2606 OID 16996)
+-- TOC entry 3609 (class 2606 OID 17211)
-- Name: field_header field_header_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.field_header
- ADD CONSTRAINT field_header_pkey PRIMARY KEY (field_id);
+ ADD CONSTRAINT field_header_pkey PRIMARY KEY (field_id);
+
+
+--
+-- TOC entry 3910 (class 2606 OID 18879)
+-- Name: field_kanban field_kanban_pkey; Type: CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_kanban
+ ADD CONSTRAINT field_kanban_pkey PRIMARY KEY (field_id);
--
--- TOC entry 3361 (class 2606 OID 16998)
+-- TOC entry 3611 (class 2606 OID 17213)
-- Name: field_list field_list_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.field_list
- ADD CONSTRAINT field_list_pkey PRIMARY KEY (field_id);
+ ADD CONSTRAINT field_list_pkey PRIMARY KEY (field_id);
--
--- TOC entry 3330 (class 2606 OID 17000)
+-- TOC entry 3577 (class 2606 OID 17215)
-- Name: field field_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.field
- ADD CONSTRAINT field_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT field_pkey PRIMARY KEY (id);
--
--- TOC entry 3579 (class 2606 OID 18169)
+-- TOC entry 3977 (class 2606 OID 19491)
+-- Name: field_variable field_variable_pkey; Type: CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_variable
+ ADD CONSTRAINT field_variable_pkey PRIMARY KEY (field_id);
+
+
+--
+-- TOC entry 3951 (class 2606 OID 19207)
+-- Name: form_action form_action_pkey; Type: CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form_action
+ ADD CONSTRAINT form_action_pkey PRIMARY KEY (id);
+
+
+--
+-- TOC entry 3623 (class 2606 OID 17217)
-- Name: form_function form_function_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.form_function
- ADD CONSTRAINT form_function_pkey PRIMARY KEY (form_id, "position");
+ ADD CONSTRAINT form_function_pkey PRIMARY KEY (form_id, "position");
--
--- TOC entry 3366 (class 2606 OID 17002)
+-- TOC entry 3617 (class 2606 OID 17219)
-- Name: form form_name_unique; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.form
- ADD CONSTRAINT form_name_unique UNIQUE (module_id, name);
+ ADD CONSTRAINT form_name_unique UNIQUE (module_id, name);
--
--- TOC entry 3368 (class 2606 OID 17004)
+-- TOC entry 3619 (class 2606 OID 17221)
-- Name: form form_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.form
- ADD CONSTRAINT form_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT form_pkey PRIMARY KEY (id);
--
--- TOC entry 3374 (class 2606 OID 17006)
+-- TOC entry 3629 (class 2606 OID 17223)
-- Name: form_state_condition form_state_condition_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.form_state_condition
- ADD CONSTRAINT form_state_condition_pkey PRIMARY KEY (form_state_id, "position");
+ ADD CONSTRAINT form_state_condition_pkey PRIMARY KEY (form_state_id, "position");
--
--- TOC entry 3597 (class 2606 OID 18268)
+-- TOC entry 3639 (class 2606 OID 17225)
-- Name: form_state_condition_side form_state_condition_side_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.form_state_condition_side
- ADD CONSTRAINT form_state_condition_side_pkey PRIMARY KEY (form_state_id, form_state_condition_position, side);
+ ADD CONSTRAINT form_state_condition_side_pkey PRIMARY KEY (form_state_id, form_state_condition_position, side);
--
--- TOC entry 3371 (class 2606 OID 17008)
+-- TOC entry 3626 (class 2606 OID 17227)
-- Name: form_state form_state_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.form_state
- ADD CONSTRAINT form_state_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT form_state_pkey PRIMARY KEY (id);
--
--- TOC entry 3379 (class 2606 OID 17010)
+-- TOC entry 3646 (class 2606 OID 17229)
-- Name: icon icon_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.icon
- ADD CONSTRAINT icon_pkey PRIMARY KEY (id);
-
-
---
--- TOC entry 3566 (class 2606 OID 18084)
--- Name: js_function js_function_module_id_name_key; Type: CONSTRAINT; Schema: app; Owner: -
---
-
-ALTER TABLE ONLY app.js_function
- ADD CONSTRAINT js_function_module_id_name_key UNIQUE (module_id, name) DEFERRABLE INITIALLY DEFERRED;
+ ADD CONSTRAINT icon_pkey PRIMARY KEY (id);
--
--- TOC entry 3568 (class 2606 OID 18082)
+-- TOC entry 3652 (class 2606 OID 17234)
-- Name: js_function js_function_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.js_function
- ADD CONSTRAINT js_function_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT js_function_pkey PRIMARY KEY (id);
--
--- TOC entry 3543 (class 2606 OID 17898)
+-- TOC entry 3663 (class 2606 OID 17236)
-- Name: login_form login_form_name_unique; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.login_form
- ADD CONSTRAINT login_form_name_unique UNIQUE (module_id, name) DEFERRABLE INITIALLY DEFERRED;
+ ADD CONSTRAINT login_form_name_unique UNIQUE (module_id, name) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3545 (class 2606 OID 17896)
+-- TOC entry 3665 (class 2606 OID 17239)
-- Name: login_form login_form_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.login_form
- ADD CONSTRAINT login_form_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT login_form_pkey PRIMARY KEY (id);
--
--- TOC entry 3386 (class 2606 OID 17012)
+-- TOC entry 3672 (class 2606 OID 17241)
-- Name: menu menu_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.menu
- ADD CONSTRAINT menu_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT menu_pkey PRIMARY KEY (id);
--
--- TOC entry 3397 (class 2606 OID 17014)
+-- TOC entry 3983 (class 2606 OID 19567)
+-- Name: menu_tab menu_tab_pkey; Type: CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.menu_tab
+ ADD CONSTRAINT menu_tab_pkey PRIMARY KEY (id);
+
+
+--
+-- TOC entry 3687 (class 2606 OID 17243)
-- Name: module_language module_language_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.module_language
- ADD CONSTRAINT module_language_pkey PRIMARY KEY (module_id, language_code);
+ ADD CONSTRAINT module_language_pkey PRIMARY KEY (module_id, language_code);
--
--- TOC entry 3391 (class 2606 OID 17016)
+-- TOC entry 3681 (class 2606 OID 17245)
-- Name: module module_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.module
- ADD CONSTRAINT module_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT module_pkey PRIMARY KEY (id);
--
--- TOC entry 3558 (class 2606 OID 18018)
+-- TOC entry 3692 (class 2606 OID 17247)
-- Name: module_start_form module_start_form_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.module_start_form
- ADD CONSTRAINT module_start_form_pkey PRIMARY KEY (module_id, "position");
+ ADD CONSTRAINT module_start_form_pkey PRIMARY KEY (module_id, "position");
--
--- TOC entry 3393 (class 2606 OID 17018)
+-- TOC entry 3683 (class 2606 OID 18547)
-- Name: module module_unique_name; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.module
- ADD CONSTRAINT module_unique_name UNIQUE (name);
+ ADD CONSTRAINT module_unique_name UNIQUE (name);
--
--- TOC entry 3400 (class 2606 OID 17020)
+-- TOC entry 3699 (class 2606 OID 18549)
-- Name: pg_function pg_function_name_unique; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.pg_function
- ADD CONSTRAINT pg_function_name_unique UNIQUE (module_id, name);
+ ADD CONSTRAINT pg_function_name_unique UNIQUE (module_id, name);
--
--- TOC entry 3402 (class 2606 OID 17022)
+-- TOC entry 3701 (class 2606 OID 17253)
-- Name: pg_function pg_function_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.pg_function
- ADD CONSTRAINT pg_function_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT pg_function_pkey PRIMARY KEY (id);
--
--- TOC entry 3410 (class 2606 OID 17850)
+-- TOC entry 3709 (class 2606 OID 17255)
-- Name: pg_function_schedule pg_function_schedule_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.pg_function_schedule
- ADD CONSTRAINT pg_function_schedule_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT pg_function_schedule_pkey PRIMARY KEY (id);
--
--- TOC entry 3413 (class 2606 OID 17026)
+-- TOC entry 3713 (class 2606 OID 17257)
-- Name: pg_index pg_index_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.pg_index
- ADD CONSTRAINT pg_index_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT pg_index_pkey PRIMARY KEY (id);
--
--- TOC entry 3419 (class 2606 OID 17028)
+-- TOC entry 3720 (class 2606 OID 17259)
-- Name: pg_trigger pg_trigger_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.pg_trigger
- ADD CONSTRAINT pg_trigger_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT pg_trigger_pkey PRIMARY KEY (id);
--
--- TOC entry 3553 (class 2606 OID 17984)
+-- TOC entry 3779 (class 2606 OID 17261)
-- Name: relation_policy policy_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.relation_policy
- ADD CONSTRAINT policy_pkey PRIMARY KEY (relation_id, "position");
+ ADD CONSTRAINT policy_pkey PRIMARY KEY (relation_id, "position");
--
--- TOC entry 3422 (class 2606 OID 17030)
+-- TOC entry 3723 (class 2606 OID 18545)
-- Name: preset preset_name_unique; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.preset
- ADD CONSTRAINT preset_name_unique UNIQUE (relation_id, name);
+ ADD CONSTRAINT preset_name_unique UNIQUE (relation_id, name);
--
--- TOC entry 3424 (class 2606 OID 17032)
+-- TOC entry 3725 (class 2606 OID 17265)
-- Name: preset preset_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.preset
- ADD CONSTRAINT preset_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT preset_pkey PRIMARY KEY (id);
--
--- TOC entry 3429 (class 2606 OID 17034)
+-- TOC entry 3730 (class 2606 OID 17267)
-- Name: preset_value preset_value_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.preset_value
- ADD CONSTRAINT preset_value_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT preset_value_pkey PRIMARY KEY (id);
--
--- TOC entry 3438 (class 2606 OID 17036)
+-- TOC entry 3740 (class 2606 OID 17269)
-- Name: query_choice query_choice_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.query_choice
- ADD CONSTRAINT query_choice_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT query_choice_pkey PRIMARY KEY (id);
--
--- TOC entry 3440 (class 2606 OID 17038)
+-- TOC entry 3742 (class 2606 OID 17271)
-- Name: query_choice query_choice_query_id_name_key; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.query_choice
- ADD CONSTRAINT query_choice_query_id_name_key UNIQUE (query_id, name) DEFERRABLE INITIALLY DEFERRED;
+ ADD CONSTRAINT query_choice_query_id_name_key UNIQUE (query_id, name) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3445 (class 2606 OID 17041)
+-- TOC entry 3747 (class 2606 OID 19525)
-- Name: query_filter query_filter_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.query_filter
- ADD CONSTRAINT query_filter_pkey PRIMARY KEY (query_id, "position");
+ ADD CONSTRAINT query_filter_pkey PRIMARY KEY (query_id, index, "position");
--
--- TOC entry 3453 (class 2606 OID 17043)
+-- TOC entry 3758 (class 2606 OID 19522)
-- Name: query_filter_side query_filter_side_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.query_filter_side
- ADD CONSTRAINT query_filter_side_pkey PRIMARY KEY (query_id, query_filter_position, side);
+ ADD CONSTRAINT query_filter_side_pkey PRIMARY KEY (query_id, query_filter_index, query_filter_position, side);
--
--- TOC entry 3459 (class 2606 OID 17045)
+-- TOC entry 3764 (class 2606 OID 17278)
-- Name: query_join query_join_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.query_join
- ADD CONSTRAINT query_join_pkey PRIMARY KEY (query_id, "position");
+ ADD CONSTRAINT query_join_pkey PRIMARY KEY (query_id, "position");
--
--- TOC entry 3465 (class 2606 OID 17047)
+-- TOC entry 3770 (class 2606 OID 17280)
-- Name: query_order query_order_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.query_order
- ADD CONSTRAINT query_order_pkey PRIMARY KEY (query_id, "position");
+ ADD CONSTRAINT query_order_pkey PRIMARY KEY (query_id, "position");
--
--- TOC entry 3435 (class 2606 OID 17049)
+-- TOC entry 3737 (class 2606 OID 17282)
-- Name: query query_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.query
- ADD CONSTRAINT query_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT query_pkey PRIMARY KEY (id);
--
--- TOC entry 3468 (class 2606 OID 17051)
+-- TOC entry 3773 (class 2606 OID 17284)
-- Name: relation relation_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.relation
- ADD CONSTRAINT relation_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT relation_pkey PRIMARY KEY (id);
--
--- TOC entry 3481 (class 2606 OID 17053)
+-- TOC entry 3795 (class 2606 OID 17286)
-- Name: role_child role_child_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.role_child
- ADD CONSTRAINT role_child_pkey PRIMARY KEY (role_id, role_id_child);
+ ADD CONSTRAINT role_child_pkey PRIMARY KEY (role_id, role_id_child);
--
--- TOC entry 3470 (class 2606 OID 17055)
+-- TOC entry 3781 (class 2606 OID 17288)
-- Name: role role_name_module_id_key; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.role
- ADD CONSTRAINT role_name_module_id_key UNIQUE (name, module_id) DEFERRABLE INITIALLY DEFERRED;
+ ADD CONSTRAINT role_name_module_id_key UNIQUE (name, module_id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3472 (class 2606 OID 17058)
+-- TOC entry 3783 (class 2606 OID 17291)
-- Name: role role_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
ALTER TABLE ONLY app.role
- ADD CONSTRAINT role_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT role_pkey PRIMARY KEY (id);
--
--- TOC entry 3483 (class 2606 OID 17060)
--- Name: config config_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
+-- TOC entry 3879 (class 2606 OID 18409)
+-- Name: tab tab_field_id_position_key; Type: CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY instance.config
- ADD CONSTRAINT config_pkey PRIMARY KEY (name);
+ALTER TABLE ONLY app.tab
+ ADD CONSTRAINT tab_field_id_position_key UNIQUE (field_id, "position") DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3485 (class 2606 OID 17062)
--- Name: data_log data_log_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
+-- TOC entry 3881 (class 2606 OID 18407)
+-- Name: tab tab_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY instance.data_log
- ADD CONSTRAINT data_log_pkey PRIMARY KEY (id);
+ALTER TABLE ONLY app.tab
+ ADD CONSTRAINT tab_pkey PRIMARY KEY (id);
--
--- TOC entry 3492 (class 2606 OID 17064)
--- Name: ldap ldap_name_key; Type: CONSTRAINT; Schema: instance; Owner: -
+-- TOC entry 3975 (class 2606 OID 19449)
+-- Name: variable variable_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY instance.ldap
- ADD CONSTRAINT ldap_name_key UNIQUE (name);
+ALTER TABLE ONLY app.variable
+ ADD CONSTRAINT variable_pkey PRIMARY KEY (id);
--
--- TOC entry 3494 (class 2606 OID 17066)
--- Name: ldap ldap_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
+-- TOC entry 3917 (class 2606 OID 18944)
+-- Name: widget widget_pkey; Type: CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY instance.ldap
- ADD CONSTRAINT ldap_pkey PRIMARY KEY (id);
+ALTER TABLE ONLY app.widget
+ ADD CONSTRAINT widget_pkey PRIMARY KEY (id);
--
--- TOC entry 3502 (class 2606 OID 17068)
--- Name: login login_name_key; Type: CONSTRAINT; Schema: instance; Owner: -
+-- TOC entry 3797 (class 2606 OID 17293)
+-- Name: config config_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY instance.login
- ADD CONSTRAINT login_name_key UNIQUE (name);
+ALTER TABLE ONLY instance.config
+ ADD CONSTRAINT config_pkey PRIMARY KEY (name);
+
+
+--
+-- TOC entry 3799 (class 2606 OID 17295)
+-- Name: data_log data_log_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.data_log
+ ADD CONSTRAINT data_log_pkey PRIMARY KEY (id);
+
+
+--
+-- TOC entry 3870 (class 2606 OID 18371)
+-- Name: file file_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.file
+ ADD CONSTRAINT file_pkey PRIMARY KEY (id);
+
+
+--
+-- TOC entry 3873 (class 2606 OID 18376)
+-- Name: file_version file_version_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.file_version
+ ADD CONSTRAINT file_version_pkey PRIMARY KEY (file_id, version);
+
+
+--
+-- TOC entry 3969 (class 2606 OID 19420)
+-- Name: ldap_attribute_login_meta ldap_attribute_login_meta_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.ldap_attribute_login_meta
+ ADD CONSTRAINT ldap_attribute_login_meta_pkey PRIMARY KEY (ldap_id);
+
+
+--
+-- TOC entry 3807 (class 2606 OID 17297)
+-- Name: ldap ldap_name_key; Type: CONSTRAINT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.ldap
+ ADD CONSTRAINT ldap_name_key UNIQUE (name);
+
+
+--
+-- TOC entry 3809 (class 2606 OID 17299)
+-- Name: ldap ldap_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.ldap
+ ADD CONSTRAINT ldap_pkey PRIMARY KEY (id);
+
+
+--
+-- TOC entry 3960 (class 2606 OID 19315)
+-- Name: login_client_event login_client_event_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.login_client_event
+ ADD CONSTRAINT login_client_event_pkey PRIMARY KEY (login_id, client_event_id);
+
+
+--
+-- TOC entry 3988 (class 2606 OID 19610)
+-- Name: login_favorite login_favorite_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.login_favorite
+ ADD CONSTRAINT login_favorite_pkey PRIMARY KEY (id);
+
+
+--
+-- TOC entry 3967 (class 2606 OID 19397)
+-- Name: login_meta login_meta_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.login_meta
+ ADD CONSTRAINT login_meta_pkey PRIMARY KEY (login_id);
+
+
+--
+-- TOC entry 3817 (class 2606 OID 17301)
+-- Name: login login_name_key; Type: CONSTRAINT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.login
+ ADD CONSTRAINT login_name_key UNIQUE (name);
--
--- TOC entry 3504 (class 2606 OID 17070)
+-- TOC entry 3819 (class 2606 OID 17303)
-- Name: login login_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
--
ALTER TABLE ONLY instance.login
- ADD CONSTRAINT login_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT login_pkey PRIMARY KEY (id);
--
--- TOC entry 3508 (class 2606 OID 17072)
+-- TOC entry 3823 (class 2606 OID 17305)
-- Name: login_role login_role_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
--
ALTER TABLE ONLY instance.login_role
- ADD CONSTRAINT login_role_pkey PRIMARY KEY (login_id, role_id);
+ ADD CONSTRAINT login_role_pkey PRIMARY KEY (login_id, role_id);
+
+
+--
+-- TOC entry 3965 (class 2606 OID 19376)
+-- Name: login_session login_session_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.login_session
+ ADD CONSTRAINT login_session_pkey PRIMARY KEY (id);
--
--- TOC entry 3511 (class 2606 OID 17074)
--- Name: login_setting login_setting_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
+-- TOC entry 3828 (class 2606 OID 18710)
+-- Name: login_setting login_setting_login_id_unique; Type: CONSTRAINT; Schema: instance; Owner: -
--
ALTER TABLE ONLY instance.login_setting
- ADD CONSTRAINT login_setting_pkey PRIMARY KEY (login_id);
+ ADD CONSTRAINT login_setting_login_id_unique UNIQUE (login_id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3513 (class 2606 OID 17076)
+-- TOC entry 3830 (class 2606 OID 18713)
+-- Name: login_setting login_setting_login_template_id_unique; Type: CONSTRAINT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.login_setting
+ ADD CONSTRAINT login_setting_login_template_id_unique UNIQUE (login_template_id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 3897 (class 2606 OID 18702)
+-- Name: login_template login_template_name_unique; Type: CONSTRAINT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.login_template
+ ADD CONSTRAINT login_template_name_unique UNIQUE (name) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 3899 (class 2606 OID 18700)
+-- Name: login_template login_template_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.login_template
+ ADD CONSTRAINT login_template_pkey PRIMARY KEY (id);
+
+
+--
+-- TOC entry 3832 (class 2606 OID 18555)
-- Name: login_token_fixed login_token_fixed_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
--
ALTER TABLE ONLY instance.login_token_fixed
- ADD CONSTRAINT login_token_fixed_pkey PRIMARY KEY (login_id, token);
+ ADD CONSTRAINT login_token_fixed_pkey PRIMARY KEY (id);
--
--- TOC entry 3517 (class 2606 OID 17078)
+-- TOC entry 3927 (class 2606 OID 18997)
+-- Name: login_widget_group_item login_widget_group_item_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.login_widget_group_item
+ ADD CONSTRAINT login_widget_group_item_pkey PRIMARY KEY (login_widget_group_id, "position");
+
+
+--
+-- TOC entry 3921 (class 2606 OID 18982)
+-- Name: login_widget_group login_widget_group_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.login_widget_group
+ ADD CONSTRAINT login_widget_group_pkey PRIMARY KEY (id);
+
+
+--
+-- TOC entry 3837 (class 2606 OID 17311)
-- Name: mail_account mail_account_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
--
ALTER TABLE ONLY instance.mail_account
- ADD CONSTRAINT mail_account_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT mail_account_pkey PRIMARY KEY (id);
--
--- TOC entry 3525 (class 2606 OID 17080)
+-- TOC entry 3846 (class 2606 OID 17313)
-- Name: mail_spool_file mail_spool_file_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
--
ALTER TABLE ONLY instance.mail_spool_file
- ADD CONSTRAINT mail_spool_file_pkey PRIMARY KEY (mail_id, "position");
+ ADD CONSTRAINT mail_spool_file_pkey PRIMARY KEY (mail_id, "position");
--
--- TOC entry 3523 (class 2606 OID 17082)
+-- TOC entry 3844 (class 2606 OID 17315)
-- Name: mail_spool mail_spool_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
--
ALTER TABLE ONLY instance.mail_spool
- ADD CONSTRAINT mail_spool_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT mail_spool_pkey PRIMARY KEY (id);
+
+
+--
+-- TOC entry 3848 (class 2606 OID 17317)
+-- Name: module_meta module_option_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.module_meta
+ ADD CONSTRAINT module_option_pkey PRIMARY KEY (module_id);
--
--- TOC entry 3527 (class 2606 OID 17084)
--- Name: module_option module_option_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
+-- TOC entry 3929 (class 2606 OID 19057)
+-- Name: oauth_client oauth_clienty_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY instance.module_option
- ADD CONSTRAINT module_option_pkey PRIMARY KEY (module_id);
+ALTER TABLE ONLY instance.oauth_client
+ ADD CONSTRAINT oauth_clienty_pkey PRIMARY KEY (id);
--
--- TOC entry 3529 (class 2606 OID 17086)
+-- TOC entry 3850 (class 2606 OID 17319)
-- Name: preset_record preset_record_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
--
ALTER TABLE ONLY instance.preset_record
- ADD CONSTRAINT preset_record_pkey PRIMARY KEY (preset_id);
+ ADD CONSTRAINT preset_record_pkey PRIMARY KEY (preset_id);
+
+
+--
+-- TOC entry 3908 (class 2606 OID 18818)
+-- Name: pwa_domain pwa_domain_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.pwa_domain
+ ADD CONSTRAINT pwa_domain_pkey PRIMARY KEY (module_id);
--
--- TOC entry 3531 (class 2606 OID 17088)
+-- TOC entry 3852 (class 2606 OID 17321)
-- Name: repo_module repo_module_name_key; Type: CONSTRAINT; Schema: instance; Owner: -
--
ALTER TABLE ONLY instance.repo_module
- ADD CONSTRAINT repo_module_name_key UNIQUE (name);
+ ADD CONSTRAINT repo_module_name_key UNIQUE (name);
--
--- TOC entry 3533 (class 2606 OID 17090)
+-- TOC entry 3854 (class 2606 OID 17323)
-- Name: repo_module repo_module_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
--
ALTER TABLE ONLY instance.repo_module
- ADD CONSTRAINT repo_module_pkey PRIMARY KEY (module_id_wofk);
+ ADD CONSTRAINT repo_module_pkey PRIMARY KEY (module_id_wofk);
+
+
+--
+-- TOC entry 3906 (class 2606 OID 18790)
+-- Name: rest_spool rest_spool_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.rest_spool
+ ADD CONSTRAINT rest_spool_pkey PRIMARY KEY (id);
--
--- TOC entry 3536 (class 2606 OID 18505)
+-- TOC entry 3857 (class 2606 OID 17325)
-- Name: schedule schedule_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
--
ALTER TABLE ONLY instance.schedule
- ADD CONSTRAINT schedule_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT schedule_pkey PRIMARY KEY (id);
--
--- TOC entry 3538 (class 2606 OID 17092)
+-- TOC entry 3859 (class 2606 OID 17327)
-- Name: schedule scheduler_task_name_key; Type: CONSTRAINT; Schema: instance; Owner: -
--
ALTER TABLE ONLY instance.schedule
- ADD CONSTRAINT scheduler_task_name_key UNIQUE (task_name) DEFERRABLE INITIALLY DEFERRED;
+ ADD CONSTRAINT scheduler_task_name_key UNIQUE (task_name) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3540 (class 2606 OID 17095)
+-- TOC entry 3861 (class 2606 OID 17330)
-- Name: task task_pkey; Type: CONSTRAINT; Schema: instance; Owner: -
--
ALTER TABLE ONLY instance.task
- ADD CONSTRAINT task_pkey PRIMARY KEY (name);
+ ADD CONSTRAINT task_pkey PRIMARY KEY (name);
--
--- TOC entry 3599 (class 2606 OID 18478)
+-- TOC entry 3863 (class 2606 OID 17332)
-- Name: node node_pkey; Type: CONSTRAINT; Schema: instance_cluster; Owner: -
--
ALTER TABLE ONLY instance_cluster.node
- ADD CONSTRAINT node_pkey PRIMARY KEY (id);
+ ADD CONSTRAINT node_pkey PRIMARY KEY (id);
--
--- TOC entry 3604 (class 2606 OID 18515)
+-- TOC entry 3868 (class 2606 OID 17334)
-- Name: node_schedule node_schedule_pkey; Type: CONSTRAINT; Schema: instance_cluster; Owner: -
--
ALTER TABLE ONLY instance_cluster.node_schedule
- ADD CONSTRAINT node_schedule_pkey PRIMARY KEY (node_id, schedule_id);
+ ADD CONSTRAINT node_schedule_pkey PRIMARY KEY (node_id, schedule_id);
+
+
+--
+-- TOC entry 3895 (class 1259 OID 19350)
+-- Name: fki_api_module_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_api_module_fkey ON app.api USING btree (module_id);
+
+
+--
+-- TOC entry 3887 (class 1259 OID 18462)
+-- Name: fki_article_form_article_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_article_form_article_id_fkey ON app.article_form USING btree (article_id);
+
+
+--
+-- TOC entry 3888 (class 1259 OID 18463)
+-- Name: fki_article_form_form_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_article_form_form_id_fkey ON app.article_form USING btree (form_id);
+
+
+--
+-- TOC entry 3889 (class 1259 OID 18477)
+-- Name: fki_article_help_article_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_article_help_article_id_fkey ON app.article_help USING btree (article_id);
+
+
+--
+-- TOC entry 3890 (class 1259 OID 18478)
+-- Name: fki_article_help_module_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_article_help_module_id_fkey ON app.article_help USING btree (module_id);
+
+
+--
+-- TOC entry 3886 (class 1259 OID 18448)
+-- Name: fki_article_module_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_article_module_id_fkey ON app.article USING btree (module_id);
--
--- TOC entry 3310 (class 1259 OID 17096)
+-- TOC entry 3537 (class 1259 OID 17335)
-- Name: fki_attribute_icon_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3278,7 +4989,7 @@ CREATE INDEX fki_attribute_icon_id_fkey ON app.attribute USING btree (icon_id);
--
--- TOC entry 3311 (class 1259 OID 17097)
+-- TOC entry 3538 (class 1259 OID 17336)
-- Name: fki_attribute_relation_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3286,7 +4997,7 @@ CREATE INDEX fki_attribute_relation_fkey ON app.attribute USING btree (relation_
--
--- TOC entry 3312 (class 1259 OID 17098)
+-- TOC entry 3539 (class 1259 OID 17337)
-- Name: fki_attribute_relationship_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3294,7 +5005,15 @@ CREATE INDEX fki_attribute_relationship_fkey ON app.attribute USING btree (relat
--
--- TOC entry 3313 (class 1259 OID 17099)
+-- TOC entry 3540 (class 1259 OID 18484)
+-- Name: fki_caption_article_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_caption_article_id_fkey ON app.caption USING btree (article_id);
+
+
+--
+-- TOC entry 3541 (class 1259 OID 17338)
-- Name: fki_caption_attribute_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3302,7 +5021,15 @@ CREATE INDEX fki_caption_attribute_id_fkey ON app.caption USING btree (attribute
--
--- TOC entry 3314 (class 1259 OID 17100)
+-- TOC entry 3542 (class 1259 OID 19334)
+-- Name: fki_caption_client_event_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_caption_client_event_id_fkey ON app.caption USING btree (client_event_id);
+
+
+--
+-- TOC entry 3543 (class 1259 OID 17339)
-- Name: fki_caption_column_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3310,7 +5037,7 @@ CREATE INDEX fki_caption_column_id_fkey ON app.caption USING btree (column_id);
--
--- TOC entry 3315 (class 1259 OID 17101)
+-- TOC entry 3544 (class 1259 OID 17340)
-- Name: fki_caption_field_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3318,7 +5045,15 @@ CREATE INDEX fki_caption_field_id_fkey ON app.caption USING btree (field_id);
--
--- TOC entry 3316 (class 1259 OID 17102)
+-- TOC entry 3545 (class 1259 OID 19232)
+-- Name: fki_caption_form_action_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_caption_form_action_id_fkey ON app.caption USING btree (form_action_id);
+
+
+--
+-- TOC entry 3546 (class 1259 OID 17341)
-- Name: fki_caption_form_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3326,7 +5061,15 @@ CREATE INDEX fki_caption_form_id_fkey ON app.caption USING btree (form_id);
--
--- TOC entry 3317 (class 1259 OID 17926)
+-- TOC entry 3547 (class 1259 OID 18485)
+-- Name: fki_caption_js_function_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_caption_js_function_id_fkey ON app.caption USING btree (js_function_id);
+
+
+--
+-- TOC entry 3548 (class 1259 OID 17342)
-- Name: fki_caption_login_form_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3334,7 +5077,7 @@ CREATE INDEX fki_caption_login_form_id_fkey ON app.caption USING btree (login_fo
--
--- TOC entry 3318 (class 1259 OID 17103)
+-- TOC entry 3549 (class 1259 OID 17343)
-- Name: fki_caption_menu_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3342,7 +5085,15 @@ CREATE INDEX fki_caption_menu_id_fkey ON app.caption USING btree (menu_id);
--
--- TOC entry 3319 (class 1259 OID 17104)
+-- TOC entry 3550 (class 1259 OID 19586)
+-- Name: fki_caption_menu_tab_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_caption_menu_tab_id_fkey ON app.caption USING btree (menu_tab_id);
+
+
+--
+-- TOC entry 3551 (class 1259 OID 17344)
-- Name: fki_caption_module_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3350,7 +5101,7 @@ CREATE INDEX fki_caption_module_id_fkey ON app.caption USING btree (module_id);
--
--- TOC entry 3320 (class 1259 OID 17864)
+-- TOC entry 3552 (class 1259 OID 17345)
-- Name: fki_caption_pg_function_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3358,7 +5109,7 @@ CREATE INDEX fki_caption_pg_function_id_fkey ON app.caption USING btree (pg_func
--
--- TOC entry 3321 (class 1259 OID 17105)
+-- TOC entry 3553 (class 1259 OID 17346)
-- Name: fki_caption_query_choice_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3366,7 +5117,7 @@ CREATE INDEX fki_caption_query_choice_id_fkey ON app.caption USING btree (query_
--
--- TOC entry 3322 (class 1259 OID 17106)
+-- TOC entry 3554 (class 1259 OID 17347)
-- Name: fki_caption_role_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3374,7 +5125,47 @@ CREATE INDEX fki_caption_role_id_fkey ON app.caption USING btree (role_id);
--
--- TOC entry 3586 (class 1259 OID 18245)
+-- TOC entry 3555 (class 1259 OID 18428)
+-- Name: fki_caption_tab_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_caption_tab_id_fkey ON app.caption USING btree (tab_id);
+
+
+--
+-- TOC entry 3556 (class 1259 OID 18961)
+-- Name: fki_caption_widget_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_caption_widget_id_fkey ON app.caption USING btree (widget_id);
+
+
+--
+-- TOC entry 3954 (class 1259 OID 19309)
+-- Name: fki_client_event_js_function_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_client_event_js_function_fkey ON app.client_event USING btree (js_function_id);
+
+
+--
+-- TOC entry 3955 (class 1259 OID 19308)
+-- Name: fki_client_event_module_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_client_event_module_fkey ON app.client_event USING btree (module_id);
+
+
+--
+-- TOC entry 3956 (class 1259 OID 19310)
+-- Name: fki_client_event_pg_function_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_client_event_pg_function_fkey ON app.client_event USING btree (pg_function_id);
+
+
+--
+-- TOC entry 3564 (class 1259 OID 17348)
-- Name: fki_collection_consumer_collection_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3382,7 +5173,7 @@ CREATE INDEX fki_collection_consumer_collection_id_fkey ON app.collection_consum
--
--- TOC entry 3587 (class 1259 OID 18246)
+-- TOC entry 3565 (class 1259 OID 17349)
-- Name: fki_collection_consumer_column_id_display_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3390,7 +5181,7 @@ CREATE INDEX fki_collection_consumer_column_id_display_fkey ON app.collection_co
--
--- TOC entry 3588 (class 1259 OID 18247)
+-- TOC entry 3566 (class 1259 OID 17350)
-- Name: fki_collection_consumer_field_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3398,7 +5189,7 @@ CREATE INDEX fki_collection_consumer_field_id_fkey ON app.collection_consumer US
--
--- TOC entry 3589 (class 1259 OID 18388)
+-- TOC entry 3567 (class 1259 OID 17351)
-- Name: fki_collection_consumer_menu_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3406,7 +5197,15 @@ CREATE INDEX fki_collection_consumer_menu_id_fkey ON app.collection_consumer USI
--
--- TOC entry 3582 (class 1259 OID 18316)
+-- TOC entry 3568 (class 1259 OID 18968)
+-- Name: fki_collection_consumer_widget_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_collection_consumer_widget_id_fkey ON app.collection_consumer USING btree (widget_id);
+
+
+--
+-- TOC entry 3560 (class 1259 OID 17352)
-- Name: fki_collection_icon_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3414,7 +5213,7 @@ CREATE INDEX fki_collection_icon_id_fkey ON app.collection USING btree (icon_id)
--
--- TOC entry 3583 (class 1259 OID 18192)
+-- TOC entry 3561 (class 1259 OID 17353)
-- Name: fki_collection_module_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3422,7 +5221,15 @@ CREATE INDEX fki_collection_module_id_fkey ON app.collection USING btree (module
--
--- TOC entry 3325 (class 1259 OID 17107)
+-- TOC entry 3571 (class 1259 OID 18679)
+-- Name: fki_column_api_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_column_api_id_fkey ON app."column" USING btree (api_id);
+
+
+--
+-- TOC entry 3572 (class 1259 OID 17354)
-- Name: fki_column_attribute_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3430,7 +5237,7 @@ CREATE INDEX fki_column_attribute_id_fkey ON app."column" USING btree (attribute
--
--- TOC entry 3326 (class 1259 OID 18198)
+-- TOC entry 3573 (class 1259 OID 17355)
-- Name: fki_column_collection_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3438,7 +5245,7 @@ CREATE INDEX fki_column_collection_id_fkey ON app."column" USING btree (collecti
--
--- TOC entry 3327 (class 1259 OID 17108)
+-- TOC entry 3574 (class 1259 OID 17356)
-- Name: fki_column_field_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3446,7 +5253,7 @@ CREATE INDEX fki_column_field_id_fkey ON app."column" USING btree (field_id);
--
--- TOC entry 3337 (class 1259 OID 18151)
+-- TOC entry 3585 (class 1259 OID 17357)
-- Name: fki_field_button_js_function_id; Type: INDEX; Schema: app; Owner: -
--
@@ -3454,7 +5261,7 @@ CREATE INDEX fki_field_button_js_function_id ON app.field_button USING btree (js
--
--- TOC entry 3340 (class 1259 OID 17111)
+-- TOC entry 3588 (class 1259 OID 17358)
-- Name: fki_field_calendar_attribute_id_color_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3462,7 +5269,7 @@ CREATE INDEX fki_field_calendar_attribute_id_color_fkey ON app.field_calendar US
--
--- TOC entry 3341 (class 1259 OID 17112)
+-- TOC entry 3589 (class 1259 OID 17359)
-- Name: fki_field_calendar_attribute_id_date0_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3470,7 +5277,7 @@ CREATE INDEX fki_field_calendar_attribute_id_date0_fkey ON app.field_calendar US
--
--- TOC entry 3342 (class 1259 OID 17113)
+-- TOC entry 3590 (class 1259 OID 17360)
-- Name: fki_field_calendar_attribute_id_date1_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3478,7 +5285,7 @@ CREATE INDEX fki_field_calendar_attribute_id_date1_fkey ON app.field_calendar US
--
--- TOC entry 3348 (class 1259 OID 17115)
+-- TOC entry 3598 (class 1259 OID 17361)
-- Name: fki_field_data_attribute_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3486,7 +5293,7 @@ CREATE INDEX fki_field_data_attribute_fkey ON app.field_data USING btree (attrib
--
--- TOC entry 3349 (class 1259 OID 17116)
+-- TOC entry 3599 (class 1259 OID 17362)
-- Name: fki_field_data_attribute_id_alt_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3494,7 +5301,7 @@ CREATE INDEX fki_field_data_attribute_id_alt_fkey ON app.field_data USING btree
--
--- TOC entry 3350 (class 1259 OID 18157)
+-- TOC entry 3600 (class 1259 OID 17363)
-- Name: fki_field_data_js_function_id; Type: INDEX; Schema: app; Owner: -
--
@@ -3502,7 +5309,7 @@ CREATE INDEX fki_field_data_js_function_id ON app.field_data USING btree (js_fun
--
--- TOC entry 3353 (class 1259 OID 17117)
+-- TOC entry 3603 (class 1259 OID 17364)
-- Name: fki_field_data_relationship_attribute_id_nm_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3510,7 +5317,7 @@ CREATE INDEX fki_field_data_relationship_attribute_id_nm_fkey ON app.field_data_
--
--- TOC entry 3356 (class 1259 OID 17119)
+-- TOC entry 3606 (class 1259 OID 17365)
-- Name: fki_field_data_relationship_preset_field_id; Type: INDEX; Schema: app; Owner: -
--
@@ -3518,7 +5325,7 @@ CREATE INDEX fki_field_data_relationship_preset_field_id ON app.field_data_relat
--
--- TOC entry 3357 (class 1259 OID 17120)
+-- TOC entry 3607 (class 1259 OID 17366)
-- Name: fki_field_data_relationship_preset_preset_id; Type: INDEX; Schema: app; Owner: -
--
@@ -3526,7 +5333,7 @@ CREATE INDEX fki_field_data_relationship_preset_preset_id ON app.field_data_rela
--
--- TOC entry 3331 (class 1259 OID 17121)
+-- TOC entry 3578 (class 1259 OID 17367)
-- Name: fki_field_form_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3534,7 +5341,7 @@ CREATE INDEX fki_field_form_id_fkey ON app.field USING btree (form_id);
--
--- TOC entry 3332 (class 1259 OID 17122)
+-- TOC entry 3579 (class 1259 OID 17368)
-- Name: fki_field_icon_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3542,7 +5349,15 @@ CREATE INDEX fki_field_icon_id_fkey ON app.field USING btree (icon_id);
--
--- TOC entry 3333 (class 1259 OID 17124)
+-- TOC entry 3911 (class 1259 OID 18890)
+-- Name: fki_field_kanban_attribute_id_sort_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_field_kanban_attribute_id_sort_fkey ON app.field_kanban USING btree (attribute_id_sort);
+
+
+--
+-- TOC entry 3580 (class 1259 OID 17369)
-- Name: fki_field_parent_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3550,7 +5365,55 @@ CREATE INDEX fki_field_parent_fkey ON app.field USING btree (parent_id);
--
--- TOC entry 3576 (class 1259 OID 18180)
+-- TOC entry 3978 (class 1259 OID 19508)
+-- Name: fki_field_variable_js_function_id; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_field_variable_js_function_id ON app.field_variable USING btree (js_function_id);
+
+
+--
+-- TOC entry 3979 (class 1259 OID 19507)
+-- Name: fki_field_variable_variable_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_field_variable_variable_fkey ON app.field_variable USING btree (variable_id);
+
+
+--
+-- TOC entry 3947 (class 1259 OID 19223)
+-- Name: fki_form_action_form_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_form_action_form_id_fkey ON app.form_action USING btree (form_id);
+
+
+--
+-- TOC entry 3948 (class 1259 OID 19224)
+-- Name: fki_form_action_icon_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_form_action_icon_id_fkey ON app.form_action USING btree (icon_id);
+
+
+--
+-- TOC entry 3949 (class 1259 OID 19225)
+-- Name: fki_form_action_js_function_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_form_action_js_function_id_fkey ON app.form_action USING btree (js_function_id);
+
+
+--
+-- TOC entry 3612 (class 1259 OID 18902)
+-- Name: fki_form_field_id_focus_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_form_field_id_focus_fkey ON app.form USING btree (field_id_focus);
+
+
+--
+-- TOC entry 3620 (class 1259 OID 17370)
-- Name: fki_form_function_form_id; Type: INDEX; Schema: app; Owner: -
--
@@ -3558,7 +5421,7 @@ CREATE INDEX fki_form_function_form_id ON app.form_function USING btree (form_id
--
--- TOC entry 3577 (class 1259 OID 18181)
+-- TOC entry 3621 (class 1259 OID 17371)
-- Name: fki_form_function_js_function_id; Type: INDEX; Schema: app; Owner: -
--
@@ -3566,7 +5429,7 @@ CREATE INDEX fki_form_function_js_function_id ON app.form_function USING btree (
--
--- TOC entry 3362 (class 1259 OID 17125)
+-- TOC entry 3613 (class 1259 OID 17372)
-- Name: fki_form_icon_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3574,7 +5437,7 @@ CREATE INDEX fki_form_icon_id_fkey ON app.form USING btree (icon_id);
--
--- TOC entry 3363 (class 1259 OID 17126)
+-- TOC entry 3614 (class 1259 OID 17373)
-- Name: fki_form_module_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3582,7 +5445,7 @@ CREATE INDEX fki_form_module_fkey ON app.form USING btree (module_id);
--
--- TOC entry 3364 (class 1259 OID 17127)
+-- TOC entry 3615 (class 1259 OID 17374)
-- Name: fki_form_preset_id_open_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3590,7 +5453,7 @@ CREATE INDEX fki_form_preset_id_open_fkey ON app.form USING btree (preset_id_ope
--
--- TOC entry 3372 (class 1259 OID 17130)
+-- TOC entry 3627 (class 1259 OID 17375)
-- Name: fki_form_state_condition_form_state_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3598,7 +5461,7 @@ CREATE INDEX fki_form_state_condition_form_state_id_fkey ON app.form_state_condi
--
--- TOC entry 3590 (class 1259 OID 18304)
+-- TOC entry 3630 (class 1259 OID 17376)
-- Name: fki_form_state_condition_side_collection_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3606,7 +5469,7 @@ CREATE INDEX fki_form_state_condition_side_collection_id_fkey ON app.form_state_
--
--- TOC entry 3591 (class 1259 OID 18305)
+-- TOC entry 3631 (class 1259 OID 17377)
-- Name: fki_form_state_condition_side_column_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3614,7 +5477,7 @@ CREATE INDEX fki_form_state_condition_side_column_id_fkey ON app.form_state_cond
--
--- TOC entry 3592 (class 1259 OID 18306)
+-- TOC entry 3632 (class 1259 OID 17378)
-- Name: fki_form_state_condition_side_field_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3622,7 +5485,7 @@ CREATE INDEX fki_form_state_condition_side_field_id_fkey ON app.form_state_condi
--
--- TOC entry 3593 (class 1259 OID 18307)
+-- TOC entry 3633 (class 1259 OID 17379)
-- Name: fki_form_state_condition_side_form_state_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3630,7 +5493,15 @@ CREATE INDEX fki_form_state_condition_side_form_state_id_fkey ON app.form_state_
--
--- TOC entry 3594 (class 1259 OID 18308)
+-- TOC entry 3634 (class 1259 OID 19603)
+-- Name: fki_form_state_condition_side_form_state_id_result_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_form_state_condition_side_form_state_id_result_fkey ON app.form_state_condition_side USING btree (form_state_id_result);
+
+
+--
+-- TOC entry 3635 (class 1259 OID 17380)
-- Name: fki_form_state_condition_side_preset_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3638,7 +5509,7 @@ CREATE INDEX fki_form_state_condition_side_preset_id_fkey ON app.form_state_cond
--
--- TOC entry 3595 (class 1259 OID 18309)
+-- TOC entry 3636 (class 1259 OID 17381)
-- Name: fki_form_state_condition_side_role_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3646,7 +5517,15 @@ CREATE INDEX fki_form_state_condition_side_role_id_fkey ON app.form_state_condit
--
--- TOC entry 3375 (class 1259 OID 17133)
+-- TOC entry 3637 (class 1259 OID 19475)
+-- Name: fki_form_state_condition_side_variable_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_form_state_condition_side_variable_id_fkey ON app.form_state_condition_side USING btree (variable_id);
+
+
+--
+-- TOC entry 3640 (class 1259 OID 17382)
-- Name: fki_form_state_effect_field_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3654,7 +5533,15 @@ CREATE INDEX fki_form_state_effect_field_id_fkey ON app.form_state_effect USING
--
--- TOC entry 3376 (class 1259 OID 17134)
+-- TOC entry 3641 (class 1259 OID 19244)
+-- Name: fki_form_state_effect_form_action_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_form_state_effect_form_action_id_fkey ON app.form_state_effect USING btree (form_action_id);
+
+
+--
+-- TOC entry 3642 (class 1259 OID 17383)
-- Name: fki_form_state_effect_form_state_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3662,7 +5549,15 @@ CREATE INDEX fki_form_state_effect_form_state_id_fkey ON app.form_state_effect U
--
--- TOC entry 3369 (class 1259 OID 17135)
+-- TOC entry 3643 (class 1259 OID 18434)
+-- Name: fki_form_state_effect_tab_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_form_state_effect_tab_id_fkey ON app.form_state_effect USING btree (tab_id);
+
+
+--
+-- TOC entry 3624 (class 1259 OID 17384)
-- Name: fki_form_state_form_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3670,7 +5565,7 @@ CREATE INDEX fki_form_state_form_id_fkey ON app.form_state USING btree (form_id)
--
--- TOC entry 3377 (class 1259 OID 17136)
+-- TOC entry 3644 (class 1259 OID 17385)
-- Name: fki_icon_module_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3678,7 +5573,7 @@ CREATE INDEX fki_icon_module_id_fkey ON app.icon USING btree (module_id);
--
--- TOC entry 3569 (class 1259 OID 18372)
+-- TOC entry 3653 (class 1259 OID 17386)
-- Name: fki_js_function_depends_collection_id_on_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3686,7 +5581,7 @@ CREATE INDEX fki_js_function_depends_collection_id_on_fkey ON app.js_function_de
--
--- TOC entry 3570 (class 1259 OID 18131)
+-- TOC entry 3654 (class 1259 OID 17387)
-- Name: fki_js_function_depends_field_id_on; Type: INDEX; Schema: app; Owner: -
--
@@ -3694,7 +5589,7 @@ CREATE INDEX fki_js_function_depends_field_id_on ON app.js_function_depends USIN
--
--- TOC entry 3571 (class 1259 OID 18132)
+-- TOC entry 3655 (class 1259 OID 17388)
-- Name: fki_js_function_depends_form_id_on; Type: INDEX; Schema: app; Owner: -
--
@@ -3702,7 +5597,7 @@ CREATE INDEX fki_js_function_depends_form_id_on ON app.js_function_depends USING
--
--- TOC entry 3572 (class 1259 OID 18134)
+-- TOC entry 3656 (class 1259 OID 17389)
-- Name: fki_js_function_depends_js_function_id; Type: INDEX; Schema: app; Owner: -
--
@@ -3710,7 +5605,7 @@ CREATE INDEX fki_js_function_depends_js_function_id ON app.js_function_depends U
--
--- TOC entry 3573 (class 1259 OID 18135)
+-- TOC entry 3657 (class 1259 OID 17390)
-- Name: fki_js_function_depends_js_function_id_on; Type: INDEX; Schema: app; Owner: -
--
@@ -3718,7 +5613,7 @@ CREATE INDEX fki_js_function_depends_js_function_id_on ON app.js_function_depend
--
--- TOC entry 3574 (class 1259 OID 18136)
+-- TOC entry 3658 (class 1259 OID 17391)
-- Name: fki_js_function_depends_pg_function_id_on; Type: INDEX; Schema: app; Owner: -
--
@@ -3726,7 +5621,7 @@ CREATE INDEX fki_js_function_depends_pg_function_id_on ON app.js_function_depend
--
--- TOC entry 3575 (class 1259 OID 18133)
+-- TOC entry 3659 (class 1259 OID 17392)
-- Name: fki_js_function_depends_role_id_on; Type: INDEX; Schema: app; Owner: -
--
@@ -3734,23 +5629,39 @@ CREATE INDEX fki_js_function_depends_role_id_on ON app.js_function_depends USING
--
--- TOC entry 3563 (class 1259 OID 18096)
--- Name: fki_js_function_form_id; Type: INDEX; Schema: app; Owner: -
+-- TOC entry 3660 (class 1259 OID 19469)
+-- Name: fki_js_function_depends_variable_id_on_fkey; Type: INDEX; Schema: app; Owner: -
--
-CREATE INDEX fki_js_function_form_id ON app.js_function USING btree (form_id);
+CREATE INDEX fki_js_function_depends_variable_id_on_fkey ON app.js_function_depends USING btree (variable_id_on);
--
--- TOC entry 3564 (class 1259 OID 18097)
--- Name: fki_js_function_module_id; Type: INDEX; Schema: app; Owner: -
+-- TOC entry 3647 (class 1259 OID 17393)
+-- Name: fki_js_function_form_id; Type: INDEX; Schema: app; Owner: -
--
-CREATE INDEX fki_js_function_module_id ON app.js_function USING btree (module_id);
+CREATE INDEX fki_js_function_form_id ON app.js_function USING btree (form_id);
--
--- TOC entry 3541 (class 1259 OID 17920)
+-- TOC entry 3673 (class 1259 OID 19671)
+-- Name: fki_js_function_id_on_login_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_js_function_id_on_login_fkey ON app.module USING btree (js_function_id_on_login);
+
+
+--
+-- TOC entry 3648 (class 1259 OID 17394)
+-- Name: fki_js_function_module_id; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_js_function_module_id ON app.js_function USING btree (module_id);
+
+
+--
+-- TOC entry 3661 (class 1259 OID 17395)
-- Name: fki_login_form_module_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3758,7 +5669,7 @@ CREATE INDEX fki_login_form_module_fkey ON app.login_form USING btree (module_id
--
--- TOC entry 3380 (class 1259 OID 17137)
+-- TOC entry 3666 (class 1259 OID 17396)
-- Name: fki_menu_form_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3766,7 +5677,7 @@ CREATE INDEX fki_menu_form_id_fkey ON app.menu USING btree (form_id);
--
--- TOC entry 3381 (class 1259 OID 17138)
+-- TOC entry 3667 (class 1259 OID 17397)
-- Name: fki_menu_icon_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3774,7 +5685,7 @@ CREATE INDEX fki_menu_icon_id_fkey ON app.menu USING btree (icon_id);
--
--- TOC entry 3382 (class 1259 OID 17139)
+-- TOC entry 3668 (class 1259 OID 17398)
-- Name: fki_menu_module_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3782,7 +5693,7 @@ CREATE INDEX fki_menu_module_id_fkey ON app.menu USING btree (module_id);
--
--- TOC entry 3383 (class 1259 OID 17140)
+-- TOC entry 3669 (class 1259 OID 17399)
-- Name: fki_menu_parent_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3790,7 +5701,23 @@ CREATE INDEX fki_menu_parent_id_fkey ON app.menu USING btree (parent_id);
--
--- TOC entry 3394 (class 1259 OID 17141)
+-- TOC entry 3980 (class 1259 OID 19578)
+-- Name: fki_menu_tab_icon_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_menu_tab_icon_id_fkey ON app.menu_tab USING btree (icon_id);
+
+
+--
+-- TOC entry 3981 (class 1259 OID 19579)
+-- Name: fki_menu_tab_module_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_menu_tab_module_id_fkey ON app.menu_tab USING btree (module_id);
+
+
+--
+-- TOC entry 3684 (class 1259 OID 17400)
-- Name: fki_module_depends_module_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3798,7 +5725,7 @@ CREATE INDEX fki_module_depends_module_id_fkey ON app.module_depends USING btree
--
--- TOC entry 3395 (class 1259 OID 17142)
+-- TOC entry 3685 (class 1259 OID 17401)
-- Name: fki_module_depends_module_id_on_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3806,7 +5733,7 @@ CREATE INDEX fki_module_depends_module_id_on_fkey ON app.module_depends USING bt
--
--- TOC entry 3387 (class 1259 OID 17143)
+-- TOC entry 3674 (class 1259 OID 17402)
-- Name: fki_module_form_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3814,7 +5741,7 @@ CREATE INDEX fki_module_form_id_fkey ON app.module USING btree (form_id);
--
--- TOC entry 3388 (class 1259 OID 17144)
+-- TOC entry 3675 (class 1259 OID 17403)
-- Name: fki_module_icon_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3822,7 +5749,23 @@ CREATE INDEX fki_module_icon_id_fkey ON app.module USING btree (icon_id);
--
--- TOC entry 3389 (class 1259 OID 17145)
+-- TOC entry 3676 (class 1259 OID 18809)
+-- Name: fki_module_icon_id_pwa1_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_module_icon_id_pwa1_fkey ON app.module USING btree (icon_id_pwa1);
+
+
+--
+-- TOC entry 3677 (class 1259 OID 18810)
+-- Name: fki_module_icon_id_pwa2_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_module_icon_id_pwa2_fkey ON app.module USING btree (icon_id_pwa2);
+
+
+--
+-- TOC entry 3678 (class 1259 OID 17404)
-- Name: fki_module_parent_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3830,7 +5773,7 @@ CREATE INDEX fki_module_parent_id_fkey ON app.module USING btree (parent_id);
--
--- TOC entry 3554 (class 1259 OID 18036)
+-- TOC entry 3688 (class 1259 OID 17405)
-- Name: fki_module_start_form_form_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3838,7 +5781,7 @@ CREATE INDEX fki_module_start_form_form_id_fkey ON app.module_start_form USING b
--
--- TOC entry 3555 (class 1259 OID 18034)
+-- TOC entry 3689 (class 1259 OID 17406)
-- Name: fki_module_start_form_module_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3846,7 +5789,7 @@ CREATE INDEX fki_module_start_form_module_id_fkey ON app.module_start_form USING
--
--- TOC entry 3556 (class 1259 OID 18035)
+-- TOC entry 3690 (class 1259 OID 17407)
-- Name: fki_module_start_form_role_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3854,7 +5797,7 @@ CREATE INDEX fki_module_start_form_role_id_fkey ON app.module_start_form USING b
--
--- TOC entry 3559 (class 1259 OID 18071)
+-- TOC entry 3693 (class 1259 OID 17408)
-- Name: fki_open_form_attribute_id_apply_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3862,7 +5805,7 @@ CREATE INDEX fki_open_form_attribute_id_apply_fkey ON app.open_form USING btree
--
--- TOC entry 3560 (class 1259 OID 18411)
+-- TOC entry 3694 (class 1259 OID 17409)
-- Name: fki_open_form_collection_consumer_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3870,7 +5813,7 @@ CREATE INDEX fki_open_form_collection_consumer_id_fkey ON app.open_form USING bt
--
--- TOC entry 3561 (class 1259 OID 18070)
+-- TOC entry 3695 (class 1259 OID 17410)
-- Name: fki_open_form_column_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3878,7 +5821,7 @@ CREATE INDEX fki_open_form_column_id_fkey ON app.open_form USING btree (column_i
--
--- TOC entry 3562 (class 1259 OID 18069)
+-- TOC entry 3696 (class 1259 OID 17411)
-- Name: fki_open_form_field_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3886,7 +5829,7 @@ CREATE INDEX fki_open_form_field_id_fkey ON app.open_form USING btree (field_id)
--
--- TOC entry 3403 (class 1259 OID 17146)
+-- TOC entry 3702 (class 1259 OID 17412)
-- Name: fki_pg_function_depends_attribute_id_on_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3894,7 +5837,7 @@ CREATE INDEX fki_pg_function_depends_attribute_id_on_fkey ON app.pg_function_dep
--
--- TOC entry 3404 (class 1259 OID 17147)
+-- TOC entry 3703 (class 1259 OID 17413)
-- Name: fki_pg_function_depends_module_id_on_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3902,7 +5845,7 @@ CREATE INDEX fki_pg_function_depends_module_id_on_fkey ON app.pg_function_depend
--
--- TOC entry 3405 (class 1259 OID 17148)
+-- TOC entry 3704 (class 1259 OID 17414)
-- Name: fki_pg_function_depends_pg_function_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3910,7 +5853,7 @@ CREATE INDEX fki_pg_function_depends_pg_function_id_fkey ON app.pg_function_depe
--
--- TOC entry 3406 (class 1259 OID 17149)
+-- TOC entry 3705 (class 1259 OID 17415)
-- Name: fki_pg_function_depends_pg_function_id_on_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3918,7 +5861,7 @@ CREATE INDEX fki_pg_function_depends_pg_function_id_on_fkey ON app.pg_function_d
--
--- TOC entry 3407 (class 1259 OID 17150)
+-- TOC entry 3706 (class 1259 OID 17416)
-- Name: fki_pg_function_depends_relation_id_on_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3926,7 +5869,15 @@ CREATE INDEX fki_pg_function_depends_relation_id_on_fkey ON app.pg_function_depe
--
--- TOC entry 3398 (class 1259 OID 17151)
+-- TOC entry 3679 (class 1259 OID 19411)
+-- Name: fki_pg_function_id_login_sync_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_pg_function_id_login_sync_fkey ON app.module USING btree (pg_function_id_login_sync);
+
+
+--
+-- TOC entry 3697 (class 1259 OID 17417)
-- Name: fki_pg_function_module_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3934,7 +5885,7 @@ CREATE INDEX fki_pg_function_module_id_fkey ON app.pg_function USING btree (modu
--
--- TOC entry 3408 (class 1259 OID 17152)
+-- TOC entry 3707 (class 1259 OID 17418)
-- Name: fki_pg_function_schedule_pg_function_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3942,7 +5893,7 @@ CREATE INDEX fki_pg_function_schedule_pg_function_id_fkey ON app.pg_function_sch
--
--- TOC entry 3414 (class 1259 OID 17153)
+-- TOC entry 3714 (class 1259 OID 17419)
-- Name: fki_pg_index_attribute_attribute_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3950,7 +5901,15 @@ CREATE INDEX fki_pg_index_attribute_attribute_id_fkey ON app.pg_index_attribute
--
--- TOC entry 3415 (class 1259 OID 17154)
+-- TOC entry 3710 (class 1259 OID 18753)
+-- Name: fki_pg_index_attribute_id_dict_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_pg_index_attribute_id_dict_fkey ON app.pg_index USING btree (attribute_id_dict);
+
+
+--
+-- TOC entry 3715 (class 1259 OID 17420)
-- Name: fki_pg_index_attribute_pg_index_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3958,7 +5917,7 @@ CREATE INDEX fki_pg_index_attribute_pg_index_id_fkey ON app.pg_index_attribute U
--
--- TOC entry 3411 (class 1259 OID 17155)
+-- TOC entry 3711 (class 1259 OID 17421)
-- Name: fki_pg_index_relation_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3966,7 +5925,15 @@ CREATE INDEX fki_pg_index_relation_id_fkey ON app.pg_index USING btree (relation
--
--- TOC entry 3416 (class 1259 OID 17156)
+-- TOC entry 3716 (class 1259 OID 19038)
+-- Name: fki_pg_trigger_module_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_pg_trigger_module_id_fkey ON app.pg_trigger USING btree (module_id);
+
+
+--
+-- TOC entry 3717 (class 1259 OID 17422)
-- Name: fki_pg_trigger_pg_function_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3974,7 +5941,7 @@ CREATE INDEX fki_pg_trigger_pg_function_id_fkey ON app.pg_trigger USING btree (p
--
--- TOC entry 3417 (class 1259 OID 17157)
+-- TOC entry 3718 (class 1259 OID 17423)
-- Name: fki_pg_trigger_relation_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3982,7 +5949,7 @@ CREATE INDEX fki_pg_trigger_relation_id_fkey ON app.pg_trigger USING btree (rela
--
--- TOC entry 3420 (class 1259 OID 17158)
+-- TOC entry 3721 (class 1259 OID 17424)
-- Name: fki_preset_relation_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3990,7 +5957,7 @@ CREATE INDEX fki_preset_relation_id_fkey ON app.preset USING btree (relation_id)
--
--- TOC entry 3425 (class 1259 OID 17159)
+-- TOC entry 3726 (class 1259 OID 17425)
-- Name: fki_preset_value_attribute_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -3998,7 +5965,7 @@ CREATE INDEX fki_preset_value_attribute_id_fkey ON app.preset_value USING btree
--
--- TOC entry 3426 (class 1259 OID 17160)
+-- TOC entry 3727 (class 1259 OID 17426)
-- Name: fki_preset_value_preset_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4006,7 +5973,7 @@ CREATE INDEX fki_preset_value_preset_id_fkey ON app.preset_value USING btree (pr
--
--- TOC entry 3427 (class 1259 OID 17161)
+-- TOC entry 3728 (class 1259 OID 17427)
-- Name: fki_preset_value_preset_id_refer_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4014,7 +5981,15 @@ CREATE INDEX fki_preset_value_preset_id_refer_fkey ON app.preset_value USING btr
--
--- TOC entry 3436 (class 1259 OID 17162)
+-- TOC entry 3731 (class 1259 OID 18672)
+-- Name: fki_query_api_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_query_api_id_fkey ON app.query USING btree (api_id);
+
+
+--
+-- TOC entry 3738 (class 1259 OID 17428)
-- Name: fki_query_choice_query_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4022,7 +5997,7 @@ CREATE INDEX fki_query_choice_query_id_fkey ON app.query_choice USING btree (que
--
--- TOC entry 3430 (class 1259 OID 18205)
+-- TOC entry 3732 (class 1259 OID 17429)
-- Name: fki_query_collection_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4030,7 +6005,7 @@ CREATE INDEX fki_query_collection_id_fkey ON app.query USING btree (collection_i
--
--- TOC entry 3431 (class 1259 OID 17163)
+-- TOC entry 3733 (class 1259 OID 17430)
-- Name: fki_query_field_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4038,7 +6013,7 @@ CREATE INDEX fki_query_field_id_fkey ON app.query USING btree (field_id);
--
--- TOC entry 3441 (class 1259 OID 17164)
+-- TOC entry 3743 (class 1259 OID 17431)
-- Name: fki_query_filter_query_choice_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4046,7 +6021,7 @@ CREATE INDEX fki_query_filter_query_choice_id_fkey ON app.query_filter USING btr
--
--- TOC entry 3442 (class 1259 OID 17165)
+-- TOC entry 3744 (class 1259 OID 17432)
-- Name: fki_query_filter_query_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4054,7 +6029,7 @@ CREATE INDEX fki_query_filter_query_id_fkey ON app.query_filter USING btree (que
--
--- TOC entry 3446 (class 1259 OID 17166)
+-- TOC entry 3748 (class 1259 OID 17433)
-- Name: fki_query_filter_side_attribute_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4062,7 +6037,7 @@ CREATE INDEX fki_query_filter_side_attribute_id_fkey ON app.query_filter_side US
--
--- TOC entry 3447 (class 1259 OID 18219)
+-- TOC entry 3749 (class 1259 OID 17434)
-- Name: fki_query_filter_side_collection_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4070,7 +6045,7 @@ CREATE INDEX fki_query_filter_side_collection_id_fkey ON app.query_filter_side U
--
--- TOC entry 3448 (class 1259 OID 18220)
+-- TOC entry 3750 (class 1259 OID 17435)
-- Name: fki_query_filter_side_column_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4078,7 +6053,15 @@ CREATE INDEX fki_query_filter_side_column_id_fkey ON app.query_filter_side USING
--
--- TOC entry 3449 (class 1259 OID 17167)
+-- TOC entry 3751 (class 1259 OID 18726)
+-- Name: fki_query_filter_side_content_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_query_filter_side_content_fkey ON app.query_filter_side USING btree (content);
+
+
+--
+-- TOC entry 3752 (class 1259 OID 17436)
-- Name: fki_query_filter_side_field_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4086,7 +6069,15 @@ CREATE INDEX fki_query_filter_side_field_id_fkey ON app.query_filter_side USING
--
--- TOC entry 3450 (class 1259 OID 17168)
+-- TOC entry 3753 (class 1259 OID 18725)
+-- Name: fki_query_filter_side_preset_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_query_filter_side_preset_id_fkey ON app.query_filter_side USING btree (preset_id);
+
+
+--
+-- TOC entry 3754 (class 1259 OID 17437)
-- Name: fki_query_filter_side_query_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4094,7 +6085,7 @@ CREATE INDEX fki_query_filter_side_query_id_fkey ON app.query_filter_side USING
--
--- TOC entry 3451 (class 1259 OID 17169)
+-- TOC entry 3755 (class 1259 OID 17438)
-- Name: fki_query_filter_side_role_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4102,7 +6093,15 @@ CREATE INDEX fki_query_filter_side_role_id_fkey ON app.query_filter_side USING b
--
--- TOC entry 3432 (class 1259 OID 17170)
+-- TOC entry 3756 (class 1259 OID 19481)
+-- Name: fki_query_filter_side_variable_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_query_filter_side_variable_id_fkey ON app.query_filter_side USING btree (variable_id);
+
+
+--
+-- TOC entry 3734 (class 1259 OID 17439)
-- Name: fki_query_form_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4110,7 +6109,7 @@ CREATE INDEX fki_query_form_id_fkey ON app.query USING btree (form_id);
--
--- TOC entry 3454 (class 1259 OID 17171)
+-- TOC entry 3759 (class 1259 OID 17440)
-- Name: fki_query_join_attribute_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4118,7 +6117,7 @@ CREATE INDEX fki_query_join_attribute_id_fkey ON app.query_join USING btree (att
--
--- TOC entry 3455 (class 1259 OID 17172)
+-- TOC entry 3760 (class 1259 OID 17441)
-- Name: fki_query_join_query_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4126,7 +6125,7 @@ CREATE INDEX fki_query_join_query_id_fkey ON app.query_join USING btree (query_i
--
--- TOC entry 3456 (class 1259 OID 17173)
+-- TOC entry 3761 (class 1259 OID 17442)
-- Name: fki_query_join_relation_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4134,7 +6133,7 @@ CREATE INDEX fki_query_join_relation_id_fkey ON app.query_join USING btree (rela
--
--- TOC entry 3460 (class 1259 OID 17174)
+-- TOC entry 3765 (class 1259 OID 17443)
-- Name: fki_query_lookup_pg_index_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4142,7 +6141,7 @@ CREATE INDEX fki_query_lookup_pg_index_id_fkey ON app.query_lookup USING btree (
--
--- TOC entry 3461 (class 1259 OID 17175)
+-- TOC entry 3766 (class 1259 OID 17444)
-- Name: fki_query_lookup_query_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4150,7 +6149,7 @@ CREATE INDEX fki_query_lookup_query_id_fkey ON app.query_lookup USING btree (que
--
--- TOC entry 3462 (class 1259 OID 17176)
+-- TOC entry 3767 (class 1259 OID 17445)
-- Name: fki_query_order_attribute_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4158,7 +6157,7 @@ CREATE INDEX fki_query_order_attribute_id_fkey ON app.query_order USING btree (a
--
--- TOC entry 3463 (class 1259 OID 17177)
+-- TOC entry 3768 (class 1259 OID 17446)
-- Name: fki_query_order_query_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4166,7 +6165,7 @@ CREATE INDEX fki_query_order_query_id_fkey ON app.query_order USING btree (query
--
--- TOC entry 3433 (class 1259 OID 17178)
+-- TOC entry 3735 (class 1259 OID 17447)
-- Name: fki_query_relation_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4174,7 +6173,7 @@ CREATE INDEX fki_query_relation_id_fkey ON app.query USING btree (relation_id);
--
--- TOC entry 3466 (class 1259 OID 17179)
+-- TOC entry 3771 (class 1259 OID 17448)
-- Name: fki_relation_module_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4182,7 +6181,7 @@ CREATE INDEX fki_relation_module_fkey ON app.relation USING btree (module_id);
--
--- TOC entry 3548 (class 1259 OID 18005)
+-- TOC entry 3774 (class 1259 OID 17449)
-- Name: fki_relation_policy_pg_function_id_excl_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4190,7 +6189,7 @@ CREATE INDEX fki_relation_policy_pg_function_id_excl_fkey ON app.relation_policy
--
--- TOC entry 3549 (class 1259 OID 18006)
+-- TOC entry 3775 (class 1259 OID 17450)
-- Name: fki_relation_policy_pg_function_id_incl_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4198,7 +6197,7 @@ CREATE INDEX fki_relation_policy_pg_function_id_incl_fkey ON app.relation_policy
--
--- TOC entry 3550 (class 1259 OID 18007)
+-- TOC entry 3776 (class 1259 OID 17451)
-- Name: fki_relation_policy_relation_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4206,7 +6205,7 @@ CREATE INDEX fki_relation_policy_relation_id_fkey ON app.relation_policy USING b
--
--- TOC entry 3551 (class 1259 OID 18008)
+-- TOC entry 3777 (class 1259 OID 17452)
-- Name: fki_relation_policy_role_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4214,7 +6213,15 @@ CREATE INDEX fki_relation_policy_role_id_fkey ON app.relation_policy USING btree
--
--- TOC entry 3473 (class 1259 OID 17180)
+-- TOC entry 3784 (class 1259 OID 18686)
+-- Name: fki_role_access_api_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_role_access_api_id_fkey ON app.role_access USING btree (api_id);
+
+
+--
+-- TOC entry 3785 (class 1259 OID 17453)
-- Name: fki_role_access_attribute_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4222,7 +6229,15 @@ CREATE INDEX fki_role_access_attribute_id_fkey ON app.role_access USING btree (a
--
--- TOC entry 3474 (class 1259 OID 18226)
+-- TOC entry 3786 (class 1259 OID 19346)
+-- Name: fki_role_access_client_event_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_role_access_client_event_id_fkey ON app.role_access USING btree (client_event_id);
+
+
+--
+-- TOC entry 3787 (class 1259 OID 17454)
-- Name: fki_role_access_collection_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4230,7 +6245,7 @@ CREATE INDEX fki_role_access_collection_id_fkey ON app.role_access USING btree (
--
--- TOC entry 3475 (class 1259 OID 17181)
+-- TOC entry 3788 (class 1259 OID 17455)
-- Name: fki_role_access_menu_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4238,7 +6253,7 @@ CREATE INDEX fki_role_access_menu_id_fkey ON app.role_access USING btree (menu_i
--
--- TOC entry 3476 (class 1259 OID 17182)
+-- TOC entry 3789 (class 1259 OID 17456)
-- Name: fki_role_access_relation_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4246,7 +6261,7 @@ CREATE INDEX fki_role_access_relation_id_fkey ON app.role_access USING btree (re
--
--- TOC entry 3477 (class 1259 OID 17183)
+-- TOC entry 3790 (class 1259 OID 17457)
-- Name: fki_role_access_role_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4254,7 +6269,15 @@ CREATE INDEX fki_role_access_role_id_fkey ON app.role_access USING btree (role_i
--
--- TOC entry 3478 (class 1259 OID 17184)
+-- TOC entry 3791 (class 1259 OID 18976)
+-- Name: fki_role_access_widget_id_fkey; Type: INDEX; Schema: app; Owner: -
+--
+
+CREATE INDEX fki_role_access_widget_id_fkey ON app.role_access USING btree (widget_id);
+
+
+--
+-- TOC entry 3792 (class 1259 OID 17458)
-- Name: fki_role_child_role_id_child_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4262,7 +6285,7 @@ CREATE INDEX fki_role_child_role_id_child_fkey ON app.role_child USING btree (ro
--
--- TOC entry 3479 (class 1259 OID 17185)
+-- TOC entry 3793 (class 1259 OID 17459)
-- Name: fki_role_child_role_id_fkey; Type: INDEX; Schema: app; Owner: -
--
@@ -4270,1758 +6293,3110 @@ CREATE INDEX fki_role_child_role_id_fkey ON app.role_child USING btree (role_id)
--
--- TOC entry 3328 (class 1259 OID 17186)
--- Name: ind_column_position; Type: INDEX; Schema: app; Owner: -
+-- TOC entry 3877 (class 1259 OID 18416)
+-- Name: fki_tab_field_id_fkey; Type: INDEX; Schema: app; Owner: -
--
-CREATE INDEX ind_column_position ON app."column" USING btree ("position");
+CREATE INDEX fki_tab_field_id_fkey ON app.tab USING btree (field_id);
--
--- TOC entry 3343 (class 1259 OID 17187)
--- Name: ind_field_calendar_ics; Type: INDEX; Schema: app; Owner: -
+-- TOC entry 3581 (class 1259 OID 18422)
+-- Name: fki_tab_id_fkey; Type: INDEX; Schema: app; Owner: -
--
-CREATE INDEX ind_field_calendar_ics ON app.field_calendar USING btree (ics);
+CREATE INDEX fki_tab_id_fkey ON app.field USING btree (tab_id);
--
--- TOC entry 3334 (class 1259 OID 17188)
--- Name: ind_field_position; Type: INDEX; Schema: app; Owner: -
+-- TOC entry 3970 (class 1259 OID 19461)
+-- Name: fki_variable_form_id_fkey; Type: INDEX; Schema: app; Owner: -
--
-CREATE INDEX ind_field_position ON app.field USING btree ("position");
+CREATE INDEX fki_variable_form_id_fkey ON app.variable USING btree (form_id);
--
--- TOC entry 3384 (class 1259 OID 17190)
--- Name: ind_menu_position; Type: INDEX; Schema: app; Owner: -
+-- TOC entry 3971 (class 1259 OID 19460)
+-- Name: fki_variable_module_id_fkey; Type: INDEX; Schema: app; Owner: -
--
-CREATE INDEX ind_menu_position ON app.menu USING btree ("position");
+CREATE INDEX fki_variable_module_id_fkey ON app.variable USING btree (module_id);
--
--- TOC entry 3443 (class 1259 OID 17191)
--- Name: ind_query_filter_position; Type: INDEX; Schema: app; Owner: -
+-- TOC entry 3915 (class 1259 OID 18955)
+-- Name: fki_widget_form_id_fkey; Type: INDEX; Schema: app; Owner: -
--
-CREATE INDEX ind_query_filter_position ON app.query_filter USING btree ("position");
+CREATE INDEX fki_widget_form_id_fkey ON app.widget USING btree (form_id);
--
--- TOC entry 3457 (class 1259 OID 17192)
--- Name: ind_query_join_position; Type: INDEX; Schema: app; Owner: -
+-- TOC entry 3557 (class 1259 OID 18564)
+-- Name: ind_caption_content; Type: INDEX; Schema: app; Owner: -
--
-CREATE INDEX ind_query_join_position ON app.query_join USING btree ("position");
+CREATE INDEX ind_caption_content ON app.caption USING btree (content);
--
--- TOC entry 3488 (class 1259 OID 17193)
--- Name: fki_data_log_value_attribute_id_fkey; Type: INDEX; Schema: instance; Owner: -
+-- TOC entry 3575 (class 1259 OID 17460)
+-- Name: ind_column_position; Type: INDEX; Schema: app; Owner: -
--
-CREATE INDEX fki_data_log_value_attribute_id_fkey ON instance.data_log_value USING btree (attribute_id);
+CREATE INDEX ind_column_position ON app."column" USING btree ("position");
--
--- TOC entry 3489 (class 1259 OID 17194)
--- Name: fki_data_log_value_attribute_id_nm_fkey; Type: INDEX; Schema: instance; Owner: -
+-- TOC entry 3591 (class 1259 OID 17461)
+-- Name: ind_field_calendar_ics; Type: INDEX; Schema: app; Owner: -
--
-CREATE INDEX fki_data_log_value_attribute_id_nm_fkey ON instance.data_log_value USING btree (attribute_id_nm);
+CREATE INDEX ind_field_calendar_ics ON app.field_calendar USING btree (ics);
--
--- TOC entry 3490 (class 1259 OID 17195)
--- Name: fki_data_log_value_data_log_id_fkey; Type: INDEX; Schema: instance; Owner: -
+-- TOC entry 3582 (class 1259 OID 17462)
+-- Name: ind_field_position; Type: INDEX; Schema: app; Owner: -
--
-CREATE INDEX fki_data_log_value_data_log_id_fkey ON instance.data_log_value USING btree (data_log_id);
+CREATE INDEX ind_field_position ON app.field USING btree ("position");
--
--- TOC entry 3495 (class 1259 OID 17196)
--- Name: fki_ldap_role_ldap_id_fkey; Type: INDEX; Schema: instance; Owner: -
+-- TOC entry 3649 (class 1259 OID 18892)
+-- Name: ind_js_function_name_form_unique; Type: INDEX; Schema: app; Owner: -
--
-CREATE INDEX fki_ldap_role_ldap_id_fkey ON instance.ldap_role USING btree (ldap_id);
+CREATE UNIQUE INDEX ind_js_function_name_form_unique ON app.js_function USING btree (module_id, name, form_id) WHERE (form_id IS NOT NULL);
--
--- TOC entry 3496 (class 1259 OID 17197)
--- Name: fki_ldap_role_role_id_fkey; Type: INDEX; Schema: instance; Owner: -
+-- TOC entry 3650 (class 1259 OID 18891)
+-- Name: ind_js_function_name_global_unique; Type: INDEX; Schema: app; Owner: -
--
-CREATE INDEX fki_ldap_role_role_id_fkey ON instance.ldap_role USING btree (role_id);
+CREATE UNIQUE INDEX ind_js_function_name_global_unique ON app.js_function USING btree (module_id, name) WHERE (form_id IS NULL);
--
--- TOC entry 3497 (class 1259 OID 18498)
--- Name: fki_log_node_fkey; Type: INDEX; Schema: instance; Owner: -
+-- TOC entry 3670 (class 1259 OID 17463)
+-- Name: ind_menu_position; Type: INDEX; Schema: app; Owner: -
--
-CREATE INDEX fki_log_node_fkey ON instance.log USING btree (node_id);
+CREATE INDEX ind_menu_position ON app.menu USING btree ("position");
--
--- TOC entry 3500 (class 1259 OID 17198)
--- Name: fki_login_ldap_id_fkey; Type: INDEX; Schema: instance; Owner: -
+-- TOC entry 3745 (class 1259 OID 17464)
+-- Name: ind_query_filter_position; Type: INDEX; Schema: app; Owner: -
--
-CREATE INDEX fki_login_ldap_id_fkey ON instance.login USING btree (ldap_id);
+CREATE INDEX ind_query_filter_position ON app.query_filter USING btree ("position");
--
--- TOC entry 3505 (class 1259 OID 17199)
--- Name: fki_login_role_login_id_fkey; Type: INDEX; Schema: instance; Owner: -
+-- TOC entry 3762 (class 1259 OID 17465)
+-- Name: ind_query_join_position; Type: INDEX; Schema: app; Owner: -
--
-CREATE INDEX fki_login_role_login_id_fkey ON instance.login_role USING btree (login_id);
+CREATE INDEX ind_query_join_position ON app.query_join USING btree ("position");
--
--- TOC entry 3506 (class 1259 OID 17200)
--- Name: fki_login_role_role_id_fkey; Type: INDEX; Schema: instance; Owner: -
+-- TOC entry 3972 (class 1259 OID 19463)
+-- Name: ind_variable_name_form_unique; Type: INDEX; Schema: app; Owner: -
--
-CREATE INDEX fki_login_role_role_id_fkey ON instance.login_role USING btree (role_id);
+CREATE UNIQUE INDEX ind_variable_name_form_unique ON app.variable USING btree (module_id, name, form_id) WHERE (form_id IS NOT NULL);
--
--- TOC entry 3509 (class 1259 OID 17201)
--- Name: fki_login_setting_language_code_fkey; Type: INDEX; Schema: instance; Owner: -
+-- TOC entry 3973 (class 1259 OID 19462)
+-- Name: ind_variable_name_global_unique; Type: INDEX; Schema: app; Owner: -
--
-CREATE INDEX fki_login_setting_language_code_fkey ON instance.login_setting USING btree (language_code);
+CREATE UNIQUE INDEX ind_variable_name_global_unique ON app.variable USING btree (module_id, name) WHERE (form_id IS NULL);
--
--- TOC entry 3534 (class 1259 OID 17202)
--- Name: fki_repo_module_meta_language_code_fkey; Type: INDEX; Schema: instance; Owner: -
+-- TOC entry 3930 (class 1259 OID 19143)
+-- Name: fki_caption_article_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-CREATE INDEX fki_repo_module_meta_language_code_fkey ON instance.repo_module_meta USING btree (language_code);
+CREATE INDEX fki_caption_article_id_fkey ON instance.caption USING btree (article_id);
--
--- TOC entry 3486 (class 1259 OID 17203)
--- Name: ind_data_log_date_change; Type: INDEX; Schema: instance; Owner: -
+-- TOC entry 3931 (class 1259 OID 19144)
+-- Name: fki_caption_attribute_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-CREATE INDEX ind_data_log_date_change ON instance.data_log USING btree (date_change DESC NULLS LAST);
+CREATE INDEX fki_caption_attribute_id_fkey ON instance.caption USING btree (attribute_id);
--
--- TOC entry 3487 (class 1259 OID 17204)
--- Name: ind_data_log_record_id_wofk; Type: INDEX; Schema: instance; Owner: -
+-- TOC entry 3932 (class 1259 OID 19340)
+-- Name: fki_caption_client_event_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-CREATE INDEX ind_data_log_record_id_wofk ON instance.data_log USING btree (record_id_wofk);
+CREATE INDEX fki_caption_client_event_id_fkey ON instance.caption USING btree (client_event_id);
--
--- TOC entry 3498 (class 1259 OID 17205)
--- Name: ind_log_date_milli_desc; Type: INDEX; Schema: instance; Owner: -
+-- TOC entry 3933 (class 1259 OID 19145)
+-- Name: fki_caption_column_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-CREATE INDEX ind_log_date_milli_desc ON instance.log USING btree (date_milli DESC NULLS LAST);
+CREATE INDEX fki_caption_column_id_fkey ON instance.caption USING btree (column_id);
--
--- TOC entry 3499 (class 1259 OID 17206)
--- Name: ind_log_message; Type: INDEX; Schema: instance; Owner: -
+-- TOC entry 3934 (class 1259 OID 19146)
+-- Name: fki_caption_field_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-CREATE INDEX ind_log_message ON instance.log USING gin (to_tsvector('english'::regconfig, message));
+CREATE INDEX fki_caption_field_id_fkey ON instance.caption USING btree (field_id);
--
--- TOC entry 3514 (class 1259 OID 17882)
--- Name: ind_mail_account_mode; Type: INDEX; Schema: instance; Owner: -
+-- TOC entry 3935 (class 1259 OID 19238)
+-- Name: fki_caption_form_action_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-CREATE INDEX ind_mail_account_mode ON instance.mail_account USING btree (mode DESC NULLS LAST);
+CREATE INDEX fki_caption_form_action_id_fkey ON instance.caption USING btree (form_action_id);
--
--- TOC entry 3515 (class 1259 OID 17208)
--- Name: ind_mail_account_name; Type: INDEX; Schema: instance; Owner: -
+-- TOC entry 3936 (class 1259 OID 19147)
+-- Name: fki_caption_form_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-CREATE UNIQUE INDEX ind_mail_account_name ON instance.mail_account USING btree (name DESC NULLS LAST);
+CREATE INDEX fki_caption_form_id_fkey ON instance.caption USING btree (form_id);
--
--- TOC entry 3518 (class 1259 OID 17209)
--- Name: ind_mail_spool_attempt_count; Type: INDEX; Schema: instance; Owner: -
+-- TOC entry 3937 (class 1259 OID 19148)
+-- Name: fki_caption_js_function_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-CREATE INDEX ind_mail_spool_attempt_count ON instance.mail_spool USING btree (attempt_count);
+CREATE INDEX fki_caption_js_function_id_fkey ON instance.caption USING btree (js_function_id);
--
--- TOC entry 3519 (class 1259 OID 17210)
--- Name: ind_mail_spool_attempt_date; Type: INDEX; Schema: instance; Owner: -
+-- TOC entry 3938 (class 1259 OID 19149)
+-- Name: fki_caption_login_form_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-CREATE INDEX ind_mail_spool_attempt_date ON instance.mail_spool USING btree (attempt_date);
+CREATE INDEX fki_caption_login_form_id_fkey ON instance.caption USING btree (login_form_id);
--
--- TOC entry 3520 (class 1259 OID 17211)
--- Name: ind_mail_spool_date; Type: INDEX; Schema: instance; Owner: -
+-- TOC entry 3939 (class 1259 OID 19150)
+-- Name: fki_caption_menu_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-CREATE INDEX ind_mail_spool_date ON instance.mail_spool USING btree (date DESC NULLS LAST);
+CREATE INDEX fki_caption_menu_id_fkey ON instance.caption USING btree (menu_id);
--
--- TOC entry 3521 (class 1259 OID 17212)
--- Name: ind_mail_spool_outgoing; Type: INDEX; Schema: instance; Owner: -
+-- TOC entry 3940 (class 1259 OID 19592)
+-- Name: fki_caption_menu_tab_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-CREATE INDEX ind_mail_spool_outgoing ON instance.mail_spool USING btree (outgoing DESC NULLS LAST);
+CREATE INDEX fki_caption_menu_tab_id_fkey ON instance.caption USING btree (menu_tab_id);
--
--- TOC entry 3600 (class 1259 OID 18490)
--- Name: fki_node_event_node_fkey; Type: INDEX; Schema: instance_cluster; Owner: -
+-- TOC entry 3941 (class 1259 OID 19151)
+-- Name: fki_caption_module_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-CREATE INDEX fki_node_event_node_fkey ON instance_cluster.node_event USING btree (node_id);
+CREATE INDEX fki_caption_module_id_fkey ON instance.caption USING btree (module_id);
--
--- TOC entry 3601 (class 1259 OID 18526)
--- Name: fki_node_schedule_node_id_fkey; Type: INDEX; Schema: instance_cluster; Owner: -
+-- TOC entry 3942 (class 1259 OID 19152)
+-- Name: fki_caption_pg_function_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-CREATE INDEX fki_node_schedule_node_id_fkey ON instance_cluster.node_schedule USING btree (node_id);
+CREATE INDEX fki_caption_pg_function_id_fkey ON instance.caption USING btree (pg_function_id);
--
--- TOC entry 3602 (class 1259 OID 18527)
--- Name: fki_node_schedule_schedule_id_fkey; Type: INDEX; Schema: instance_cluster; Owner: -
+-- TOC entry 3943 (class 1259 OID 19153)
+-- Name: fki_caption_query_choice_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-CREATE INDEX fki_node_schedule_schedule_id_fkey ON instance_cluster.node_schedule USING btree (schedule_id);
+CREATE INDEX fki_caption_query_choice_id_fkey ON instance.caption USING btree (query_choice_id);
--
--- TOC entry 3605 (class 2606 OID 17213)
--- Name: attribute attribute_icon_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3944 (class 1259 OID 19154)
+-- Name: fki_caption_role_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.attribute
- ADD CONSTRAINT attribute_icon_id_fkey FOREIGN KEY (icon_id) REFERENCES app.icon(id) DEFERRABLE INITIALLY DEFERRED;
+CREATE INDEX fki_caption_role_id_fkey ON instance.caption USING btree (role_id);
--
--- TOC entry 3606 (class 2606 OID 17218)
--- Name: attribute attribute_relation_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3945 (class 1259 OID 19155)
+-- Name: fki_caption_tab_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.attribute
- ADD CONSTRAINT attribute_relation_id_fkey FOREIGN KEY (relation_id) REFERENCES app.relation(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+CREATE INDEX fki_caption_tab_id_fkey ON instance.caption USING btree (tab_id);
--
--- TOC entry 3607 (class 2606 OID 17223)
--- Name: attribute attribute_relationship_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3946 (class 1259 OID 19156)
+-- Name: fki_caption_widget_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.attribute
- ADD CONSTRAINT attribute_relationship_id_fkey FOREIGN KEY (relationship_id) REFERENCES app.relation(id) DEFERRABLE INITIALLY DEFERRED NOT VALID;
+CREATE INDEX fki_caption_widget_id_fkey ON instance.caption USING btree (widget_id);
--
--- TOC entry 3608 (class 2606 OID 17228)
--- Name: caption caption_attribute_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3802 (class 1259 OID 17466)
+-- Name: fki_data_log_value_attribute_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.caption
- ADD CONSTRAINT caption_attribute_id_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+CREATE INDEX fki_data_log_value_attribute_id_fkey ON instance.data_log_value USING btree (attribute_id);
--
--- TOC entry 3609 (class 2606 OID 17233)
--- Name: caption caption_column_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3803 (class 1259 OID 17467)
+-- Name: fki_data_log_value_attribute_id_nm_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.caption
- ADD CONSTRAINT caption_column_id_fkey FOREIGN KEY (column_id) REFERENCES app."column"(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+CREATE INDEX fki_data_log_value_attribute_id_nm_fkey ON instance.data_log_value USING btree (attribute_id_nm);
--
--- TOC entry 3610 (class 2606 OID 17238)
--- Name: caption caption_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3804 (class 1259 OID 17468)
+-- Name: fki_data_log_value_data_log_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.caption
- ADD CONSTRAINT caption_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+CREATE INDEX fki_data_log_value_data_log_id_fkey ON instance.data_log_value USING btree (data_log_id);
--
--- TOC entry 3611 (class 2606 OID 17243)
--- Name: caption caption_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3874 (class 1259 OID 18389)
+-- Name: fki_file_version_file_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.caption
- ADD CONSTRAINT caption_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+CREATE INDEX fki_file_version_file_id_fkey ON instance.file_version USING btree (file_id);
--
--- TOC entry 3618 (class 2606 OID 18141)
--- Name: caption caption_js_function_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3875 (class 1259 OID 18387)
+-- Name: fki_file_version_login_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.caption
- ADD CONSTRAINT caption_js_function_id_fkey FOREIGN KEY (js_function_id) REFERENCES app.js_function(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+CREATE INDEX fki_file_version_login_id_fkey ON instance.file_version USING btree (login_id);
--
--- TOC entry 3617 (class 2606 OID 17921)
--- Name: caption caption_login_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3805 (class 1259 OID 18723)
+-- Name: fki_ldap_login_template_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.caption
- ADD CONSTRAINT caption_login_form_id_fkey FOREIGN KEY (login_form_id) REFERENCES app.login_form(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+CREATE INDEX fki_ldap_login_template_id_fkey ON instance.ldap USING btree (login_template_id);
--
--- TOC entry 3612 (class 2606 OID 17248)
--- Name: caption caption_menu_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3810 (class 1259 OID 17469)
+-- Name: fki_ldap_role_ldap_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.caption
- ADD CONSTRAINT caption_menu_id_fkey FOREIGN KEY (menu_id) REFERENCES app.menu(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+CREATE INDEX fki_ldap_role_ldap_id_fkey ON instance.ldap_role USING btree (ldap_id);
--
--- TOC entry 3613 (class 2606 OID 17253)
--- Name: caption caption_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3811 (class 1259 OID 17470)
+-- Name: fki_ldap_role_role_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.caption
- ADD CONSTRAINT caption_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+CREATE INDEX fki_ldap_role_role_id_fkey ON instance.ldap_role USING btree (role_id);
--
--- TOC entry 3616 (class 2606 OID 17859)
--- Name: caption caption_pg_function_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3812 (class 1259 OID 17471)
+-- Name: fki_log_node_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.caption
- ADD CONSTRAINT caption_pg_function_id_fkey FOREIGN KEY (pg_function_id) REFERENCES app.pg_function(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+CREATE INDEX fki_log_node_fkey ON instance.log USING btree (node_id);
--
--- TOC entry 3614 (class 2606 OID 17258)
--- Name: caption caption_query_choice_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3957 (class 1259 OID 19327)
+-- Name: fki_login_client_event_client_event_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.caption
- ADD CONSTRAINT caption_query_choice_id_fkey FOREIGN KEY (query_choice_id) REFERENCES app.query_choice(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+CREATE INDEX fki_login_client_event_client_event_id_fkey ON instance.login_client_event USING btree (client_event_id);
--
--- TOC entry 3615 (class 2606 OID 17263)
--- Name: caption caption_role_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3958 (class 1259 OID 19326)
+-- Name: fki_login_client_event_login_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.caption
- ADD CONSTRAINT caption_role_id_fkey FOREIGN KEY (role_id) REFERENCES app.role(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+CREATE INDEX fki_login_client_event_login_id_fkey ON instance.login_client_event USING btree (login_id);
--
--- TOC entry 3759 (class 2606 OID 18230)
--- Name: collection_consumer collection_consumer_collection_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3984 (class 1259 OID 19628)
+-- Name: fki_login_favorite_form_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.collection_consumer
- ADD CONSTRAINT collection_consumer_collection_id_fkey FOREIGN KEY (collection_id) REFERENCES app.collection(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+CREATE INDEX fki_login_favorite_form_id_fkey ON instance.login_favorite USING btree (form_id);
--
--- TOC entry 3760 (class 2606 OID 18235)
--- Name: collection_consumer collection_consumer_column_id_display_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3985 (class 1259 OID 19626)
+-- Name: fki_login_favorite_login_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.collection_consumer
- ADD CONSTRAINT collection_consumer_column_id_display_fkey FOREIGN KEY (column_id_display) REFERENCES app."column"(id) DEFERRABLE INITIALLY DEFERRED;
+CREATE INDEX fki_login_favorite_login_id_fkey ON instance.login_favorite USING btree (login_id);
--
--- TOC entry 3761 (class 2606 OID 18318)
--- Name: collection_consumer collection_consumer_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3986 (class 1259 OID 19627)
+-- Name: fki_login_favorite_module_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.collection_consumer
- ADD CONSTRAINT collection_consumer_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+CREATE INDEX fki_login_favorite_module_id_fkey ON instance.login_favorite USING btree (module_id);
--
--- TOC entry 3762 (class 2606 OID 18383)
--- Name: collection_consumer collection_consumer_menu_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3815 (class 1259 OID 17472)
+-- Name: fki_login_ldap_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.collection_consumer
- ADD CONSTRAINT collection_consumer_menu_id_fkey FOREIGN KEY (menu_id) REFERENCES app.menu(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+CREATE INDEX fki_login_ldap_id_fkey ON instance.login USING btree (ldap_id);
--
--- TOC entry 3758 (class 2606 OID 18311)
--- Name: collection collection_icon_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3989 (class 1259 OID 19652)
+-- Name: fki_login_options_field_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.collection
- ADD CONSTRAINT collection_icon_id_fkey FOREIGN KEY (icon_id) REFERENCES app.icon(id) DEFERRABLE INITIALLY DEFERRED;
+CREATE INDEX fki_login_options_field_id_fkey ON instance.login_options USING btree (field_id);
--
--- TOC entry 3757 (class 2606 OID 18187)
--- Name: collection collection_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3990 (class 1259 OID 19651)
+-- Name: fki_login_options_login_favorite_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.collection
- ADD CONSTRAINT collection_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+CREATE INDEX fki_login_options_login_favorite_id_fkey ON instance.login_options USING btree (login_favorite_id);
--
--- TOC entry 3619 (class 2606 OID 17268)
--- Name: column column_attribute_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3991 (class 1259 OID 19650)
+-- Name: fki_login_options_login_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app."column"
- ADD CONSTRAINT column_attribute_id_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED NOT VALID;
+CREATE INDEX fki_login_options_login_id_fkey ON instance.login_options USING btree (login_id);
--
--- TOC entry 3621 (class 2606 OID 18193)
--- Name: column column_collection_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3820 (class 1259 OID 17473)
+-- Name: fki_login_role_login_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app."column"
- ADD CONSTRAINT column_collection_id_fkey FOREIGN KEY (collection_id) REFERENCES app.collection(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+CREATE INDEX fki_login_role_login_id_fkey ON instance.login_role USING btree (login_id);
--
--- TOC entry 3620 (class 2606 OID 17273)
--- Name: column column_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3821 (class 1259 OID 17474)
+-- Name: fki_login_role_role_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app."column"
- ADD CONSTRAINT column_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+CREATE INDEX fki_login_role_role_id_fkey ON instance.login_role USING btree (role_id);
--
--- TOC entry 3625 (class 2606 OID 17283)
--- Name: field_button field_button_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3900 (class 1259 OID 18767)
+-- Name: fki_login_search_dict_login_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.field_button
- ADD CONSTRAINT field_button_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+CREATE INDEX fki_login_search_dict_login_id_fkey ON instance.login_search_dict USING btree (login_id);
--
--- TOC entry 3626 (class 2606 OID 18146)
--- Name: field_button field_button_js_function_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3901 (class 1259 OID 18768)
+-- Name: fki_login_search_dict_login_template_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.field_button
- ADD CONSTRAINT field_button_js_function_id_fkey FOREIGN KEY (js_function_id) REFERENCES app.js_function(id) DEFERRABLE INITIALLY DEFERRED;
+CREATE INDEX fki_login_search_dict_login_template_id_fkey ON instance.login_search_dict USING btree (login_template_id);
--
--- TOC entry 3627 (class 2606 OID 17293)
--- Name: field_calendar field_calendar_attribute_id_color_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3961 (class 1259 OID 19389)
+-- Name: fki_login_session_date; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.field_calendar
- ADD CONSTRAINT field_calendar_attribute_id_color_fkey FOREIGN KEY (attribute_id_color) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED;
+CREATE INDEX fki_login_session_date ON instance.login_session USING btree (date);
--
--- TOC entry 3628 (class 2606 OID 17298)
--- Name: field_calendar field_calendar_attribute_id_date0_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3962 (class 1259 OID 19387)
+-- Name: fki_login_session_login_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.field_calendar
- ADD CONSTRAINT field_calendar_attribute_id_date0_fkey FOREIGN KEY (attribute_id_date0) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED;
+CREATE INDEX fki_login_session_login_id_fkey ON instance.login_session USING btree (login_id);
--
--- TOC entry 3629 (class 2606 OID 17303)
--- Name: field_calendar field_calendar_attribute_id_date1_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3963 (class 1259 OID 19388)
+-- Name: fki_login_session_node_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.field_calendar
- ADD CONSTRAINT field_calendar_attribute_id_date1_fkey FOREIGN KEY (attribute_id_date1) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED;
+CREATE INDEX fki_login_session_node_id_fkey ON instance.login_session USING btree (node_id);
--
--- TOC entry 3733 (class 2606 OID 17939)
--- Name: field_chart field_chart_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3824 (class 1259 OID 17475)
+-- Name: fki_login_setting_language_code_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.field_chart
- ADD CONSTRAINT field_chart_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+CREATE INDEX fki_login_setting_language_code_fkey ON instance.login_setting USING btree (language_code);
--
--- TOC entry 3631 (class 2606 OID 17313)
--- Name: field_container field_container_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3825 (class 1259 OID 18716)
+-- Name: fki_login_setting_login_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.field_container
- ADD CONSTRAINT field_container_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+CREATE INDEX fki_login_setting_login_id_fkey ON instance.login_setting USING btree (login_id);
--
--- TOC entry 3632 (class 2606 OID 17318)
--- Name: field_data field_data_attribute_id_alt_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3826 (class 1259 OID 18717)
+-- Name: fki_login_setting_login_template_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.field_data
- ADD CONSTRAINT field_data_attribute_id_alt_fkey FOREIGN KEY (attribute_id_alt) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED;
+CREATE INDEX fki_login_setting_login_template_id_fkey ON instance.login_setting USING btree (login_template_id);
--
--- TOC entry 3633 (class 2606 OID 17323)
--- Name: field_data field_data_attribute_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3922 (class 1259 OID 19013)
+-- Name: fki_login_widget_group_item_login_widget_group_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.field_data
- ADD CONSTRAINT field_data_attribute_id_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED NOT VALID;
+CREATE INDEX fki_login_widget_group_item_login_widget_group_id_fkey ON instance.login_widget_group_item USING btree (login_widget_group_id);
--
--- TOC entry 3634 (class 2606 OID 17328)
--- Name: field_data field_data_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3923 (class 1259 OID 19015)
+-- Name: fki_login_widget_group_item_module_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.field_data
- ADD CONSTRAINT field_data_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+CREATE INDEX fki_login_widget_group_item_module_id_fkey ON instance.login_widget_group_item USING btree (module_id);
--
--- TOC entry 3635 (class 2606 OID 18152)
--- Name: field_data field_data_js_function_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3924 (class 1259 OID 19014)
+-- Name: fki_login_widget_group_item_widget_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.field_data
- ADD CONSTRAINT field_data_js_function_id_fkey FOREIGN KEY (js_function_id) REFERENCES app.js_function(id) DEFERRABLE INITIALLY DEFERRED;
+CREATE INDEX fki_login_widget_group_item_widget_id_fkey ON instance.login_widget_group_item USING btree (widget_id);
--
--- TOC entry 3636 (class 2606 OID 17333)
--- Name: field_data_relationship field_data_relationship_attribute_id_nm_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3918 (class 1259 OID 18988)
+-- Name: fki_login_widget_group_login_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.field_data_relationship
- ADD CONSTRAINT field_data_relationship_attribute_id_nm_fkey FOREIGN KEY (attribute_id_nm) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED NOT VALID;
+CREATE INDEX fki_login_widget_group_login_id_fkey ON instance.login_widget_group USING btree (login_id);
--
--- TOC entry 3637 (class 2606 OID 17343)
--- Name: field_data_relationship field_data_relationship_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3833 (class 1259 OID 19063)
+-- Name: fki_mail_account_oauth_client_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.field_data_relationship
- ADD CONSTRAINT field_data_relationship_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+CREATE INDEX fki_mail_account_oauth_client_id_fkey ON instance.mail_account USING btree (oauth_client_id);
--
--- TOC entry 3638 (class 2606 OID 17353)
--- Name: field_data_relationship_preset field_data_relationship_preset_field_id; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3838 (class 1259 OID 18930)
+-- Name: fki_mail_spool_mail_account_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.field_data_relationship_preset
- ADD CONSTRAINT field_data_relationship_preset_field_id FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+CREATE INDEX fki_mail_spool_mail_account_id_fkey ON instance.mail_spool USING btree (mail_account_id);
--
--- TOC entry 3639 (class 2606 OID 17358)
--- Name: field_data_relationship_preset field_data_relationship_preset_preset_id; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3912 (class 1259 OID 18927)
+-- Name: fki_mail_traffic_mail_account_id_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.field_data_relationship_preset
- ADD CONSTRAINT field_data_relationship_preset_preset_id FOREIGN KEY (preset_id) REFERENCES app.preset(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+CREATE INDEX fki_mail_traffic_mail_account_id_fkey ON instance.mail_traffic USING btree (mail_account_id);
--
--- TOC entry 3622 (class 2606 OID 17363)
--- Name: field field_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3855 (class 1259 OID 17476)
+-- Name: fki_repo_module_meta_language_code_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.field
- ADD CONSTRAINT field_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+CREATE INDEX fki_repo_module_meta_language_code_fkey ON instance.repo_module_meta USING btree (language_code);
--
--- TOC entry 3640 (class 2606 OID 17368)
--- Name: field_header field_header_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 3903 (class 1259 OID 18796)
+-- Name: fki_rest_spool_pg_function_id_callback_fkey; Type: INDEX; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.field_header
- ADD CONSTRAINT field_header_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+CREATE INDEX fki_rest_spool_pg_function_id_callback_fkey ON instance.rest_spool USING btree (pg_function_id_callback);
--
--- TOC entry 3623 (class 2606 OID 17373)
+-- TOC entry 3800 (class 1259 OID 17477)
+-- Name: ind_data_log_date_change; Type: INDEX; Schema: instance; Owner: -
+--
+
+CREATE INDEX ind_data_log_date_change ON instance.data_log USING btree (date_change DESC NULLS LAST);
+
+
+--
+-- TOC entry 3801 (class 1259 OID 17478)
+-- Name: ind_data_log_record_id_wofk; Type: INDEX; Schema: instance; Owner: -
+--
+
+CREATE INDEX ind_data_log_record_id_wofk ON instance.data_log USING btree (record_id_wofk);
+
+
+--
+-- TOC entry 3871 (class 1259 OID 18562)
+-- Name: ind_file_ref_counter; Type: INDEX; Schema: instance; Owner: -
+--
+
+CREATE INDEX ind_file_ref_counter ON instance.file USING btree (ref_counter);
+
+
+--
+-- TOC entry 3876 (class 1259 OID 18388)
+-- Name: ind_file_version_version; Type: INDEX; Schema: instance; Owner: -
+--
+
+CREATE INDEX ind_file_version_version ON instance.file_version USING btree (version);
+
+
+--
+-- TOC entry 3813 (class 1259 OID 17479)
+-- Name: ind_log_date_milli_desc; Type: INDEX; Schema: instance; Owner: -
+--
+
+CREATE INDEX ind_log_date_milli_desc ON instance.log USING btree (date_milli DESC NULLS LAST);
+
+
+--
+-- TOC entry 3814 (class 1259 OID 17480)
+-- Name: ind_log_message; Type: INDEX; Schema: instance; Owner: -
+--
+
+CREATE INDEX ind_log_message ON instance.log USING gin (to_tsvector('english'::regconfig, message));
+
+
+--
+-- TOC entry 3992 (class 1259 OID 19653)
+-- Name: ind_login_options_unique; Type: INDEX; Schema: instance; Owner: -
+--
+
+CREATE UNIQUE INDEX ind_login_options_unique ON instance.login_options USING btree (login_id, COALESCE(login_favorite_id, '00000000-0000-0000-0000-000000000000'::uuid), field_id, is_mobile);
+
+
+--
+-- TOC entry 3902 (class 1259 OID 18769)
+-- Name: ind_login_search_dict; Type: INDEX; Schema: instance; Owner: -
+--
+
+CREATE UNIQUE INDEX ind_login_search_dict ON instance.login_search_dict USING btree (login_id, login_template_id, name);
+
+
+--
+-- TOC entry 3925 (class 1259 OID 19016)
+-- Name: ind_login_widget_group_item_position; Type: INDEX; Schema: instance; Owner: -
+--
+
+CREATE INDEX ind_login_widget_group_item_position ON instance.login_widget_group_item USING btree ("position");
+
+
+--
+-- TOC entry 3919 (class 1259 OID 18989)
+-- Name: ind_login_widget_group_position; Type: INDEX; Schema: instance; Owner: -
+--
+
+CREATE INDEX ind_login_widget_group_position ON instance.login_widget_group USING btree ("position");
+
+
+--
+-- TOC entry 3834 (class 1259 OID 17481)
+-- Name: ind_mail_account_mode; Type: INDEX; Schema: instance; Owner: -
+--
+
+CREATE INDEX ind_mail_account_mode ON instance.mail_account USING btree (mode DESC NULLS LAST);
+
+
+--
+-- TOC entry 3835 (class 1259 OID 17482)
+-- Name: ind_mail_account_name; Type: INDEX; Schema: instance; Owner: -
+--
+
+CREATE UNIQUE INDEX ind_mail_account_name ON instance.mail_account USING btree (name DESC NULLS LAST);
+
+
+--
+-- TOC entry 3839 (class 1259 OID 17483)
+-- Name: ind_mail_spool_attempt_count; Type: INDEX; Schema: instance; Owner: -
+--
+
+CREATE INDEX ind_mail_spool_attempt_count ON instance.mail_spool USING btree (attempt_count);
+
+
+--
+-- TOC entry 3840 (class 1259 OID 17484)
+-- Name: ind_mail_spool_attempt_date; Type: INDEX; Schema: instance; Owner: -
+--
+
+CREATE INDEX ind_mail_spool_attempt_date ON instance.mail_spool USING btree (attempt_date);
+
+
+--
+-- TOC entry 3841 (class 1259 OID 17485)
+-- Name: ind_mail_spool_date; Type: INDEX; Schema: instance; Owner: -
+--
+
+CREATE INDEX ind_mail_spool_date ON instance.mail_spool USING btree (date DESC NULLS LAST);
+
+
+--
+-- TOC entry 3842 (class 1259 OID 17486)
+-- Name: ind_mail_spool_outgoing; Type: INDEX; Schema: instance; Owner: -
+--
+
+CREATE INDEX ind_mail_spool_outgoing ON instance.mail_spool USING btree (outgoing DESC NULLS LAST);
+
+
+--
+-- TOC entry 3913 (class 1259 OID 18928)
+-- Name: ind_mail_traffic_date; Type: INDEX; Schema: instance; Owner: -
+--
+
+CREATE INDEX ind_mail_traffic_date ON instance.mail_traffic USING btree (date DESC NULLS LAST);
+
+
+--
+-- TOC entry 3914 (class 1259 OID 18929)
+-- Name: ind_mail_traffic_outgoing; Type: INDEX; Schema: instance; Owner: -
+--
+
+CREATE INDEX ind_mail_traffic_outgoing ON instance.mail_traffic USING btree (outgoing);
+
+
+--
+-- TOC entry 3904 (class 1259 OID 18797)
+-- Name: ind_rest_spool_date_added; Type: INDEX; Schema: instance; Owner: -
+--
+
+CREATE INDEX ind_rest_spool_date_added ON instance.rest_spool USING btree (date_added);
+
+
+--
+-- TOC entry 3864 (class 1259 OID 17487)
+-- Name: fki_node_event_node_fkey; Type: INDEX; Schema: instance_cluster; Owner: -
+--
+
+CREATE INDEX fki_node_event_node_fkey ON instance_cluster.node_event USING btree (node_id);
+
+
+--
+-- TOC entry 3865 (class 1259 OID 17488)
+-- Name: fki_node_schedule_node_id_fkey; Type: INDEX; Schema: instance_cluster; Owner: -
+--
+
+CREATE INDEX fki_node_schedule_node_id_fkey ON instance_cluster.node_schedule USING btree (node_id);
+
+
+--
+-- TOC entry 3866 (class 1259 OID 17489)
+-- Name: fki_node_schedule_schedule_id_fkey; Type: INDEX; Schema: instance_cluster; Owner: -
+--
+
+CREATE INDEX fki_node_schedule_schedule_id_fkey ON instance_cluster.node_schedule USING btree (schedule_id);
+
+
+--
+-- TOC entry 4199 (class 2606 OID 18662)
+-- Name: api api_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.api
+ ADD CONSTRAINT api_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4195 (class 2606 OID 18452)
+-- Name: article_form article_form_article_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.article_form
+ ADD CONSTRAINT article_form_article_id_fkey FOREIGN KEY (article_id) REFERENCES app.article(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4196 (class 2606 OID 18457)
+-- Name: article_form article_form_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.article_form
+ ADD CONSTRAINT article_form_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4197 (class 2606 OID 18467)
+-- Name: article_help article_help_article_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.article_help
+ ADD CONSTRAINT article_help_article_id_fkey FOREIGN KEY (article_id) REFERENCES app.article(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4198 (class 2606 OID 18472)
+-- Name: article_help article_help_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.article_help
+ ADD CONSTRAINT article_help_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4194 (class 2606 OID 18443)
+-- Name: article article_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.article
+ ADD CONSTRAINT article_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 3993 (class 2606 OID 17490)
+-- Name: attribute attribute_icon_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.attribute
+ ADD CONSTRAINT attribute_icon_id_fkey FOREIGN KEY (icon_id) REFERENCES app.icon(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 3994 (class 2606 OID 17495)
+-- Name: attribute attribute_relation_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.attribute
+ ADD CONSTRAINT attribute_relation_id_fkey FOREIGN KEY (relation_id) REFERENCES app.relation(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 3995 (class 2606 OID 17500)
+-- Name: attribute attribute_relationship_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.attribute
+ ADD CONSTRAINT attribute_relationship_id_fkey FOREIGN KEY (relationship_id) REFERENCES app.relation(id) DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 3996 (class 2606 OID 18479)
+-- Name: caption caption_article_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.caption
+ ADD CONSTRAINT caption_article_id_fkey FOREIGN KEY (article_id) REFERENCES app.article(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 3997 (class 2606 OID 17505)
+-- Name: caption caption_attribute_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.caption
+ ADD CONSTRAINT caption_attribute_id_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 3998 (class 2606 OID 19329)
+-- Name: caption caption_client_event_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.caption
+ ADD CONSTRAINT caption_client_event_id_fkey FOREIGN KEY (client_event_id) REFERENCES app.client_event(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 3999 (class 2606 OID 17510)
+-- Name: caption caption_column_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.caption
+ ADD CONSTRAINT caption_column_id_fkey FOREIGN KEY (column_id) REFERENCES app."column"(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4000 (class 2606 OID 17515)
+-- Name: caption caption_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.caption
+ ADD CONSTRAINT caption_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4001 (class 2606 OID 19227)
+-- Name: caption caption_form_action_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.caption
+ ADD CONSTRAINT caption_form_action_id_fkey FOREIGN KEY (form_action_id) REFERENCES app.form_action(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4002 (class 2606 OID 17520)
+-- Name: caption caption_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.caption
+ ADD CONSTRAINT caption_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4003 (class 2606 OID 17525)
+-- Name: caption caption_js_function_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.caption
+ ADD CONSTRAINT caption_js_function_id_fkey FOREIGN KEY (js_function_id) REFERENCES app.js_function(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4004 (class 2606 OID 17530)
+-- Name: caption caption_login_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.caption
+ ADD CONSTRAINT caption_login_form_id_fkey FOREIGN KEY (login_form_id) REFERENCES app.login_form(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4005 (class 2606 OID 17535)
+-- Name: caption caption_menu_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.caption
+ ADD CONSTRAINT caption_menu_id_fkey FOREIGN KEY (menu_id) REFERENCES app.menu(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4006 (class 2606 OID 19581)
+-- Name: caption caption_menu_tab_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.caption
+ ADD CONSTRAINT caption_menu_tab_id_fkey FOREIGN KEY (menu_tab_id) REFERENCES app.menu_tab(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4007 (class 2606 OID 17540)
+-- Name: caption caption_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.caption
+ ADD CONSTRAINT caption_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4008 (class 2606 OID 17545)
+-- Name: caption caption_pg_function_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.caption
+ ADD CONSTRAINT caption_pg_function_id_fkey FOREIGN KEY (pg_function_id) REFERENCES app.pg_function(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4009 (class 2606 OID 17550)
+-- Name: caption caption_query_choice_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.caption
+ ADD CONSTRAINT caption_query_choice_id_fkey FOREIGN KEY (query_choice_id) REFERENCES app.query_choice(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4010 (class 2606 OID 17555)
+-- Name: caption caption_role_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.caption
+ ADD CONSTRAINT caption_role_id_fkey FOREIGN KEY (role_id) REFERENCES app.role(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4011 (class 2606 OID 18423)
+-- Name: caption caption_tab_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.caption
+ ADD CONSTRAINT caption_tab_id_fkey FOREIGN KEY (tab_id) REFERENCES app.tab(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4012 (class 2606 OID 18956)
+-- Name: caption caption_widget_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.caption
+ ADD CONSTRAINT caption_widget_id_fkey FOREIGN KEY (widget_id) REFERENCES app.widget(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4233 (class 2606 OID 19293)
+-- Name: client_event client_event_js_function_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.client_event
+ ADD CONSTRAINT client_event_js_function_id_fkey FOREIGN KEY (js_function_id) REFERENCES app.js_function(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4234 (class 2606 OID 19303)
+-- Name: client_event client_event_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.client_event
+ ADD CONSTRAINT client_event_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4235 (class 2606 OID 19298)
+-- Name: client_event client_event_pg_function_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.client_event
+ ADD CONSTRAINT client_event_pg_function_id_fkey FOREIGN KEY (pg_function_id) REFERENCES app.pg_function(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4015 (class 2606 OID 17560)
+-- Name: collection_consumer collection_consumer_collection_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.collection_consumer
+ ADD CONSTRAINT collection_consumer_collection_id_fkey FOREIGN KEY (collection_id) REFERENCES app.collection(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4016 (class 2606 OID 17565)
+-- Name: collection_consumer collection_consumer_column_id_display_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.collection_consumer
+ ADD CONSTRAINT collection_consumer_column_id_display_fkey FOREIGN KEY (column_id_display) REFERENCES app."column"(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4017 (class 2606 OID 17570)
+-- Name: collection_consumer collection_consumer_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.collection_consumer
+ ADD CONSTRAINT collection_consumer_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4018 (class 2606 OID 17575)
+-- Name: collection_consumer collection_consumer_menu_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.collection_consumer
+ ADD CONSTRAINT collection_consumer_menu_id_fkey FOREIGN KEY (menu_id) REFERENCES app.menu(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4019 (class 2606 OID 18963)
+-- Name: collection_consumer collection_consumer_widget_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.collection_consumer
+ ADD CONSTRAINT collection_consumer_widget_id_fkey FOREIGN KEY (widget_id) REFERENCES app.widget(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4013 (class 2606 OID 17580)
+-- Name: collection collection_icon_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.collection
+ ADD CONSTRAINT collection_icon_id_fkey FOREIGN KEY (icon_id) REFERENCES app.icon(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4014 (class 2606 OID 17585)
+-- Name: collection collection_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.collection
+ ADD CONSTRAINT collection_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4020 (class 2606 OID 18674)
+-- Name: column column_api_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app."column"
+ ADD CONSTRAINT column_api_id_fkey FOREIGN KEY (api_id) REFERENCES app.api(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4021 (class 2606 OID 17590)
+-- Name: column column_attribute_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app."column"
+ ADD CONSTRAINT column_attribute_id_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4022 (class 2606 OID 17595)
+-- Name: column column_collection_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app."column"
+ ADD CONSTRAINT column_collection_id_fkey FOREIGN KEY (collection_id) REFERENCES app.collection(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4023 (class 2606 OID 17600)
+-- Name: column column_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app."column"
+ ADD CONSTRAINT column_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4028 (class 2606 OID 17605)
+-- Name: field_button field_button_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_button
+ ADD CONSTRAINT field_button_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4029 (class 2606 OID 17610)
+-- Name: field_button field_button_js_function_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_button
+ ADD CONSTRAINT field_button_js_function_id_fkey FOREIGN KEY (js_function_id) REFERENCES app.js_function(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4030 (class 2606 OID 17615)
+-- Name: field_calendar field_calendar_attribute_id_color_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_calendar
+ ADD CONSTRAINT field_calendar_attribute_id_color_fkey FOREIGN KEY (attribute_id_color) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4031 (class 2606 OID 17620)
+-- Name: field_calendar field_calendar_attribute_id_date0_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_calendar
+ ADD CONSTRAINT field_calendar_attribute_id_date0_fkey FOREIGN KEY (attribute_id_date0) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4032 (class 2606 OID 17625)
+-- Name: field_calendar field_calendar_attribute_id_date1_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_calendar
+ ADD CONSTRAINT field_calendar_attribute_id_date1_fkey FOREIGN KEY (attribute_id_date1) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4034 (class 2606 OID 17630)
+-- Name: field_chart field_chart_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_chart
+ ADD CONSTRAINT field_chart_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4035 (class 2606 OID 17635)
+-- Name: field_container field_container_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_container
+ ADD CONSTRAINT field_container_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4036 (class 2606 OID 17640)
+-- Name: field_data field_data_attribute_id_alt_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_data
+ ADD CONSTRAINT field_data_attribute_id_alt_fkey FOREIGN KEY (attribute_id_alt) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4037 (class 2606 OID 17645)
+-- Name: field_data field_data_attribute_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_data
+ ADD CONSTRAINT field_data_attribute_id_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4038 (class 2606 OID 17650)
+-- Name: field_data field_data_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_data
+ ADD CONSTRAINT field_data_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4039 (class 2606 OID 17655)
+-- Name: field_data field_data_js_function_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_data
+ ADD CONSTRAINT field_data_js_function_id_fkey FOREIGN KEY (js_function_id) REFERENCES app.js_function(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4040 (class 2606 OID 17660)
+-- Name: field_data_relationship field_data_relationship_attribute_id_nm_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_data_relationship
+ ADD CONSTRAINT field_data_relationship_attribute_id_nm_fkey FOREIGN KEY (attribute_id_nm) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4041 (class 2606 OID 17665)
+-- Name: field_data_relationship field_data_relationship_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_data_relationship
+ ADD CONSTRAINT field_data_relationship_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4042 (class 2606 OID 17670)
+-- Name: field_data_relationship_preset field_data_relationship_preset_field_id; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_data_relationship_preset
+ ADD CONSTRAINT field_data_relationship_preset_field_id FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4043 (class 2606 OID 17675)
+-- Name: field_data_relationship_preset field_data_relationship_preset_preset_id; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_data_relationship_preset
+ ADD CONSTRAINT field_data_relationship_preset_preset_id FOREIGN KEY (preset_id) REFERENCES app.preset(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4024 (class 2606 OID 17680)
+-- Name: field field_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field
+ ADD CONSTRAINT field_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4044 (class 2606 OID 17685)
+-- Name: field_header field_header_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_header
+ ADD CONSTRAINT field_header_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4025 (class 2606 OID 17690)
-- Name: field field_icon_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.field
- ADD CONSTRAINT field_icon_id_fkey FOREIGN KEY (icon_id) REFERENCES app.icon(id) DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY app.field
+ ADD CONSTRAINT field_icon_id_fkey FOREIGN KEY (icon_id) REFERENCES app.icon(id) DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4033 (class 2606 OID 17695)
+-- Name: field_calendar field_id; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_calendar
+ ADD CONSTRAINT field_id FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4204 (class 2606 OID 18885)
+-- Name: field_kanban field_kanban_attribute_id_sort_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_kanban
+ ADD CONSTRAINT field_kanban_attribute_id_sort_fkey FOREIGN KEY (attribute_id_sort) REFERENCES app.attribute(id) ON UPDATE SET NULL ON DELETE SET NULL DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4205 (class 2606 OID 18880)
+-- Name: field_kanban field_kanban_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_kanban
+ ADD CONSTRAINT field_kanban_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4045 (class 2606 OID 17700)
+-- Name: field_list field_list_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_list
+ ADD CONSTRAINT field_list_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4026 (class 2606 OID 17705)
+-- Name: field field_parent_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field
+ ADD CONSTRAINT field_parent_id_fkey FOREIGN KEY (parent_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4244 (class 2606 OID 19497)
+-- Name: field_variable field_variable_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_variable
+ ADD CONSTRAINT field_variable_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4245 (class 2606 OID 19502)
+-- Name: field_variable field_variable_js_function_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_variable
+ ADD CONSTRAINT field_variable_js_function_id_fkey FOREIGN KEY (js_function_id) REFERENCES app.js_function(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4246 (class 2606 OID 19492)
+-- Name: field_variable field_variable_variable_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.field_variable
+ ADD CONSTRAINT field_variable_variable_id_fkey FOREIGN KEY (variable_id) REFERENCES app.variable(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4230 (class 2606 OID 19208)
+-- Name: form_action form_action_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form_action
+ ADD CONSTRAINT form_action_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4231 (class 2606 OID 19213)
+-- Name: form_action form_action_icon_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form_action
+ ADD CONSTRAINT form_action_icon_id_fkey FOREIGN KEY (icon_id) REFERENCES app.icon(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4232 (class 2606 OID 19218)
+-- Name: form_action form_action_js_function_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form_action
+ ADD CONSTRAINT form_action_js_function_id_fkey FOREIGN KEY (js_function_id) REFERENCES app.js_function(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4046 (class 2606 OID 18897)
+-- Name: form form_field_id_focus_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form
+ ADD CONSTRAINT form_field_id_focus_fkey FOREIGN KEY (field_id_focus) REFERENCES app.field(id) ON UPDATE SET NULL ON DELETE SET NULL DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4050 (class 2606 OID 17710)
+-- Name: form_function form_function_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form_function
+ ADD CONSTRAINT form_function_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4051 (class 2606 OID 17715)
+-- Name: form_function form_function_js_function_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form_function
+ ADD CONSTRAINT form_function_js_function_id_fkey FOREIGN KEY (js_function_id) REFERENCES app.js_function(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4047 (class 2606 OID 17720)
+-- Name: form form_icon_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form
+ ADD CONSTRAINT form_icon_id_fkey FOREIGN KEY (icon_id) REFERENCES app.icon(id) DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4048 (class 2606 OID 17725)
+-- Name: form form_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form
+ ADD CONSTRAINT form_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4049 (class 2606 OID 17730)
+-- Name: form form_preset_id_open_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form
+ ADD CONSTRAINT form_preset_id_open_fkey FOREIGN KEY (preset_id_open) REFERENCES app.preset(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4053 (class 2606 OID 17735)
+-- Name: form_state_condition form_state_condition_form_state_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form_state_condition
+ ADD CONSTRAINT form_state_condition_form_state_id_fkey FOREIGN KEY (form_state_id) REFERENCES app.form_state(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4054 (class 2606 OID 17740)
+-- Name: form_state_condition_side form_state_condition_side_collection_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form_state_condition_side
+ ADD CONSTRAINT form_state_condition_side_collection_id_fkey FOREIGN KEY (collection_id) REFERENCES app.collection(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4055 (class 2606 OID 17745)
+-- Name: form_state_condition_side form_state_condition_side_column_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form_state_condition_side
+ ADD CONSTRAINT form_state_condition_side_column_id_fkey FOREIGN KEY (column_id) REFERENCES app."column"(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4056 (class 2606 OID 17750)
+-- Name: form_state_condition_side form_state_condition_side_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form_state_condition_side
+ ADD CONSTRAINT form_state_condition_side_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4057 (class 2606 OID 17755)
+-- Name: form_state_condition_side form_state_condition_side_form_state_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form_state_condition_side
+ ADD CONSTRAINT form_state_condition_side_form_state_id_fkey FOREIGN KEY (form_state_id) REFERENCES app.form_state(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4058 (class 2606 OID 17760)
+-- Name: form_state_condition_side form_state_condition_side_form_state_id_form_state_con_pos_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form_state_condition_side
+ ADD CONSTRAINT form_state_condition_side_form_state_id_form_state_con_pos_fkey FOREIGN KEY (form_state_condition_position, form_state_id) REFERENCES app.form_state_condition("position", form_state_id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4059 (class 2606 OID 19598)
+-- Name: form_state_condition_side form_state_condition_side_form_state_id_result_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form_state_condition_side
+ ADD CONSTRAINT form_state_condition_side_form_state_id_result_fkey FOREIGN KEY (form_state_id_result) REFERENCES app.form_state(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4060 (class 2606 OID 17765)
+-- Name: form_state_condition_side form_state_condition_side_preset_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form_state_condition_side
+ ADD CONSTRAINT form_state_condition_side_preset_id_fkey FOREIGN KEY (preset_id) REFERENCES app.preset(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4061 (class 2606 OID 17770)
+-- Name: form_state_condition_side form_state_condition_side_role_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form_state_condition_side
+ ADD CONSTRAINT form_state_condition_side_role_id_fkey FOREIGN KEY (role_id) REFERENCES app.role(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4062 (class 2606 OID 19470)
+-- Name: form_state_condition_side form_state_condition_side_variable_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form_state_condition_side
+ ADD CONSTRAINT form_state_condition_side_variable_id_fkey FOREIGN KEY (variable_id) REFERENCES app.variable(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4063 (class 2606 OID 17775)
+-- Name: form_state_effect form_state_effect_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form_state_effect
+ ADD CONSTRAINT form_state_effect_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4064 (class 2606 OID 19239)
+-- Name: form_state_effect form_state_effect_form_action_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form_state_effect
+ ADD CONSTRAINT form_state_effect_form_action_id_fkey FOREIGN KEY (form_action_id) REFERENCES app.form_action(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4065 (class 2606 OID 17780)
+-- Name: form_state_effect form_state_effect_form_state_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form_state_effect
+ ADD CONSTRAINT form_state_effect_form_state_id_fkey FOREIGN KEY (form_state_id) REFERENCES app.form_state(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4066 (class 2606 OID 18429)
+-- Name: form_state_effect form_state_effect_tab_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form_state_effect
+ ADD CONSTRAINT form_state_effect_tab_id_fkey FOREIGN KEY (tab_id) REFERENCES app.tab(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4052 (class 2606 OID 17785)
+-- Name: form_state form_state_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.form_state
+ ADD CONSTRAINT form_state_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4067 (class 2606 OID 17790)
+-- Name: icon icon_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.icon
+ ADD CONSTRAINT icon_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4070 (class 2606 OID 17795)
+-- Name: js_function_depends js_function_depends_collection_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.js_function_depends
+ ADD CONSTRAINT js_function_depends_collection_id_on_fkey FOREIGN KEY (collection_id_on) REFERENCES app.collection(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4071 (class 2606 OID 17800)
+-- Name: js_function_depends js_function_depends_field_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.js_function_depends
+ ADD CONSTRAINT js_function_depends_field_id_on_fkey FOREIGN KEY (field_id_on) REFERENCES app.field(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4072 (class 2606 OID 17805)
+-- Name: js_function_depends js_function_depends_form_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.js_function_depends
+ ADD CONSTRAINT js_function_depends_form_id_on_fkey FOREIGN KEY (form_id_on) REFERENCES app.form(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4073 (class 2606 OID 17810)
+-- Name: js_function_depends js_function_depends_js_function_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.js_function_depends
+ ADD CONSTRAINT js_function_depends_js_function_id_fkey FOREIGN KEY (js_function_id) REFERENCES app.js_function(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4074 (class 2606 OID 17815)
+-- Name: js_function_depends js_function_depends_js_function_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.js_function_depends
+ ADD CONSTRAINT js_function_depends_js_function_id_on_fkey FOREIGN KEY (js_function_id_on) REFERENCES app.js_function(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4075 (class 2606 OID 17820)
+-- Name: js_function_depends js_function_depends_pg_function_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.js_function_depends
+ ADD CONSTRAINT js_function_depends_pg_function_id_on_fkey FOREIGN KEY (pg_function_id_on) REFERENCES app.pg_function(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4076 (class 2606 OID 17825)
+-- Name: js_function_depends js_function_depends_role_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.js_function_depends
+ ADD CONSTRAINT js_function_depends_role_id_on_fkey FOREIGN KEY (role_id_on) REFERENCES app.role(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4077 (class 2606 OID 19464)
+-- Name: js_function_depends js_function_depends_variable_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.js_function_depends
+ ADD CONSTRAINT js_function_depends_variable_id_on_fkey FOREIGN KEY (variable_id_on) REFERENCES app.variable(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4068 (class 2606 OID 19025)
+-- Name: js_function js_function_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.js_function
+ ADD CONSTRAINT js_function_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4087 (class 2606 OID 19666)
+-- Name: module js_function_id_on_login_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.module
+ ADD CONSTRAINT js_function_id_on_login_fkey FOREIGN KEY (js_function_id_on_login) REFERENCES app.js_function(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4069 (class 2606 OID 17835)
+-- Name: js_function js_function_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.js_function
+ ADD CONSTRAINT js_function_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4078 (class 2606 OID 17840)
+-- Name: login_form login_form_attribute_id_login_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.login_form
+ ADD CONSTRAINT login_form_attribute_id_login_fkey FOREIGN KEY (attribute_id_login) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4079 (class 2606 OID 17845)
+-- Name: login_form login_form_attribute_id_lookup_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.login_form
+ ADD CONSTRAINT login_form_attribute_id_lookup_fkey FOREIGN KEY (attribute_id_lookup) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4080 (class 2606 OID 17850)
+-- Name: login_form login_form_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.login_form
+ ADD CONSTRAINT login_form_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4081 (class 2606 OID 17855)
+-- Name: login_form login_form_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.login_form
+ ADD CONSTRAINT login_form_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4082 (class 2606 OID 17860)
+-- Name: menu menu_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.menu
+ ADD CONSTRAINT menu_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) ON UPDATE SET NULL ON DELETE SET NULL DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4083 (class 2606 OID 17865)
+-- Name: menu menu_icon_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.menu
+ ADD CONSTRAINT menu_icon_id_fkey FOREIGN KEY (icon_id) REFERENCES app.icon(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4084 (class 2606 OID 19593)
+-- Name: menu menu_menu_tab_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.menu
+ ADD CONSTRAINT menu_menu_tab_id_fkey FOREIGN KEY (menu_tab_id) REFERENCES app.menu_tab(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4085 (class 2606 OID 17870)
+-- Name: menu menu_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.menu
+ ADD CONSTRAINT menu_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4086 (class 2606 OID 17875)
+-- Name: menu menu_parent_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.menu
+ ADD CONSTRAINT menu_parent_id_fkey FOREIGN KEY (parent_id) REFERENCES app.menu(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4247 (class 2606 OID 19573)
+-- Name: menu_tab menu_tab_icon_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.menu_tab
+ ADD CONSTRAINT menu_tab_icon_id_fkey FOREIGN KEY (icon_id) REFERENCES app.icon(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4248 (class 2606 OID 19568)
+-- Name: menu_tab menu_tab_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.menu_tab
+ ADD CONSTRAINT menu_tab_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4094 (class 2606 OID 17880)
+-- Name: module_depends module_depends_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.module_depends
+ ADD CONSTRAINT module_depends_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4095 (class 2606 OID 17885)
+-- Name: module_depends module_depends_module_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.module_depends
+ ADD CONSTRAINT module_depends_module_id_on_fkey FOREIGN KEY (module_id_on) REFERENCES app.module(id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3630 (class 2606 OID 17378)
--- Name: field_calendar field_id; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4088 (class 2606 OID 17890)
+-- Name: module module_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.field_calendar
- ADD CONSTRAINT field_id FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.module
+ ADD CONSTRAINT module_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) ON UPDATE SET NULL ON DELETE SET NULL DEFERRABLE INITIALLY DEFERRED NOT VALID;
--
--- TOC entry 3641 (class 2606 OID 17388)
--- Name: field_list field_list_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4089 (class 2606 OID 17895)
+-- Name: module module_icon_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.field_list
- ADD CONSTRAINT field_list_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY app.module
+ ADD CONSTRAINT module_icon_id_fkey FOREIGN KEY (icon_id) REFERENCES app.icon(id) DEFERRABLE INITIALLY DEFERRED NOT VALID;
--
--- TOC entry 3624 (class 2606 OID 17398)
--- Name: field field_parent_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4090 (class 2606 OID 18799)
+-- Name: module module_icon_id_pwa1_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.field
- ADD CONSTRAINT field_parent_id_fkey FOREIGN KEY (parent_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY app.module
+ ADD CONSTRAINT module_icon_id_pwa1_fkey FOREIGN KEY (icon_id_pwa1) REFERENCES app.icon(id) ON UPDATE SET NULL ON DELETE SET NULL DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3755 (class 2606 OID 18170)
--- Name: form_function form_function_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4091 (class 2606 OID 18804)
+-- Name: module module_icon_id_pwa2_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.form_function
- ADD CONSTRAINT form_function_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.module
+ ADD CONSTRAINT module_icon_id_pwa2_fkey FOREIGN KEY (icon_id_pwa2) REFERENCES app.icon(id) ON UPDATE SET NULL ON DELETE SET NULL DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3756 (class 2606 OID 18175)
--- Name: form_function form_function_js_function_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4096 (class 2606 OID 17900)
+-- Name: module_language module_language_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.form_function
- ADD CONSTRAINT form_function_js_function_id_fkey FOREIGN KEY (js_function_id) REFERENCES app.js_function(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.module_language
+ ADD CONSTRAINT module_language_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3642 (class 2606 OID 17403)
--- Name: form form_icon_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4092 (class 2606 OID 17905)
+-- Name: module module_parent_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.form
- ADD CONSTRAINT form_icon_id_fkey FOREIGN KEY (icon_id) REFERENCES app.icon(id) DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY app.module
+ ADD CONSTRAINT module_parent_id_fkey FOREIGN KEY (parent_id) REFERENCES app.module(id) DEFERRABLE INITIALLY DEFERRED NOT VALID;
--
--- TOC entry 3643 (class 2606 OID 17408)
--- Name: form form_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4097 (class 2606 OID 17910)
+-- Name: module_start_form module_start_form_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.form
- ADD CONSTRAINT form_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY app.module_start_form
+ ADD CONSTRAINT module_start_form_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3644 (class 2606 OID 17413)
--- Name: form form_preset_id_open_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4098 (class 2606 OID 17915)
+-- Name: module_start_form module_start_form_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.form
- ADD CONSTRAINT form_preset_id_open_fkey FOREIGN KEY (preset_id_open) REFERENCES app.preset(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.module_start_form
+ ADD CONSTRAINT module_start_form_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3646 (class 2606 OID 17428)
--- Name: form_state_condition form_state_condition_form_state_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4099 (class 2606 OID 17920)
+-- Name: module_start_form module_start_form_role_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.form_state_condition
- ADD CONSTRAINT form_state_condition_form_state_id_fkey FOREIGN KEY (form_state_id) REFERENCES app.form_state(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.module_start_form
+ ADD CONSTRAINT module_start_form_role_id_fkey FOREIGN KEY (role_id) REFERENCES app.role(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3763 (class 2606 OID 18269)
--- Name: form_state_condition_side form_state_condition_side_collection_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4100 (class 2606 OID 17925)
+-- Name: open_form open_form_attribute_id_apply_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.form_state_condition_side
- ADD CONSTRAINT form_state_condition_side_collection_id_fkey FOREIGN KEY (collection_id) REFERENCES app.collection(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.open_form
+ ADD CONSTRAINT open_form_attribute_id_apply_fkey FOREIGN KEY (attribute_id_apply) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3764 (class 2606 OID 18274)
--- Name: form_state_condition_side form_state_condition_side_column_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4101 (class 2606 OID 17930)
+-- Name: open_form open_form_collection_consumer_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.form_state_condition_side
- ADD CONSTRAINT form_state_condition_side_column_id_fkey FOREIGN KEY (column_id) REFERENCES app."column"(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.open_form
+ ADD CONSTRAINT open_form_collection_consumer_id_fkey FOREIGN KEY (collection_consumer_id) REFERENCES app.collection_consumer(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3765 (class 2606 OID 18279)
--- Name: form_state_condition_side form_state_condition_side_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4102 (class 2606 OID 17935)
+-- Name: open_form open_form_column_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.form_state_condition_side
- ADD CONSTRAINT form_state_condition_side_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.open_form
+ ADD CONSTRAINT open_form_column_id_fkey FOREIGN KEY (column_id) REFERENCES app."column"(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3767 (class 2606 OID 18289)
--- Name: form_state_condition_side form_state_condition_side_form_state_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4103 (class 2606 OID 17940)
+-- Name: open_form open_form_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.form_state_condition_side
- ADD CONSTRAINT form_state_condition_side_form_state_id_fkey FOREIGN KEY (form_state_id) REFERENCES app.form_state(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.open_form
+ ADD CONSTRAINT open_form_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3768 (class 2606 OID 18294)
--- Name: form_state_condition_side form_state_condition_side_form_state_id_form_state_con_pos_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4104 (class 2606 OID 17945)
+-- Name: open_form open_form_form_id_open_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.form_state_condition_side
- ADD CONSTRAINT form_state_condition_side_form_state_id_form_state_con_pos_fkey FOREIGN KEY (form_state_condition_position, form_state_id) REFERENCES app.form_state_condition("position", form_state_id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.open_form
+ ADD CONSTRAINT open_form_form_id_open_fkey FOREIGN KEY (form_id_open) REFERENCES app.form(id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3766 (class 2606 OID 18284)
--- Name: form_state_condition_side form_state_condition_side_preset_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4106 (class 2606 OID 17950)
+-- Name: pg_function_depends pg_function_depends_attribute_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.form_state_condition_side
- ADD CONSTRAINT form_state_condition_side_preset_id_fkey FOREIGN KEY (preset_id) REFERENCES app.preset(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.pg_function_depends
+ ADD CONSTRAINT pg_function_depends_attribute_id_on_fkey FOREIGN KEY (attribute_id_on) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3769 (class 2606 OID 18299)
--- Name: form_state_condition_side form_state_condition_side_role_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4107 (class 2606 OID 17955)
+-- Name: pg_function_depends pg_function_depends_module_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.form_state_condition_side
- ADD CONSTRAINT form_state_condition_side_role_id_fkey FOREIGN KEY (role_id) REFERENCES app.role(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.pg_function_depends
+ ADD CONSTRAINT pg_function_depends_module_id_on_fkey FOREIGN KEY (module_id_on) REFERENCES app.module(id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3647 (class 2606 OID 17443)
--- Name: form_state_effect form_state_effect_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4108 (class 2606 OID 17960)
+-- Name: pg_function_depends pg_function_depends_pg_function_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.form_state_effect
- ADD CONSTRAINT form_state_effect_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.pg_function_depends
+ ADD CONSTRAINT pg_function_depends_pg_function_id_fkey FOREIGN KEY (pg_function_id) REFERENCES app.pg_function(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3648 (class 2606 OID 17448)
--- Name: form_state_effect form_state_effect_form_state_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4109 (class 2606 OID 17965)
+-- Name: pg_function_depends pg_function_depends_pg_function_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.form_state_effect
- ADD CONSTRAINT form_state_effect_form_state_id_fkey FOREIGN KEY (form_state_id) REFERENCES app.form_state(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.pg_function_depends
+ ADD CONSTRAINT pg_function_depends_pg_function_id_on_fkey FOREIGN KEY (pg_function_id_on) REFERENCES app.pg_function(id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3645 (class 2606 OID 17453)
--- Name: form_state form_state_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4110 (class 2606 OID 17970)
+-- Name: pg_function_depends pg_function_depends_relation_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.form_state
- ADD CONSTRAINT form_state_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.pg_function_depends
+ ADD CONSTRAINT pg_function_depends_relation_id_on_fkey FOREIGN KEY (relation_id_on) REFERENCES app.relation(id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3649 (class 2606 OID 17458)
--- Name: icon icon_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4093 (class 2606 OID 19406)
+-- Name: module pg_function_id_login_sync_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.icon
- ADD CONSTRAINT icon_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.module
+ ADD CONSTRAINT pg_function_id_login_sync_fkey FOREIGN KEY (pg_function_id_login_sync) REFERENCES app.pg_function(id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3754 (class 2606 OID 18367)
--- Name: js_function_depends js_function_depends_collection_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4105 (class 2606 OID 17975)
+-- Name: pg_function pg_function_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.js_function_depends
- ADD CONSTRAINT js_function_depends_collection_id_on_fkey FOREIGN KEY (collection_id_on) REFERENCES app.collection(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.pg_function
+ ADD CONSTRAINT pg_function_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3748 (class 2606 OID 18101)
--- Name: js_function_depends js_function_depends_field_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4111 (class 2606 OID 17980)
+-- Name: pg_function_schedule pg_function_schedule_pg_function_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.js_function_depends
- ADD CONSTRAINT js_function_depends_field_id_on_fkey FOREIGN KEY (field_id_on) REFERENCES app.field(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.pg_function_schedule
+ ADD CONSTRAINT pg_function_schedule_pg_function_id_fkey FOREIGN KEY (pg_function_id) REFERENCES app.pg_function(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3749 (class 2606 OID 18106)
--- Name: js_function_depends js_function_depends_form_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4114 (class 2606 OID 17985)
+-- Name: pg_index_attribute pg_index_attribute_attribute_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.js_function_depends
- ADD CONSTRAINT js_function_depends_form_id_on_fkey FOREIGN KEY (form_id_on) REFERENCES app.form(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.pg_index_attribute
+ ADD CONSTRAINT pg_index_attribute_attribute_id_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3751 (class 2606 OID 18116)
--- Name: js_function_depends js_function_depends_js_function_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4112 (class 2606 OID 18748)
+-- Name: pg_index pg_index_attribute_id_dict_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.js_function_depends
- ADD CONSTRAINT js_function_depends_js_function_id_fkey FOREIGN KEY (js_function_id) REFERENCES app.js_function(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.pg_index
+ ADD CONSTRAINT pg_index_attribute_id_dict_fkey FOREIGN KEY (attribute_id_dict) REFERENCES app.attribute(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3752 (class 2606 OID 18121)
--- Name: js_function_depends js_function_depends_js_function_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4115 (class 2606 OID 17990)
+-- Name: pg_index_attribute pg_index_attribute_pg_index_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.js_function_depends
- ADD CONSTRAINT js_function_depends_js_function_id_on_fkey FOREIGN KEY (js_function_id_on) REFERENCES app.js_function(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.pg_index_attribute
+ ADD CONSTRAINT pg_index_attribute_pg_index_id_fkey FOREIGN KEY (pg_index_id) REFERENCES app.pg_index(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3753 (class 2606 OID 18126)
--- Name: js_function_depends js_function_depends_pg_function_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4113 (class 2606 OID 17995)
+-- Name: pg_index pg_index_relation_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.js_function_depends
- ADD CONSTRAINT js_function_depends_pg_function_id_on_fkey FOREIGN KEY (pg_function_id_on) REFERENCES app.pg_function(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.pg_index
+ ADD CONSTRAINT pg_index_relation_id_fkey FOREIGN KEY (relation_id) REFERENCES app.relation(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3750 (class 2606 OID 18111)
--- Name: js_function_depends js_function_depends_role_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4116 (class 2606 OID 19039)
+-- Name: pg_trigger pg_trigger_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.js_function_depends
- ADD CONSTRAINT js_function_depends_role_id_on_fkey FOREIGN KEY (role_id_on) REFERENCES app.role(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.pg_trigger
+ ADD CONSTRAINT pg_trigger_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3746 (class 2606 OID 18086)
--- Name: js_function js_function_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4117 (class 2606 OID 18000)
+-- Name: pg_trigger pg_trigger_pg_function_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.js_function
- ADD CONSTRAINT js_function_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.pg_trigger
+ ADD CONSTRAINT pg_trigger_pg_function_id_fkey FOREIGN KEY (pg_function_id) REFERENCES app.pg_function(id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3747 (class 2606 OID 18091)
--- Name: js_function js_function_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4118 (class 2606 OID 18005)
+-- Name: pg_trigger pg_trigger_relation_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.js_function
- ADD CONSTRAINT js_function_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.pg_trigger
+ ADD CONSTRAINT pg_trigger_relation_id_fkey FOREIGN KEY (relation_id) REFERENCES app.relation(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3729 (class 2606 OID 17900)
--- Name: login_form login_form_attribute_id_login_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4150 (class 2606 OID 18010)
+-- Name: relation_policy policy_pg_function_id_excl_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.login_form
- ADD CONSTRAINT login_form_attribute_id_login_fkey FOREIGN KEY (attribute_id_login) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.relation_policy
+ ADD CONSTRAINT policy_pg_function_id_excl_fkey FOREIGN KEY (pg_function_id_excl) REFERENCES app.pg_function(id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3730 (class 2606 OID 17905)
--- Name: login_form login_form_attribute_id_lookup_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4151 (class 2606 OID 18015)
+-- Name: relation_policy policy_pg_function_id_incl_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.login_form
- ADD CONSTRAINT login_form_attribute_id_lookup_fkey FOREIGN KEY (attribute_id_lookup) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.relation_policy
+ ADD CONSTRAINT policy_pg_function_id_incl_fkey FOREIGN KEY (pg_function_id_incl) REFERENCES app.pg_function(id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3731 (class 2606 OID 17910)
--- Name: login_form login_form_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4152 (class 2606 OID 18020)
+-- Name: relation_policy policy_relation_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.login_form
- ADD CONSTRAINT login_form_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.relation_policy
+ ADD CONSTRAINT policy_relation_id_fkey FOREIGN KEY (relation_id) REFERENCES app.relation(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3732 (class 2606 OID 17915)
--- Name: login_form login_form_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4153 (class 2606 OID 18025)
+-- Name: relation_policy policy_role_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.login_form
- ADD CONSTRAINT login_form_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.relation_policy
+ ADD CONSTRAINT policy_role_id_fkey FOREIGN KEY (role_id) REFERENCES app.role(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3650 (class 2606 OID 17463)
--- Name: menu menu_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4119 (class 2606 OID 18030)
+-- Name: preset preset_relation_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.menu
- ADD CONSTRAINT menu_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) ON UPDATE SET NULL ON DELETE SET NULL DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.preset
+ ADD CONSTRAINT preset_relation_id_fkey FOREIGN KEY (relation_id) REFERENCES app.relation(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
--
--- TOC entry 3651 (class 2606 OID 17468)
--- Name: menu menu_icon_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4120 (class 2606 OID 18035)
+-- Name: preset_value preset_value_attribute_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.menu
- ADD CONSTRAINT menu_icon_id_fkey FOREIGN KEY (icon_id) REFERENCES app.icon(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.preset_value
+ ADD CONSTRAINT preset_value_attribute_id_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4121 (class 2606 OID 18040)
+-- Name: preset_value preset_value_preset_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.preset_value
+ ADD CONSTRAINT preset_value_preset_id_fkey FOREIGN KEY (preset_id) REFERENCES app.preset(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4122 (class 2606 OID 18045)
+-- Name: preset_value preset_value_preset_id_refer_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.preset_value
+ ADD CONSTRAINT preset_value_preset_id_refer_fkey FOREIGN KEY (preset_id_refer) REFERENCES app.preset(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4123 (class 2606 OID 18667)
+-- Name: query query_api_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.query
+ ADD CONSTRAINT query_api_id_fkey FOREIGN KEY (api_id) REFERENCES app.api(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4130 (class 2606 OID 18050)
+-- Name: query_choice query_choice_query_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.query_choice
+ ADD CONSTRAINT query_choice_query_id_fkey FOREIGN KEY (query_id) REFERENCES app.query(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4124 (class 2606 OID 18055)
+-- Name: query query_collection_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.query
+ ADD CONSTRAINT query_collection_id_fkey FOREIGN KEY (collection_id) REFERENCES app.collection(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4125 (class 2606 OID 18060)
+-- Name: query query_column_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.query
+ ADD CONSTRAINT query_column_id_fkey FOREIGN KEY (column_id) REFERENCES app."column"(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4126 (class 2606 OID 18065)
+-- Name: query query_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.query
+ ADD CONSTRAINT query_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4131 (class 2606 OID 18070)
+-- Name: query_filter query_filter_query_choice_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.query_filter
+ ADD CONSTRAINT query_filter_query_choice_id_fkey FOREIGN KEY (query_choice_id) REFERENCES app.query_choice(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4132 (class 2606 OID 18075)
+-- Name: query_filter query_filter_query_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.query_filter
+ ADD CONSTRAINT query_filter_query_id_fkey FOREIGN KEY (query_id) REFERENCES app.query(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+
+
+--
+-- TOC entry 4133 (class 2606 OID 18080)
+-- Name: query_filter_side query_filter_side_attribute_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.query_filter_side
+ ADD CONSTRAINT query_filter_side_attribute_id_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4134 (class 2606 OID 18085)
+-- Name: query_filter_side query_filter_side_collection_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.query_filter_side
+ ADD CONSTRAINT query_filter_side_collection_id_fkey FOREIGN KEY (collection_id) REFERENCES app.collection(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4135 (class 2606 OID 18090)
+-- Name: query_filter_side query_filter_side_column_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.query_filter_side
+ ADD CONSTRAINT query_filter_side_column_id_fkey FOREIGN KEY (column_id) REFERENCES app."column"(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4136 (class 2606 OID 18095)
+-- Name: query_filter_side query_filter_side_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.query_filter_side
+ ADD CONSTRAINT query_filter_side_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4137 (class 2606 OID 18100)
+-- Name: query_filter_side query_filter_side_preset_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.query_filter_side
+ ADD CONSTRAINT query_filter_side_preset_id_fkey FOREIGN KEY (preset_id) REFERENCES app.preset(id) DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4138 (class 2606 OID 19526)
+-- Name: query_filter_side query_filter_side_query_filter_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.query_filter_side
+ ADD CONSTRAINT query_filter_side_query_filter_fkey FOREIGN KEY (query_id, query_filter_index, query_filter_position) REFERENCES app.query_filter(query_id, index, "position") ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4139 (class 2606 OID 18105)
+-- Name: query_filter_side query_filter_side_query_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.query_filter_side
+ ADD CONSTRAINT query_filter_side_query_id_fkey FOREIGN KEY (query_id) REFERENCES app.query(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4140 (class 2606 OID 18115)
+-- Name: query_filter_side query_filter_side_role_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+--
+
+ALTER TABLE ONLY app.query_filter_side
+ ADD CONSTRAINT query_filter_side_role_id_fkey FOREIGN KEY (role_id) REFERENCES app.role(id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3652 (class 2606 OID 17473)
--- Name: menu menu_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4141 (class 2606 OID 19476)
+-- Name: query_filter_side query_filter_side_variable_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.menu
- ADD CONSTRAINT menu_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.query_filter_side
+ ADD CONSTRAINT query_filter_side_variable_id_fkey FOREIGN KEY (variable_id) REFERENCES app.variable(id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3653 (class 2606 OID 17478)
--- Name: menu menu_parent_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4127 (class 2606 OID 19531)
+-- Name: query query_filter_subquery_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.menu
- ADD CONSTRAINT menu_parent_id_fkey FOREIGN KEY (parent_id) REFERENCES app.menu(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY app.query
+ ADD CONSTRAINT query_filter_subquery_fkey FOREIGN KEY (query_filter_query_id, query_filter_index, query_filter_position, query_filter_side) REFERENCES app.query_filter_side(query_id, query_filter_index, query_filter_position, side) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3657 (class 2606 OID 17483)
--- Name: module_depends module_depends_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4128 (class 2606 OID 18125)
+-- Name: query query_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.module_depends
- ADD CONSTRAINT module_depends_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.query
+ ADD CONSTRAINT query_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
--
--- TOC entry 3658 (class 2606 OID 17488)
--- Name: module_depends module_depends_module_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4142 (class 2606 OID 18130)
+-- Name: query_join query_join_attribute_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.module_depends
- ADD CONSTRAINT module_depends_module_id_on_fkey FOREIGN KEY (module_id_on) REFERENCES app.module(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.query_join
+ ADD CONSTRAINT query_join_attribute_id_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED NOT VALID;
--
--- TOC entry 3654 (class 2606 OID 17493)
--- Name: module module_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4143 (class 2606 OID 18135)
+-- Name: query_join query_join_query_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.module
- ADD CONSTRAINT module_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) ON UPDATE SET NULL ON DELETE SET NULL DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY app.query_join
+ ADD CONSTRAINT query_join_query_id_fkey FOREIGN KEY (query_id) REFERENCES app.query(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
--
--- TOC entry 3655 (class 2606 OID 17498)
--- Name: module module_icon_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4144 (class 2606 OID 18140)
+-- Name: query_join query_join_relation_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.module
- ADD CONSTRAINT module_icon_id_fkey FOREIGN KEY (icon_id) REFERENCES app.icon(id) DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY app.query_join
+ ADD CONSTRAINT query_join_relation_id_fkey FOREIGN KEY (relation_id) REFERENCES app.relation(id) DEFERRABLE INITIALLY DEFERRED NOT VALID;
--
--- TOC entry 3659 (class 2606 OID 17503)
--- Name: module_language module_language_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4145 (class 2606 OID 18145)
+-- Name: query_lookup query_lookup_pg_index_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.module_language
- ADD CONSTRAINT module_language_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.query_lookup
+ ADD CONSTRAINT query_lookup_pg_index_id_fkey FOREIGN KEY (pg_index_id) REFERENCES app.pg_index(id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3656 (class 2606 OID 17508)
--- Name: module module_parent_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4146 (class 2606 OID 18150)
+-- Name: query_lookup query_lookup_query_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.module
- ADD CONSTRAINT module_parent_id_fkey FOREIGN KEY (parent_id) REFERENCES app.module(id) DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY app.query_lookup
+ ADD CONSTRAINT query_lookup_query_id_fkey FOREIGN KEY (query_id) REFERENCES app.query(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3738 (class 2606 OID 18019)
--- Name: module_start_form module_start_form_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4147 (class 2606 OID 18155)
+-- Name: query_order query_order_attribute_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.module_start_form
- ADD CONSTRAINT module_start_form_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.query_order
+ ADD CONSTRAINT query_order_attribute_id_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3739 (class 2606 OID 18024)
--- Name: module_start_form module_start_form_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4148 (class 2606 OID 18160)
+-- Name: query_order query_order_query_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.module_start_form
- ADD CONSTRAINT module_start_form_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.query_order
+ ADD CONSTRAINT query_order_query_id_fkey FOREIGN KEY (query_id) REFERENCES app.query(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3740 (class 2606 OID 18029)
--- Name: module_start_form module_start_form_role_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4129 (class 2606 OID 18165)
+-- Name: query query_relation_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.module_start_form
- ADD CONSTRAINT module_start_form_role_id_fkey FOREIGN KEY (role_id) REFERENCES app.role(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.query
+ ADD CONSTRAINT query_relation_id_fkey FOREIGN KEY (relation_id) REFERENCES app.relation(id) DEFERRABLE INITIALLY DEFERRED NOT VALID;
--
--- TOC entry 3744 (class 2606 OID 18064)
--- Name: open_form open_form_attribute_id_apply_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4149 (class 2606 OID 18170)
+-- Name: relation relation_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.open_form
- ADD CONSTRAINT open_form_attribute_id_apply_fkey FOREIGN KEY (attribute_id_apply) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.relation
+ ADD CONSTRAINT relation_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
--
--- TOC entry 3745 (class 2606 OID 18406)
--- Name: open_form open_form_collection_consumer_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4155 (class 2606 OID 18681)
+-- Name: role_access role_access_api_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.open_form
- ADD CONSTRAINT open_form_collection_consumer_id_fkey FOREIGN KEY (collection_consumer_id) REFERENCES app.collection_consumer(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.role_access
+ ADD CONSTRAINT role_access_api_id_fkey FOREIGN KEY (api_id) REFERENCES app.api(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3741 (class 2606 OID 18049)
--- Name: open_form open_form_column_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4156 (class 2606 OID 18175)
+-- Name: role_access role_access_attribute_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.open_form
- ADD CONSTRAINT open_form_column_id_fkey FOREIGN KEY (column_id) REFERENCES app."column"(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.role_access
+ ADD CONSTRAINT role_access_attribute_id_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
--
--- TOC entry 3742 (class 2606 OID 18054)
--- Name: open_form open_form_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4157 (class 2606 OID 19341)
+-- Name: role_access role_access_client_event_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.open_form
- ADD CONSTRAINT open_form_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.role_access
+ ADD CONSTRAINT role_access_client_event_id_fkey FOREIGN KEY (client_event_id) REFERENCES app.client_event(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3743 (class 2606 OID 18059)
--- Name: open_form open_form_form_id_open_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4158 (class 2606 OID 18180)
+-- Name: role_access role_access_collection_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.open_form
- ADD CONSTRAINT open_form_form_id_open_fkey FOREIGN KEY (form_id_open) REFERENCES app.form(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.role_access
+ ADD CONSTRAINT role_access_collection_id_fkey FOREIGN KEY (collection_id) REFERENCES app.collection(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3661 (class 2606 OID 17513)
--- Name: pg_function_depends pg_function_depends_attribute_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4159 (class 2606 OID 18185)
+-- Name: role_access role_access_menu_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.pg_function_depends
- ADD CONSTRAINT pg_function_depends_attribute_id_on_fkey FOREIGN KEY (attribute_id_on) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.role_access
+ ADD CONSTRAINT role_access_menu_id_fkey FOREIGN KEY (menu_id) REFERENCES app.menu(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
--
--- TOC entry 3662 (class 2606 OID 17518)
--- Name: pg_function_depends pg_function_depends_module_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4160 (class 2606 OID 18190)
+-- Name: role_access role_access_relation_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.pg_function_depends
- ADD CONSTRAINT pg_function_depends_module_id_on_fkey FOREIGN KEY (module_id_on) REFERENCES app.module(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.role_access
+ ADD CONSTRAINT role_access_relation_id_fkey FOREIGN KEY (relation_id) REFERENCES app.relation(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
--
--- TOC entry 3663 (class 2606 OID 17523)
--- Name: pg_function_depends pg_function_depends_pg_function_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4161 (class 2606 OID 18195)
+-- Name: role_access role_access_role_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.pg_function_depends
- ADD CONSTRAINT pg_function_depends_pg_function_id_fkey FOREIGN KEY (pg_function_id) REFERENCES app.pg_function(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.role_access
+ ADD CONSTRAINT role_access_role_id_fkey FOREIGN KEY (role_id) REFERENCES app.role(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
--
--- TOC entry 3664 (class 2606 OID 17528)
--- Name: pg_function_depends pg_function_depends_pg_function_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4162 (class 2606 OID 18971)
+-- Name: role_access role_access_widget_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.pg_function_depends
- ADD CONSTRAINT pg_function_depends_pg_function_id_on_fkey FOREIGN KEY (pg_function_id_on) REFERENCES app.pg_function(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.role_access
+ ADD CONSTRAINT role_access_widget_id_fkey FOREIGN KEY (widget_id) REFERENCES app.widget(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3665 (class 2606 OID 17533)
--- Name: pg_function_depends pg_function_depends_relation_id_on_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4163 (class 2606 OID 18200)
+-- Name: role_child role_child_role_id_child_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.pg_function_depends
- ADD CONSTRAINT pg_function_depends_relation_id_on_fkey FOREIGN KEY (relation_id_on) REFERENCES app.relation(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.role_child
+ ADD CONSTRAINT role_child_role_id_child_fkey FOREIGN KEY (role_id_child) REFERENCES app.role(id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3660 (class 2606 OID 17538)
--- Name: pg_function pg_function_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4164 (class 2606 OID 18205)
+-- Name: role_child role_child_role_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.pg_function
- ADD CONSTRAINT pg_function_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.role_child
+ ADD CONSTRAINT role_child_role_id_fkey FOREIGN KEY (role_id) REFERENCES app.role(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3666 (class 2606 OID 17543)
--- Name: pg_function_schedule pg_function_schedule_pg_function_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4154 (class 2606 OID 18210)
+-- Name: role role_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.pg_function_schedule
- ADD CONSTRAINT pg_function_schedule_pg_function_id_fkey FOREIGN KEY (pg_function_id) REFERENCES app.pg_function(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.role
+ ADD CONSTRAINT role_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
--
--- TOC entry 3668 (class 2606 OID 17548)
--- Name: pg_index_attribute pg_index_attribute_attribute_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4193 (class 2606 OID 18411)
+-- Name: tab tab_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.pg_index_attribute
- ADD CONSTRAINT pg_index_attribute_attribute_id_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.tab
+ ADD CONSTRAINT tab_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3669 (class 2606 OID 17553)
--- Name: pg_index_attribute pg_index_attribute_pg_index_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4027 (class 2606 OID 18417)
+-- Name: field tab_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.pg_index_attribute
- ADD CONSTRAINT pg_index_attribute_pg_index_id_fkey FOREIGN KEY (pg_index_id) REFERENCES app.pg_index(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.field
+ ADD CONSTRAINT tab_id_fkey FOREIGN KEY (tab_id) REFERENCES app.tab(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3667 (class 2606 OID 17558)
--- Name: pg_index pg_index_relation_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4242 (class 2606 OID 19450)
+-- Name: variable variable_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.pg_index
- ADD CONSTRAINT pg_index_relation_id_fkey FOREIGN KEY (relation_id) REFERENCES app.relation(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.variable
+ ADD CONSTRAINT variable_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3670 (class 2606 OID 17563)
--- Name: pg_trigger pg_trigger_pg_function_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4243 (class 2606 OID 19455)
+-- Name: variable variable_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.pg_trigger
- ADD CONSTRAINT pg_trigger_pg_function_id_fkey FOREIGN KEY (pg_function_id) REFERENCES app.pg_function(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.variable
+ ADD CONSTRAINT variable_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3671 (class 2606 OID 17568)
--- Name: pg_trigger pg_trigger_relation_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4207 (class 2606 OID 18945)
+-- Name: widget widget_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.pg_trigger
- ADD CONSTRAINT pg_trigger_relation_id_fkey FOREIGN KEY (relation_id) REFERENCES app.relation(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.widget
+ ADD CONSTRAINT widget_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3734 (class 2606 OID 17985)
--- Name: relation_policy policy_pg_function_id_excl_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4208 (class 2606 OID 18950)
+-- Name: widget widget_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
--
-ALTER TABLE ONLY app.relation_policy
- ADD CONSTRAINT policy_pg_function_id_excl_fkey FOREIGN KEY (pg_function_id_excl) REFERENCES app.pg_function(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY app.widget
+ ADD CONSTRAINT widget_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3735 (class 2606 OID 17990)
--- Name: relation_policy policy_pg_function_id_incl_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4213 (class 2606 OID 19073)
+-- Name: caption caption_article_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.relation_policy
- ADD CONSTRAINT policy_pg_function_id_incl_fkey FOREIGN KEY (pg_function_id_incl) REFERENCES app.pg_function(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.caption
+ ADD CONSTRAINT caption_article_id_fkey FOREIGN KEY (article_id) REFERENCES app.article(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3736 (class 2606 OID 17995)
--- Name: relation_policy policy_relation_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4214 (class 2606 OID 19078)
+-- Name: caption caption_attribute_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.relation_policy
- ADD CONSTRAINT policy_relation_id_fkey FOREIGN KEY (relation_id) REFERENCES app.relation(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.caption
+ ADD CONSTRAINT caption_attribute_id_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3737 (class 2606 OID 18000)
--- Name: relation_policy policy_role_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4215 (class 2606 OID 19335)
+-- Name: caption caption_client_event_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.relation_policy
- ADD CONSTRAINT policy_role_id_fkey FOREIGN KEY (role_id) REFERENCES app.role(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.caption
+ ADD CONSTRAINT caption_client_event_id_fkey FOREIGN KEY (client_event_id) REFERENCES app.client_event(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3672 (class 2606 OID 17573)
--- Name: preset preset_relation_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4216 (class 2606 OID 19083)
+-- Name: caption caption_column_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.preset
- ADD CONSTRAINT preset_relation_id_fkey FOREIGN KEY (relation_id) REFERENCES app.relation(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY instance.caption
+ ADD CONSTRAINT caption_column_id_fkey FOREIGN KEY (column_id) REFERENCES app."column"(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3673 (class 2606 OID 17578)
--- Name: preset_value preset_value_attribute_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4217 (class 2606 OID 19088)
+-- Name: caption caption_field_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.preset_value
- ADD CONSTRAINT preset_value_attribute_id_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY instance.caption
+ ADD CONSTRAINT caption_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3674 (class 2606 OID 17583)
--- Name: preset_value preset_value_preset_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4218 (class 2606 OID 19233)
+-- Name: caption caption_form_action_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.preset_value
- ADD CONSTRAINT preset_value_preset_id_fkey FOREIGN KEY (preset_id) REFERENCES app.preset(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY instance.caption
+ ADD CONSTRAINT caption_form_action_id_fkey FOREIGN KEY (form_action_id) REFERENCES app.form_action(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3675 (class 2606 OID 17588)
--- Name: preset_value preset_value_preset_id_refer_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4219 (class 2606 OID 19093)
+-- Name: caption caption_form_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.preset_value
- ADD CONSTRAINT preset_value_preset_id_refer_fkey FOREIGN KEY (preset_id_refer) REFERENCES app.preset(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.caption
+ ADD CONSTRAINT caption_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3682 (class 2606 OID 17593)
--- Name: query_choice query_choice_query_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4220 (class 2606 OID 19098)
+-- Name: caption caption_js_function_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query_choice
- ADD CONSTRAINT query_choice_query_id_fkey FOREIGN KEY (query_id) REFERENCES app.query(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.caption
+ ADD CONSTRAINT caption_js_function_id_fkey FOREIGN KEY (js_function_id) REFERENCES app.js_function(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3681 (class 2606 OID 18200)
--- Name: query query_collection_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4221 (class 2606 OID 19103)
+-- Name: caption caption_login_form_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query
- ADD CONSTRAINT query_collection_id_fkey FOREIGN KEY (collection_id) REFERENCES app.collection(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.caption
+ ADD CONSTRAINT caption_login_form_id_fkey FOREIGN KEY (login_form_id) REFERENCES app.login_form(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3676 (class 2606 OID 17598)
--- Name: query query_column_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4222 (class 2606 OID 19108)
+-- Name: caption caption_menu_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query
- ADD CONSTRAINT query_column_id_fkey FOREIGN KEY (column_id) REFERENCES app."column"(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY instance.caption
+ ADD CONSTRAINT caption_menu_id_fkey FOREIGN KEY (menu_id) REFERENCES app.menu(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3677 (class 2606 OID 17603)
--- Name: query query_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4223 (class 2606 OID 19587)
+-- Name: caption caption_menu_tab_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query
- ADD CONSTRAINT query_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY instance.caption
+ ADD CONSTRAINT caption_menu_tab_id_fkey FOREIGN KEY (menu_tab_id) REFERENCES app.menu_tab(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3683 (class 2606 OID 17608)
--- Name: query_filter query_filter_query_choice_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4224 (class 2606 OID 19113)
+-- Name: caption caption_module_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query_filter
- ADD CONSTRAINT query_filter_query_choice_id_fkey FOREIGN KEY (query_choice_id) REFERENCES app.query_choice(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.caption
+ ADD CONSTRAINT caption_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3684 (class 2606 OID 17613)
--- Name: query_filter query_filter_query_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4225 (class 2606 OID 19118)
+-- Name: caption caption_pg_function_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query_filter
- ADD CONSTRAINT query_filter_query_id_fkey FOREIGN KEY (query_id) REFERENCES app.query(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY instance.caption
+ ADD CONSTRAINT caption_pg_function_id_fkey FOREIGN KEY (pg_function_id) REFERENCES app.pg_function(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3685 (class 2606 OID 17618)
--- Name: query_filter_side query_filter_side_attribute_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4226 (class 2606 OID 19123)
+-- Name: caption caption_query_choice_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query_filter_side
- ADD CONSTRAINT query_filter_side_attribute_id_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.caption
+ ADD CONSTRAINT caption_query_choice_id_fkey FOREIGN KEY (query_choice_id) REFERENCES app.query_choice(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3691 (class 2606 OID 18209)
--- Name: query_filter_side query_filter_side_collection_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4227 (class 2606 OID 19128)
+-- Name: caption caption_role_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query_filter_side
- ADD CONSTRAINT query_filter_side_collection_id_fkey FOREIGN KEY (collection_id) REFERENCES app.collection(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.caption
+ ADD CONSTRAINT caption_role_id_fkey FOREIGN KEY (role_id) REFERENCES app.role(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3692 (class 2606 OID 18214)
--- Name: query_filter_side query_filter_side_column_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4228 (class 2606 OID 19133)
+-- Name: caption caption_tab_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query_filter_side
- ADD CONSTRAINT query_filter_side_column_id_fkey FOREIGN KEY (column_id) REFERENCES app."column"(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.caption
+ ADD CONSTRAINT caption_tab_id_fkey FOREIGN KEY (tab_id) REFERENCES app.tab(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3686 (class 2606 OID 17623)
--- Name: query_filter_side query_filter_side_field_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4229 (class 2606 OID 19138)
+-- Name: caption caption_widget_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query_filter_side
- ADD CONSTRAINT query_filter_side_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.caption
+ ADD CONSTRAINT caption_widget_id_fkey FOREIGN KEY (widget_id) REFERENCES app.widget(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3690 (class 2606 OID 18039)
--- Name: query_filter_side query_filter_side_preset_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4165 (class 2606 OID 18215)
+-- Name: data_log data_log_relation_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query_filter_side
- ADD CONSTRAINT query_filter_side_preset_id_fkey FOREIGN KEY (preset_id) REFERENCES app.preset(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.data_log
+ ADD CONSTRAINT data_log_relation_id_fkey FOREIGN KEY (relation_id) REFERENCES app.relation(id) ON UPDATE CASCADE ON DELETE CASCADE;
--
--- TOC entry 3687 (class 2606 OID 17628)
--- Name: query_filter_side query_filter_side_query_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4166 (class 2606 OID 18220)
+-- Name: data_log_value data_log_value_attribute_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query_filter_side
- ADD CONSTRAINT query_filter_side_query_id_fkey FOREIGN KEY (query_id) REFERENCES app.query(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.data_log_value
+ ADD CONSTRAINT data_log_value_attribute_id_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) ON UPDATE CASCADE ON DELETE CASCADE;
--
--- TOC entry 3688 (class 2606 OID 17633)
--- Name: query_filter_side query_filter_side_query_id_query_filter_position_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4167 (class 2606 OID 18225)
+-- Name: data_log_value data_log_value_attribute_id_nm_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query_filter_side
- ADD CONSTRAINT query_filter_side_query_id_query_filter_position_fkey FOREIGN KEY (query_id, query_filter_position) REFERENCES app.query_filter(query_id, "position") ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.data_log_value
+ ADD CONSTRAINT data_log_value_attribute_id_nm_fkey FOREIGN KEY (attribute_id_nm) REFERENCES app.attribute(id) ON UPDATE CASCADE ON DELETE CASCADE;
--
--- TOC entry 3689 (class 2606 OID 17638)
--- Name: query_filter_side query_filter_side_role_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4168 (class 2606 OID 18230)
+-- Name: data_log_value date_log_value_data_log_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query_filter_side
- ADD CONSTRAINT query_filter_side_role_id_fkey FOREIGN KEY (role_id) REFERENCES app.role(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.data_log_value
+ ADD CONSTRAINT date_log_value_data_log_id_fkey FOREIGN KEY (data_log_id) REFERENCES instance.data_log(id) ON UPDATE CASCADE ON DELETE CASCADE;
--
--- TOC entry 3678 (class 2606 OID 17643)
--- Name: query query_filter_subquery_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4191 (class 2606 OID 18377)
+-- Name: file_version file_version_file_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query
- ADD CONSTRAINT query_filter_subquery_fkey FOREIGN KEY (query_filter_side, query_filter_position, query_filter_query_id) REFERENCES app.query_filter_side(side, query_filter_position, query_id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.file_version
+ ADD CONSTRAINT file_version_file_id_fkey FOREIGN KEY (file_id) REFERENCES instance.file(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3679 (class 2606 OID 17648)
--- Name: query query_form_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4192 (class 2606 OID 18382)
+-- Name: file_version file_version_login_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query
- ADD CONSTRAINT query_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY instance.file_version
+ ADD CONSTRAINT file_version_login_id_fkey FOREIGN KEY (login_id) REFERENCES instance.login(id) ON UPDATE SET NULL ON DELETE SET NULL DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3693 (class 2606 OID 17653)
--- Name: query_join query_join_attribute_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4241 (class 2606 OID 19421)
+-- Name: ldap_attribute_login_meta ldap_attribute_login_meta_ldap_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query_join
- ADD CONSTRAINT query_join_attribute_id_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY instance.ldap_attribute_login_meta
+ ADD CONSTRAINT ldap_attribute_login_meta_ldap_id_fkey FOREIGN KEY (ldap_id) REFERENCES instance.ldap(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3694 (class 2606 OID 17658)
--- Name: query_join query_join_query_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4169 (class 2606 OID 18718)
+-- Name: ldap ldap_login_template_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query_join
- ADD CONSTRAINT query_join_query_id_fkey FOREIGN KEY (query_id) REFERENCES app.query(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY instance.ldap
+ ADD CONSTRAINT ldap_login_template_id_fkey FOREIGN KEY (login_template_id) REFERENCES instance.login_template(id) ON UPDATE SET NULL ON DELETE SET NULL DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3695 (class 2606 OID 17663)
--- Name: query_join query_join_relation_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4170 (class 2606 OID 18235)
+-- Name: ldap_role ldap_role_ldap_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query_join
- ADD CONSTRAINT query_join_relation_id_fkey FOREIGN KEY (relation_id) REFERENCES app.relation(id) DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY instance.ldap_role
+ ADD CONSTRAINT ldap_role_ldap_id_fkey FOREIGN KEY (ldap_id) REFERENCES instance.ldap(id) ON UPDATE CASCADE ON DELETE CASCADE;
--
--- TOC entry 3696 (class 2606 OID 17668)
--- Name: query_lookup query_lookup_pg_index_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4171 (class 2606 OID 18240)
+-- Name: ldap_role ldap_role_role_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query_lookup
- ADD CONSTRAINT query_lookup_pg_index_id_fkey FOREIGN KEY (pg_index_id) REFERENCES app.pg_index(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.ldap_role
+ ADD CONSTRAINT ldap_role_role_id_fkey FOREIGN KEY (role_id) REFERENCES app.role(id) ON UPDATE CASCADE ON DELETE CASCADE;
--
--- TOC entry 3697 (class 2606 OID 17673)
--- Name: query_lookup query_lookup_query_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4172 (class 2606 OID 18245)
+-- Name: log log_module_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query_lookup
- ADD CONSTRAINT query_lookup_query_id_fkey FOREIGN KEY (query_id) REFERENCES app.query(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.log
+ ADD CONSTRAINT log_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3698 (class 2606 OID 17678)
--- Name: query_order query_order_attribute_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4173 (class 2606 OID 18250)
+-- Name: log log_node_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query_order
- ADD CONSTRAINT query_order_attribute_id_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.log
+ ADD CONSTRAINT log_node_id_fkey FOREIGN KEY (node_id) REFERENCES instance_cluster.node(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3699 (class 2606 OID 17683)
--- Name: query_order query_order_query_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4236 (class 2606 OID 19316)
+-- Name: login_client_event login_client_event_client_event_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query_order
- ADD CONSTRAINT query_order_query_id_fkey FOREIGN KEY (query_id) REFERENCES app.query(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.login_client_event
+ ADD CONSTRAINT login_client_event_client_event_id_fkey FOREIGN KEY (client_event_id) REFERENCES app.client_event(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3680 (class 2606 OID 17688)
--- Name: query query_relation_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4237 (class 2606 OID 19321)
+-- Name: login_client_event login_client_event_login_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.query
- ADD CONSTRAINT query_relation_id_fkey FOREIGN KEY (relation_id) REFERENCES app.relation(id) DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY instance.login_client_event
+ ADD CONSTRAINT login_client_event_login_id_fkey FOREIGN KEY (login_id) REFERENCES instance.login(id) ON UPDATE CASCADE ON DELETE CASCADE;
--
--- TOC entry 3700 (class 2606 OID 17693)
--- Name: relation relation_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4249 (class 2606 OID 19621)
+-- Name: login_favorite login_favorite_form_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.relation
- ADD CONSTRAINT relation_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY instance.login_favorite
+ ADD CONSTRAINT login_favorite_form_id_fkey FOREIGN KEY (form_id) REFERENCES app.form(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3702 (class 2606 OID 17698)
--- Name: role_access role_access_attribute_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4250 (class 2606 OID 19611)
+-- Name: login_favorite login_favorite_login_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.role_access
- ADD CONSTRAINT role_access_attribute_id_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY instance.login_favorite
+ ADD CONSTRAINT login_favorite_login_id_fkey FOREIGN KEY (login_id) REFERENCES instance.login(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3706 (class 2606 OID 18221)
--- Name: role_access role_access_collection_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4251 (class 2606 OID 19616)
+-- Name: login_favorite login_favorite_module_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.role_access
- ADD CONSTRAINT role_access_collection_id_fkey FOREIGN KEY (collection_id) REFERENCES app.collection(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.login_favorite
+ ADD CONSTRAINT login_favorite_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3703 (class 2606 OID 17703)
--- Name: role_access role_access_menu_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4174 (class 2606 OID 19685)
+-- Name: login login_ldap_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.role_access
- ADD CONSTRAINT role_access_menu_id_fkey FOREIGN KEY (menu_id) REFERENCES app.menu(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY instance.login
+ ADD CONSTRAINT login_ldap_id_fkey FOREIGN KEY (ldap_id) REFERENCES instance.ldap(id);
--
--- TOC entry 3704 (class 2606 OID 17708)
--- Name: role_access role_access_relation_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4240 (class 2606 OID 19398)
+-- Name: login_meta login_meta_login_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.role_access
- ADD CONSTRAINT role_access_relation_id_fkey FOREIGN KEY (relation_id) REFERENCES app.relation(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY instance.login_meta
+ ADD CONSTRAINT login_meta_login_id_fkey FOREIGN KEY (login_id) REFERENCES instance.login(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3705 (class 2606 OID 17713)
--- Name: role_access role_access_role_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4252 (class 2606 OID 19635)
+-- Name: login_options login_options_field_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.role_access
- ADD CONSTRAINT role_access_role_id_fkey FOREIGN KEY (role_id) REFERENCES app.role(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY instance.login_options
+ ADD CONSTRAINT login_options_field_id_fkey FOREIGN KEY (field_id) REFERENCES app.field(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3707 (class 2606 OID 17718)
--- Name: role_child role_child_role_id_child_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4253 (class 2606 OID 19640)
+-- Name: login_options login_options_login_favorite_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.role_child
- ADD CONSTRAINT role_child_role_id_child_fkey FOREIGN KEY (role_id_child) REFERENCES app.role(id) DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.login_options
+ ADD CONSTRAINT login_options_login_favorite_id_fkey FOREIGN KEY (login_favorite_id) REFERENCES instance.login_favorite(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3708 (class 2606 OID 17723)
--- Name: role_child role_child_role_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4254 (class 2606 OID 19645)
+-- Name: login_options login_options_login_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.role_child
- ADD CONSTRAINT role_child_role_id_fkey FOREIGN KEY (role_id) REFERENCES app.role(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.login_options
+ ADD CONSTRAINT login_options_login_id_fkey FOREIGN KEY (login_id) REFERENCES instance.login(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3701 (class 2606 OID 17728)
--- Name: role role_module_id_fkey; Type: FK CONSTRAINT; Schema: app; Owner: -
+-- TOC entry 4175 (class 2606 OID 18260)
+-- Name: login_role login_role_login_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY app.role
- ADD CONSTRAINT role_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED NOT VALID;
+ALTER TABLE ONLY instance.login_role
+ ADD CONSTRAINT login_role_login_id_fkey FOREIGN KEY (login_id) REFERENCES instance.login(id) ON UPDATE CASCADE ON DELETE CASCADE;
--
--- TOC entry 3709 (class 2606 OID 17733)
--- Name: data_log data_log_relation_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
+-- TOC entry 4176 (class 2606 OID 18265)
+-- Name: login_role login_role_role_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY instance.data_log
- ADD CONSTRAINT data_log_relation_id_fkey FOREIGN KEY (relation_id) REFERENCES app.relation(id) ON UPDATE CASCADE ON DELETE CASCADE;
+ALTER TABLE ONLY instance.login_role
+ ADD CONSTRAINT login_role_role_id_fkey FOREIGN KEY (role_id) REFERENCES app.role(id) ON UPDATE CASCADE ON DELETE CASCADE;
--
--- TOC entry 3710 (class 2606 OID 17738)
--- Name: data_log_value data_log_value_attribute_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
+-- TOC entry 4200 (class 2606 OID 18757)
+-- Name: login_search_dict login_search_dict_login_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY instance.data_log_value
- ADD CONSTRAINT data_log_value_attribute_id_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) ON UPDATE CASCADE ON DELETE CASCADE;
+ALTER TABLE ONLY instance.login_search_dict
+ ADD CONSTRAINT login_search_dict_login_id_fkey FOREIGN KEY (login_id) REFERENCES instance.login(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3711 (class 2606 OID 17743)
--- Name: data_log_value data_log_value_attribute_id_nm_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
+-- TOC entry 4201 (class 2606 OID 18762)
+-- Name: login_search_dict login_search_dict_login_template_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY instance.data_log_value
- ADD CONSTRAINT data_log_value_attribute_id_nm_fkey FOREIGN KEY (attribute_id_nm) REFERENCES app.attribute(id) ON UPDATE CASCADE ON DELETE CASCADE;
+ALTER TABLE ONLY instance.login_search_dict
+ ADD CONSTRAINT login_search_dict_login_template_id_fkey FOREIGN KEY (login_template_id) REFERENCES instance.login_template(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3712 (class 2606 OID 17748)
--- Name: data_log_value date_log_value_data_log_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
+-- TOC entry 4238 (class 2606 OID 19377)
+-- Name: login_session login_session_login_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY instance.data_log_value
- ADD CONSTRAINT date_log_value_data_log_id_fkey FOREIGN KEY (data_log_id) REFERENCES instance.data_log(id) ON UPDATE CASCADE ON DELETE CASCADE;
+ALTER TABLE ONLY instance.login_session
+ ADD CONSTRAINT login_session_login_id_fkey FOREIGN KEY (login_id) REFERENCES instance.login(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3713 (class 2606 OID 17753)
--- Name: ldap_role ldap_role_ldap_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
+-- TOC entry 4239 (class 2606 OID 19382)
+-- Name: login_session login_session_node_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY instance.ldap_role
- ADD CONSTRAINT ldap_role_ldap_id_fkey FOREIGN KEY (ldap_id) REFERENCES instance.ldap(id) ON UPDATE CASCADE ON DELETE CASCADE;
+ALTER TABLE ONLY instance.login_session
+ ADD CONSTRAINT login_session_node_id_fkey FOREIGN KEY (node_id) REFERENCES instance_cluster.node(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3714 (class 2606 OID 17758)
--- Name: ldap_role ldap_role_role_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
+-- TOC entry 4177 (class 2606 OID 18270)
+-- Name: login_setting login_setting_login_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY instance.ldap_role
- ADD CONSTRAINT ldap_role_role_id_fkey FOREIGN KEY (role_id) REFERENCES app.role(id) ON UPDATE CASCADE ON DELETE CASCADE;
+ALTER TABLE ONLY instance.login_setting
+ ADD CONSTRAINT login_setting_login_id_fkey FOREIGN KEY (login_id) REFERENCES instance.login(id) ON UPDATE CASCADE ON DELETE CASCADE;
--
--- TOC entry 3715 (class 2606 OID 17871)
--- Name: log log_module_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
+-- TOC entry 4178 (class 2606 OID 18704)
+-- Name: login_setting login_setting_login_template_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY instance.log
- ADD CONSTRAINT log_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.login_setting
+ ADD CONSTRAINT login_setting_login_template_id_fkey FOREIGN KEY (login_template_id) REFERENCES instance.login_template(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3716 (class 2606 OID 18493)
--- Name: log log_node_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
+-- TOC entry 4179 (class 2606 OID 18275)
+-- Name: login_token_fixed login_token_fixed_login_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY instance.log
- ADD CONSTRAINT log_node_id_fkey FOREIGN KEY (node_id) REFERENCES instance_cluster.node(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.login_token_fixed
+ ADD CONSTRAINT login_token_fixed_login_id_fkey FOREIGN KEY (login_id) REFERENCES instance.login(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3717 (class 2606 OID 17763)
--- Name: login login_ldap_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
+-- TOC entry 4210 (class 2606 OID 18998)
+-- Name: login_widget_group_item login_widget_group_item_login_widget_group_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY instance.login
- ADD CONSTRAINT login_ldap_id_fkey FOREIGN KEY (ldap_id) REFERENCES instance.ldap(id) ON UPDATE CASCADE ON DELETE CASCADE;
+ALTER TABLE ONLY instance.login_widget_group_item
+ ADD CONSTRAINT login_widget_group_item_login_widget_group_id_fkey FOREIGN KEY (login_widget_group_id) REFERENCES instance.login_widget_group(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3718 (class 2606 OID 17768)
--- Name: login_role login_role_login_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
+-- TOC entry 4211 (class 2606 OID 19008)
+-- Name: login_widget_group_item login_widget_group_item_module_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY instance.login_role
- ADD CONSTRAINT login_role_login_id_fkey FOREIGN KEY (login_id) REFERENCES instance.login(id) ON UPDATE CASCADE ON DELETE CASCADE;
+ALTER TABLE ONLY instance.login_widget_group_item
+ ADD CONSTRAINT login_widget_group_item_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3719 (class 2606 OID 17773)
--- Name: login_role login_role_role_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
+-- TOC entry 4212 (class 2606 OID 19003)
+-- Name: login_widget_group_item login_widget_group_item_widget_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY instance.login_role
- ADD CONSTRAINT login_role_role_id_fkey FOREIGN KEY (role_id) REFERENCES app.role(id) ON UPDATE CASCADE ON DELETE CASCADE;
+ALTER TABLE ONLY instance.login_widget_group_item
+ ADD CONSTRAINT login_widget_group_item_widget_id_fkey FOREIGN KEY (widget_id) REFERENCES app.widget(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3720 (class 2606 OID 17778)
--- Name: login_setting login_setting_login_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
+-- TOC entry 4209 (class 2606 OID 18983)
+-- Name: login_widget_group login_widget_group_login_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY instance.login_setting
- ADD CONSTRAINT login_setting_login_id_fkey FOREIGN KEY (login_id) REFERENCES instance.login(id) ON UPDATE CASCADE ON DELETE CASCADE;
+ALTER TABLE ONLY instance.login_widget_group
+ ADD CONSTRAINT login_widget_group_login_id_fkey FOREIGN KEY (login_id) REFERENCES instance.login(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3721 (class 2606 OID 17783)
--- Name: login_token_fixed login_token_fixed_login_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
+-- TOC entry 4180 (class 2606 OID 19058)
+-- Name: mail_account mail_account_oauth_client_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY instance.login_token_fixed
- ADD CONSTRAINT login_token_fixed_login_id_fkey FOREIGN KEY (login_id) REFERENCES instance.login(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ALTER TABLE ONLY instance.mail_account
+ ADD CONSTRAINT mail_account_oauth_client_id_fkey FOREIGN KEY (oauth_client_id) REFERENCES instance.oauth_client(id) DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3722 (class 2606 OID 17788)
+-- TOC entry 4181 (class 2606 OID 18280)
-- Name: mail_spool mail_spool_attribute_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
ALTER TABLE ONLY instance.mail_spool
- ADD CONSTRAINT mail_spool_attribute_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) ON UPDATE SET NULL ON DELETE SET NULL DEFERRABLE INITIALLY DEFERRED;
+ ADD CONSTRAINT mail_spool_attribute_fkey FOREIGN KEY (attribute_id) REFERENCES app.attribute(id) ON UPDATE SET NULL ON DELETE SET NULL DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3724 (class 2606 OID 17793)
+-- TOC entry 4183 (class 2606 OID 18285)
-- Name: mail_spool_file mail_spool_file_mail_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
ALTER TABLE ONLY instance.mail_spool_file
- ADD CONSTRAINT mail_spool_file_mail_fkey FOREIGN KEY (mail_id) REFERENCES instance.mail_spool(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ ADD CONSTRAINT mail_spool_file_mail_fkey FOREIGN KEY (mail_id) REFERENCES instance.mail_spool(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3723 (class 2606 OID 17798)
+-- TOC entry 4182 (class 2606 OID 18290)
-- Name: mail_spool mail_spool_mail_account_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
ALTER TABLE ONLY instance.mail_spool
- ADD CONSTRAINT mail_spool_mail_account_fkey FOREIGN KEY (mail_account_id) REFERENCES instance.mail_account(id) ON UPDATE SET NULL ON DELETE SET NULL DEFERRABLE INITIALLY DEFERRED;
+ ADD CONSTRAINT mail_spool_mail_account_fkey FOREIGN KEY (mail_account_id) REFERENCES instance.mail_account(id) ON UPDATE SET NULL ON DELETE SET NULL DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4206 (class 2606 OID 18922)
+-- Name: mail_traffic mail_traffic_mail_account_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.mail_traffic
+ ADD CONSTRAINT mail_traffic_mail_account_fkey FOREIGN KEY (mail_account_id) REFERENCES instance.mail_account(id) ON UPDATE SET NULL ON DELETE SET NULL DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3725 (class 2606 OID 17803)
--- Name: module_option module_option_module_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
+-- TOC entry 4184 (class 2606 OID 18295)
+-- Name: module_meta module_option_module_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
-ALTER TABLE ONLY instance.module_option
- ADD CONSTRAINT module_option_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE;
+ALTER TABLE ONLY instance.module_meta
+ ADD CONSTRAINT module_option_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE;
--
--- TOC entry 3726 (class 2606 OID 17808)
+-- TOC entry 4185 (class 2606 OID 18300)
-- Name: preset_record preset_record_preset_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
ALTER TABLE ONLY instance.preset_record
- ADD CONSTRAINT preset_record_preset_id_fkey FOREIGN KEY (preset_id) REFERENCES app.preset(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ ADD CONSTRAINT preset_record_preset_id_fkey FOREIGN KEY (preset_id) REFERENCES app.preset(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4203 (class 2606 OID 18819)
+-- Name: pwa_domain pwa_domain_module_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.pwa_domain
+ ADD CONSTRAINT pwa_domain_module_id_fkey FOREIGN KEY (module_id) REFERENCES app.module(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4202 (class 2606 OID 18791)
+-- Name: rest_spool rest_spool_pg_function_id_callback_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
+--
+
+ALTER TABLE ONLY instance.rest_spool
+ ADD CONSTRAINT rest_spool_pg_function_id_callback_fkey FOREIGN KEY (pg_function_id_callback) REFERENCES app.pg_function(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3728 (class 2606 OID 17851)
+-- TOC entry 4186 (class 2606 OID 18305)
-- Name: schedule scheduler_pg_function_schedule_id_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
ALTER TABLE ONLY instance.schedule
- ADD CONSTRAINT scheduler_pg_function_schedule_id_fkey FOREIGN KEY (pg_function_schedule_id) REFERENCES app.pg_function_schedule(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ ADD CONSTRAINT scheduler_pg_function_schedule_id_fkey FOREIGN KEY (pg_function_schedule_id) REFERENCES app.pg_function_schedule(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3727 (class 2606 OID 17818)
+-- TOC entry 4187 (class 2606 OID 18310)
-- Name: schedule scheduler_task_name_fkey; Type: FK CONSTRAINT; Schema: instance; Owner: -
--
ALTER TABLE ONLY instance.schedule
- ADD CONSTRAINT scheduler_task_name_fkey FOREIGN KEY (task_name) REFERENCES instance.task(name) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ ADD CONSTRAINT scheduler_task_name_fkey FOREIGN KEY (task_name) REFERENCES instance.task(name) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3770 (class 2606 OID 18485)
+-- TOC entry 4188 (class 2606 OID 18315)
-- Name: node_event node_event_node_id_fkey; Type: FK CONSTRAINT; Schema: instance_cluster; Owner: -
--
ALTER TABLE ONLY instance_cluster.node_event
- ADD CONSTRAINT node_event_node_id_fkey FOREIGN KEY (node_id) REFERENCES instance_cluster.node(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ ADD CONSTRAINT node_event_node_id_fkey FOREIGN KEY (node_id) REFERENCES instance_cluster.node(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3771 (class 2606 OID 18516)
+-- TOC entry 4189 (class 2606 OID 18320)
-- Name: node_schedule node_schedule_node_id_fkey; Type: FK CONSTRAINT; Schema: instance_cluster; Owner: -
--
ALTER TABLE ONLY instance_cluster.node_schedule
- ADD CONSTRAINT node_schedule_node_id_fkey FOREIGN KEY (node_id) REFERENCES instance_cluster.node(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ ADD CONSTRAINT node_schedule_node_id_fkey FOREIGN KEY (node_id) REFERENCES instance_cluster.node(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
--
--- TOC entry 3772 (class 2606 OID 18521)
+-- TOC entry 4190 (class 2606 OID 18325)
-- Name: node_schedule node_schedule_schedule_id_fkey; Type: FK CONSTRAINT; Schema: instance_cluster; Owner: -
--
ALTER TABLE ONLY instance_cluster.node_schedule
- ADD CONSTRAINT node_schedule_schedule_id_fkey FOREIGN KEY (schedule_id) REFERENCES instance.schedule(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+ ADD CONSTRAINT node_schedule_schedule_id_fkey FOREIGN KEY (schedule_id) REFERENCES instance.schedule(id) ON UPDATE CASCADE ON DELETE CASCADE DEFERRABLE INITIALLY DEFERRED;
+
+
+--
+-- TOC entry 4390 (class 0 OID 0)
+-- Dependencies: 4
+-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: -
+--
+
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
+GRANT ALL ON SCHEMA public TO PUBLIC;
--- Completed on 2022-07-12 12:35:48
+-- Completed on 2025-02-05 11:27:36
--
-- PostgreSQL database dump complete
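Nearly every foreign key restored by this dump is declared DEFERRABLE INITIALLY DEFERRED, so referential checks only run at COMMIT instead of per statement. A minimal pgx sketch of the effect inside a transaction; the DSN and the `parent`/`child` tables are hypothetical and not part of the schema above:

```go
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, "postgres://user:pass@localhost:5432/db") // hypothetical DSN
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	tx, err := conn.Begin(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer tx.Rollback(ctx)

	// With a deferred FK, the child row may be inserted before its parent;
	// the constraint is only validated when the transaction commits.
	if _, err := tx.Exec(ctx, `INSERT INTO child (id, parent_id) VALUES (1, 10)`); err != nil {
		log.Fatal(err)
	}
	if _, err := tx.Exec(ctx, `INSERT INTO parent (id) VALUES (10)`); err != nil {
		log.Fatal(err)
	}

	// An immediate (non-deferred) FK would have rejected the first INSERT already.
	if err := tx.Commit(ctx); err != nil {
		log.Fatal(err)
	}
}
```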
diff --git a/db/upgrade/upgrade.go b/db/upgrade/upgrade.go
index 988557ee..938fec77 100644
--- a/db/upgrade/upgrade.go
+++ b/db/upgrade/upgrade.go
@@ -1,6 +1,7 @@
package upgrade
import (
+ "context"
"fmt"
"os"
"path/filepath"
@@ -23,8 +24,7 @@ import (
// DB version is related to major+minor application version (e. g. app: 1.3.2.1999 -> 1.3)
// -> DB changes are therefore exclusive to major or minor releases
func RunIfRequired() error {
- _, appVersionCut, _, dbVersionCut := config.GetAppVersions()
- if appVersionCut == dbVersionCut {
+ if config.GetAppVersion().Cut == config.GetDbVersionCut() {
return nil
}
@@ -33,38 +33,21 @@ func RunIfRequired() error {
}
// reload config store, in case upgrade changed it
- if err := config.LoadFromDb(); err != nil {
- return err
- }
- return nil
+ return config.LoadFromDb()
}
// loop upgrade procedure until DB version matches application version
func startLoop() error {
-
log.Info("server", "version discrepancy (platform<->database) recognized, starting automatic upgrade")
for {
- // get version info
- _, appVersionCut, _, dbVersionCut := config.GetAppVersions()
-
// abort when versions match
- if appVersionCut == dbVersionCut {
+ if config.GetAppVersion().Cut == config.GetDbVersionCut() {
log.Info("server", "version discrepancy has been resolved")
return nil
}
- tx, err := db.Pool.Begin(db.Ctx)
- if err != nil {
- return err
- }
-
- if err := oneIteration(tx, dbVersionCut); err != nil {
- tx.Rollback(db.Ctx)
- return err
- }
-
- if err := tx.Commit(db.Ctx); err != nil {
+ if err := oneIteration(config.GetDbVersionCut()); err != nil {
return err
}
log.Info("server", "upgrade successful")
@@ -72,7 +55,15 @@ func startLoop() error {
return nil
}
-func oneIteration(tx pgx.Tx, dbVersionCut string) error {
+func oneIteration(dbVersionCut string) error {
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutDbTask)
+ defer ctxCanc()
+
+ tx, err := db.Pool.Begin(ctx)
+ if err != nil {
+ return err
+ }
+ defer tx.Rollback(ctx)
// log before upgrade because changes to log table index
// caused infinite lock when trying to log to DB afterwards
@@ -84,26 +75,1921 @@ func oneIteration(tx pgx.Tx, dbVersionCut string) error {
return fmt.Errorf("DB version '%s' not recognized, platform update required",
dbVersionCut)
}
- dbVersionCutNew, err := upgradeFunctions[dbVersionCut](tx)
+ dbVersionCutNew, err := upgradeFunctions[dbVersionCut](ctx, tx)
if err != nil {
log.Error("server", "upgrade NOT successful", err)
return err
}
// update database version
- return config.SetString_tx(tx, "dbVersionCut", dbVersionCutNew)
+ if err := config.SetString_tx(ctx, tx, "dbVersionCut", dbVersionCutNew); err != nil {
+ return err
+ }
+ return tx.Commit(ctx)
}
// upgrade functions for database
// mapped by current database version string, returns new database version string
-var upgradeFunctions = map[string]func(tx pgx.Tx) (string, error){
+var upgradeFunctions = map[string]func(ctx context.Context, tx pgx.Tx) (string, error){
// clean up on next release
- // ALTER TABLE app.attribute ALTER COLUMN content_use
- // TYPE app.attribute_content_use USING content_use::text::app.attribute_content_use;
+ /*
+ ALTER TABLE app.field ALTER COLUMN flags
+ TYPE app.field_flag[] USING flags::CHARACTER VARYING(12)[]::app.field_flag[];
+
+ ALTER TABLE app.collection_consumer ALTER COLUMN flags
+ TYPE app.collection_consumer_flag[] USING flags::CHARACTER VARYING(24)[]::app.collection_consumer_flag[];
+
+ ALTER TABLE instance.login_setting ALTER COLUMN form_actions_align
+ TYPE instance.align_horizontal USING form_actions_align::TEXT::instance.align_horizontal;
+
+ ALTER TABLE app.menu ALTER COLUMN menu_tab_id SET NOT NULL;
+ ALTER TABLE app.menu DROP COLUMN module_id;
+
+ ALTER TABLE app.collection_consumer DROP COLUMN multi_value;
+ ALTER TABLE app.collection_consumer DROP COLUMN no_display_empty;
+
+ -- fix bad upgrade script (column style 'monospace' was wrongly added in '3.8->3.9' script instead of '3.9->3.10' - some 3.10 instances do not have it)
+ -- remove temporary fix in initSystem() (in r3.go) when 3.11 releases
+ ALTER table app.column ALTER COLUMN styles TYPE TEXT[];
+ DROP TYPE app.column_style;
+ CREATE TYPE app.column_style AS ENUM ('bold', 'italic', 'alignEnd', 'alignMid', 'clipboard', 'hide', 'vertical', 'wrap', 'monospace', 'previewLarge', 'boolAtrIcon');
+ ALTER TABLE app.column ALTER COLUMN styles TYPE app.column_style[] USING styles::TEXT[]::app.column_style[];
+ */
+
+ "3.9": func(ctx context.Context, tx pgx.Tx) (string, error) {
+ _, err := tx.Exec(ctx, `
+ -- cleanup from last release
+ ALTER TABLE app.pg_function ALTER COLUMN volatility DROP DEFAULT;
+
+ ALTER TABLE app.pg_function ALTER volatility
+ TYPE app.pg_function_volatility USING volatility::TEXT::app.pg_function_volatility;
+
+ -- join query filter
+ ALTER TABLE app.query ADD COLUMN query_filter_index SMALLINT;
+ ALTER TABLE app.query DROP CONSTRAINT query_filter_subquery_fkey;
+
+ ALTER TABLE app.query_filter_side ADD COLUMN query_filter_index SMALLINT NOT NULL DEFAULT 0;
+ ALTER TABLE app.query_filter_side ALTER COLUMN query_filter_index DROP DEFAULT;
+ ALTER TABLE app.query_filter_side DROP CONSTRAINT query_filter_side_query_id_query_filter_position_fkey;
+ ALTER TABLE app.query_filter_side DROP CONSTRAINT query_filter_side_pkey;
+ ALTER TABLE app.query_filter_side ADD CONSTRAINT query_filter_side_pkey PRIMARY KEY (query_id, query_filter_index, query_filter_position, side);
+
+ ALTER TABLE app.query_filter ADD COLUMN index SMALLINT NOT NULL DEFAULT 0;
+ ALTER TABLE app.query_filter ALTER COLUMN index DROP DEFAULT;
+ ALTER TABLE app.query_filter DROP CONSTRAINT query_filter_pkey;
+ ALTER TABLE app.query_filter ADD CONSTRAINT query_filter_pkey PRIMARY KEY (query_id, "index", "position");
+
+ ALTER TABLE app.query_filter_side ADD CONSTRAINT query_filter_side_query_filter_fkey FOREIGN KEY (query_id, query_filter_index, query_filter_position)
+ REFERENCES app.query_filter (query_id, "index", "position") MATCH SIMPLE
+ ON UPDATE CASCADE
+ ON DELETE CASCADE
+ DEFERRABLE INITIALLY DEFERRED;
+
+ ALTER TABLE app.query ADD CONSTRAINT query_filter_subquery_fkey FOREIGN KEY (query_filter_query_id, query_filter_index, query_filter_position, query_filter_side)
+ REFERENCES app.query_filter_side (query_id, query_filter_index, query_filter_position, side) MATCH SIMPLE
+ ON UPDATE CASCADE
+ ON DELETE CASCADE
+ DEFERRABLE INITIALLY DEFERRED;
+
+ UPDATE app.query
+ SET query_filter_index = 0
+ WHERE query_filter_position IS NOT NULL;
+
+ -- field flags
+ CREATE TYPE app.field_flag AS ENUM ('alignEnd','hideInputs','monospace');
+ ALTER TABLE app.field ADD COLUMN flags TEXT[] NOT NULL DEFAULT '{}';
+ ALTER TABLE app.field ALTER COLUMN flags DROP DEFAULT;
+
+ -- collection consumer flags
+ CREATE TYPE app.collection_consumer_flag AS ENUM ('multiValue','noDisplayEmpty','showRowCount');
+ ALTER TABLE app.collection_consumer ALTER COLUMN multi_value DROP NOT NULL;
+ ALTER TABLE app.collection_consumer ALTER COLUMN no_display_empty DROP NOT NULL;
+ ALTER TABLE app.collection_consumer ADD COLUMN flags TEXT[] NOT NULL DEFAULT '{}';
+ ALTER TABLE app.collection_consumer ALTER COLUMN flags DROP DEFAULT;
+
+ UPDATE app.collection_consumer SET flags = ARRAY_APPEND(flags, 'multiValue') WHERE multi_value;
+ UPDATE app.collection_consumer SET flags = ARRAY_APPEND(flags, 'noDisplayEmpty') WHERE no_display_empty;
+
+ -- make column styles not nullable
+ UPDATE app.column SET styles = '{}' WHERE styles IS NULL;
+ ALTER TABLE app.column ALTER COLUMN styles SET NOT NULL;
+
+ -- barcode attribute use
+ ALTER TYPE app.attribute_content_use ADD VALUE 'barcode';
+
+ -- new filter side content
+ ALTER TYPE app.filter_side_content ADD VALUE 'getter';
+
+ -- menu tabs
+ CREATE TABLE IF NOT EXISTS app.menu_tab(
+ id uuid NOT NULL,
+ module_id uuid NOT NULL,
+ icon_id uuid,
+ "position" integer NOT NULL,
+ CONSTRAINT menu_tab_pkey PRIMARY KEY (id),
+ CONSTRAINT menu_tab_module_id_fkey FOREIGN KEY (module_id)
+ REFERENCES app.module (id) MATCH SIMPLE
+ ON UPDATE CASCADE
+ ON DELETE CASCADE
+ DEFERRABLE INITIALLY DEFERRED,
+ CONSTRAINT menu_tab_icon_id_fkey FOREIGN KEY (icon_id)
+ REFERENCES app.icon (id) MATCH SIMPLE
+ ON UPDATE NO ACTION
+ ON DELETE NO ACTION
+ DEFERRABLE INITIALLY DEFERRED
+ );
+
+ CREATE INDEX IF NOT EXISTS fki_menu_tab_icon_id_fkey
+ ON app.menu_tab USING btree (icon_id ASC NULLS LAST);
+
+ CREATE INDEX IF NOT EXISTS fki_menu_tab_module_id_fkey
+ ON app.menu_tab USING btree (module_id ASC NULLS LAST);
+
+ -- menu tab captions
+ ALTER TYPE app.caption_content ADD VALUE 'menuTabTitle';
+
+ ALTER TABLE app.caption ADD COLUMN menu_tab_id uuid;
+ ALTER TABLE app.caption ADD CONSTRAINT caption_menu_tab_id_fkey FOREIGN KEY (menu_tab_id)
+ REFERENCES app.menu_tab (id) MATCH SIMPLE
+ ON UPDATE CASCADE
+ ON DELETE CASCADE
+ DEFERRABLE INITIALLY DEFERRED;
+
+ CREATE INDEX fki_caption_menu_tab_id_fkey ON app.caption USING BTREE (menu_tab_id ASC NULLS LAST);
+
+ ALTER TABLE instance.caption ADD COLUMN menu_tab_id uuid;
+ ALTER TABLE instance.caption ADD CONSTRAINT caption_menu_tab_id_fkey FOREIGN KEY (menu_tab_id)
+ REFERENCES app.menu_tab (id) MATCH SIMPLE
+ ON UPDATE CASCADE
+ ON DELETE CASCADE
+ DEFERRABLE INITIALLY DEFERRED;
+
+ CREATE INDEX fki_caption_menu_tab_id_fkey ON instance.caption USING BTREE (menu_tab_id ASC NULLS LAST);
+
+ -- generate first menu tab
+ INSERT INTO app.menu_tab (id, module_id, position)
+ SELECT gen_random_uuid(), id, 0 FROM app.module;
+
+	-- menu association with tabs
+ ALTER TABLE app.menu ALTER COLUMN module_id DROP NOT NULL;
+ ALTER TABLE app.menu ADD COLUMN menu_tab_id UUID;
+ ALTER TABLE app.menu ADD CONSTRAINT menu_menu_tab_id_fkey FOREIGN KEY (menu_tab_id)
+ REFERENCES app.menu_tab (id) MATCH SIMPLE
+ ON UPDATE CASCADE
+ ON DELETE CASCADE
+ DEFERRABLE INITIALLY DEFERRED;
+
+ UPDATE app.menu AS m
+ SET menu_tab_id = (
+ SELECT id
+ FROM app.menu_tab
+ WHERE module_id = m.module_id
+ );
+
+ -- form state as form state condition
+ ALTER TABLE app.form_state_condition_side ADD COLUMN form_state_id_result UUID;
+ ALTER TABLE app.form_state_condition_side ADD CONSTRAINT form_state_condition_side_form_state_id_result_fkey FOREIGN KEY (form_state_id_result)
+ REFERENCES app.form_state (id) MATCH SIMPLE
+ ON UPDATE NO ACTION
+ ON DELETE NO ACTION
+ DEFERRABLE INITIALLY DEFERRED;
+
+ CREATE INDEX IF NOT EXISTS fki_form_state_condition_side_form_state_id_result_fkey
+ ON app.form_state_condition_side USING btree (form_state_id_result ASC NULLS LAST);
+
+ ALTER TYPE app.filter_side_content ADD VALUE 'formState';
+
+ -- persistent login config
+ ALTER TABLE instance.login ADD COLUMN date_favorites BIGINT NOT NULL DEFAULT 0;
+ ALTER TABLE instance.login ALTER COLUMN date_favorites DROP DEFAULT;
+
+ -- login favorites
+ CREATE TABLE instance.login_favorite (
+ id uuid NOT NULL,
+ login_id integer NOT NULL,
+ module_id uuid NOT NULL,
+ form_id uuid NOT NULL,
+ record_id bigint,
+ title character varying(128),
+ "position" smallint NOT NULL,
+ CONSTRAINT login_favorite_pkey PRIMARY KEY (id),
+ CONSTRAINT login_favorite_login_id_fkey FOREIGN KEY (login_id)
+ REFERENCES instance.login (id) MATCH SIMPLE
+ ON UPDATE CASCADE
+ ON DELETE CASCADE
+ DEFERRABLE INITIALLY DEFERRED,
+ CONSTRAINT login_favorite_module_id_fkey FOREIGN KEY (module_id)
+ REFERENCES app.module (id) MATCH SIMPLE
+ ON UPDATE CASCADE
+ ON DELETE CASCADE
+ DEFERRABLE INITIALLY DEFERRED,
+ CONSTRAINT login_favorite_form_id_fkey FOREIGN KEY (form_id)
+ REFERENCES app.form (id) MATCH SIMPLE
+ ON UPDATE CASCADE
+ ON DELETE CASCADE
+ DEFERRABLE INITIALLY DEFERRED
+ );
+ CREATE INDEX fki_login_favorite_login_id_fkey ON instance.login_favorite USING BTREE (login_id ASC NULLS LAST);
+ CREATE INDEX fki_login_favorite_module_id_fkey ON instance.login_favorite USING BTREE (module_id ASC NULLS LAST);
+ CREATE INDEX fki_login_favorite_form_id_fkey ON instance.login_favorite USING BTREE (form_id ASC NULLS LAST);
+
+ -- login options
+ CREATE TABLE IF NOT EXISTS instance.login_options (
+ login_id integer NOT NULL,
+ login_favorite_id uuid,
+ field_id uuid NOT NULL,
+ is_mobile boolean NOT NULL,
+ date_change bigint NOT NULL,
+ options text COLLATE pg_catalog."default" NOT NULL,
+ CONSTRAINT login_options_field_id_fkey FOREIGN KEY (field_id)
+ REFERENCES app.field (id) MATCH SIMPLE
+ ON UPDATE CASCADE
+ ON DELETE CASCADE
+ DEFERRABLE INITIALLY DEFERRED,
+ CONSTRAINT login_options_login_favorite_id_fkey FOREIGN KEY (login_favorite_id)
+ REFERENCES instance.login_favorite (id) MATCH SIMPLE
+ ON UPDATE CASCADE
+ ON DELETE CASCADE
+ DEFERRABLE INITIALLY DEFERRED
+ NOT VALID,
+ CONSTRAINT login_options_login_id_fkey FOREIGN KEY (login_id)
+ REFERENCES instance.login (id) MATCH SIMPLE
+ ON UPDATE CASCADE
+ ON DELETE CASCADE
+ DEFERRABLE INITIALLY DEFERRED
+ );
+ CREATE INDEX fki_login_options_login_id_fkey ON instance.login_options USING BTREE (login_id ASC NULLS LAST);
+ CREATE INDEX fki_login_options_login_favorite_id_fkey ON instance.login_options USING BTREE (login_favorite_id ASC NULLS LAST);
+ CREATE INDEX fki_login_options_field_id_fkey ON instance.login_options USING BTREE (field_id ASC NULLS LAST);
+ CREATE UNIQUE INDEX ind_login_options_unique ON instance.login_options USING BTREE (
+ login_id ASC NULLS LAST,
+ COALESCE(login_favorite_id,'00000000-0000-0000-0000-000000000000') ASC NULLS LAST,
+ field_id ASC NULLS LAST,
+ is_mobile ASC NULLS LAST
+ );
+
+ -- new login settings
+ CREATE TYPE instance.align_horizontal AS ENUM ('left', 'center', 'right');
+ ALTER TABLE instance.login_setting ADD COLUMN form_actions_align TEXT NOT NULL DEFAULT 'center';
+ ALTER TABLE instance.login_setting ALTER COLUMN form_actions_align DROP DEFAULT;
+
+ ALTER TABLE instance.login_setting ADD COLUMN shadows_inputs BOOLEAN NOT NULL DEFAULT TRUE;
+ ALTER TABLE instance.login_setting ALTER COLUMN shadows_inputs DROP DEFAULT;
+
+ -- remove login setting
+ ALTER TABLE instance.login_setting DROP COLUMN borders_all;
+
+ -- new login session function
+ ALTER TABLE app.module
+ ADD COLUMN js_function_id_on_login UUID,
+ ADD CONSTRAINT js_function_id_on_login_fkey FOREIGN KEY (js_function_id_on_login)
+ REFERENCES app.js_function (id) MATCH SIMPLE
+ ON UPDATE NO ACTION
+ ON DELETE NO ACTION
+ DEFERRABLE INITIALLY DEFERRED;
+
+ CREATE INDEX IF NOT EXISTS fki_js_function_id_on_login_fkey ON app.module USING btree (js_function_id_on_login ASC NULLS LAST);
+
+ -- file_unlink() instance function
+ CREATE OR REPLACE FUNCTION instance.file_unlink(
+ file_id uuid,
+ attribute_id uuid,
+ record_id bigint)
+ RETURNS void
+ LANGUAGE 'plpgsql'
+ VOLATILE PARALLEL UNSAFE
+ AS $BODY$
+ DECLARE
+ BEGIN
+ EXECUTE FORMAT(
+ 'DELETE FROM instance_file.%I
+ WHERE file_id = $1
+ AND record_id = $2',
+ CONCAT(attribute_id::TEXT, '_record')
+ ) USING file_id, record_id;
+ END;
+ $BODY$;
+
+ -- regex operators
+ ALTER TYPE app.condition_operator ADD VALUE '~';
+ ALTER TYPE app.condition_operator ADD VALUE '~*';
+ ALTER TYPE app.condition_operator ADD VALUE '!~';
+ ALTER TYPE app.condition_operator ADD VALUE '!~*';
+
+ -- form state effects for data handling
+ ALTER TABLE app.form_state_effect ADD COLUMN new_data SMALLINT NOT NULL DEFAULT 0;
+ ALTER TABLE app.form_state_effect ALTER COLUMN new_data DROP DEFAULT;
+
+ -- new display type
+ ALTER TYPE app.data_display ADD VALUE 'rating';
+
+ -- new column styles
+ ALTER TYPE app.column_style ADD VALUE 'previewLarge';
+ ALTER TYPE app.column_style ADD VALUE 'boolAtrIcon';
+ ALTER TYPE app.column_style ADD VALUE 'monospace';
+
+ -- default values for variables
+ ALTER TABLE app.variable ADD COLUMN def TEXT;
+
+ -- fix login foreign key
+ ALTER TABLE instance.login
+ DROP CONSTRAINT login_ldap_id_fkey,
+ ADD CONSTRAINT login_ldap_id_fkey FOREIGN KEY (ldap_id)
+ REFERENCES instance.ldap (id) MATCH SIMPLE
+ ON UPDATE NO ACTION
+ ON DELETE NO ACTION;
+
+ -- fix wrong data type for function argument
+ DROP FUNCTION instance.mail_delete_after_attach;
+ CREATE FUNCTION instance.mail_delete_after_attach(
+ mail_id integer,
+ attach_record_id bigint,
+ attach_attribute_id uuid)
+ RETURNS integer
+ LANGUAGE 'plpgsql'
+ AS $BODY$
+ DECLARE
+ BEGIN
+ UPDATE instance.mail_spool SET
+ record_id_wofk = attach_record_id,
+ attribute_id = attach_attribute_id
+ WHERE id = mail_id
+ AND outgoing = FALSE;
+
+ RETURN 0;
+ END;
+ $BODY$;
+ `)
+ return "3.10", err
+ },
+ "3.8": func(ctx context.Context, tx pgx.Tx) (string, error) {
+ _, err := tx.Exec(ctx, `
+ -- cleanup from last release
+ ALTER TABLE app.column
+ DROP COLUMN batch_vertical,
+ DROP COLUMN clipboard,
+ DROP COLUMN wrap;
+
+ ALTER TABLE app.column ALTER COLUMN styles
+ TYPE app.column_style[] USING styles::CHARACTER VARYING(12)[]::app.column_style[];
+
+ -- limited logins
+ ALTER TABLE instance.login DROP COLUMN date_auth_last;
+ ALTER TABLE instance.login ADD COLUMN limited BOOL NOT NULL DEFAULT FALSE;
+ ALTER TABLE instance.login ALTER COLUMN limited DROP DEFAULT;
+
+ UPDATE instance.login AS l
+ SET limited = TRUE
+ WHERE ((
+ SELECT COUNT(*)
+ FROM instance.login_role
+ WHERE login_id = l.id
+ ) < 2)
+ AND admin = FALSE
+ AND no_auth = FALSE;
+
+	-- new login session management
+ CREATE TYPE instance.login_session_device AS ENUM ('browser','fatClient');
+
+ CREATE TABLE IF NOT EXISTS instance.login_session (
+ id UUID NOT NULL,
+ device instance.login_session_device NOT NULL,
+ login_id INTEGER NOT NULL,
+ node_id UUID NOT NULL,
+ date BIGINT NOT NULL,
+ address TEXT NOT NULL,
+ CONSTRAINT login_session_pkey PRIMARY KEY (id),
+ CONSTRAINT login_session_login_id_fkey FOREIGN KEY (login_id)
+ REFERENCES instance.login (id) MATCH SIMPLE
+ ON UPDATE CASCADE
+ ON DELETE CASCADE
+ DEFERRABLE INITIALLY DEFERRED,
+ CONSTRAINT login_session_node_id_fkey FOREIGN KEY (node_id)
+ REFERENCES instance_cluster.node (id) MATCH SIMPLE
+ ON UPDATE CASCADE
+ ON DELETE CASCADE
+ DEFERRABLE INITIALLY DEFERRED
+ );
+ CREATE INDEX IF NOT EXISTS fki_login_session_login_id_fkey ON instance.login_session USING btree (login_id ASC NULLS LAST);
+ CREATE INDEX IF NOT EXISTS fki_login_session_node_id_fkey ON instance.login_session USING btree (node_id ASC NULLS LAST);
+ CREATE INDEX IF NOT EXISTS fki_login_session_date ON instance.login_session USING btree (date ASC NULLS LAST);
+
+ ALTER TABLE instance_cluster.node DROP COLUMN stat_sessions;
+
+ -- login sync
+ CREATE TABLE IF NOT EXISTS instance.login_meta (
+ login_id integer NOT NULL,
+ organization character varying(512) COLLATE pg_catalog."default",
+ location character varying(512) COLLATE pg_catalog."default",
+ department character varying(512) COLLATE pg_catalog."default",
+ email character varying(512) COLLATE pg_catalog."default",
+ phone_mobile character varying(512) COLLATE pg_catalog."default",
+ phone_landline character varying(512) COLLATE pg_catalog."default",
+ phone_fax character varying(512) COLLATE pg_catalog."default",
+ notes character varying(8196) COLLATE pg_catalog."default",
+ name_fore character varying(512) COLLATE pg_catalog."default",
+ name_sur character varying(512) COLLATE pg_catalog."default",
+ name_display character varying(512) COLLATE pg_catalog."default",
+ CONSTRAINT login_meta_pkey PRIMARY KEY (login_id),
+ CONSTRAINT login_meta_login_id_fkey FOREIGN KEY (login_id)
+ REFERENCES instance.login (id) MATCH SIMPLE
+ ON UPDATE CASCADE
+ ON DELETE CASCADE
+ DEFERRABLE INITIALLY DEFERRED
+ );
+
+ CREATE TYPE instance.user_data AS (
+ -- login
+ id INTEGER,
+ is_active BOOLEAN,
+ is_admin BOOLEAN,
+ is_limited BOOLEAN,
+ is_public BOOLEAN,
+ username character varying(128),
+
+ -- meta
+ department character varying(512),
+ email character varying(512),
+ location character varying(512),
+ name_display character varying(512),
+ name_fore character varying(512),
+ name_sur character varying(512),
+ notes character varying(8196),
+ organization character varying(512),
+ phone_fax character varying(512),
+ phone_landline character varying(512),
+ phone_mobile character varying(512)
+ );
+
+ ALTER TABLE app.module
+ ADD COLUMN pg_function_id_login_sync UUID,
+ ADD CONSTRAINT pg_function_id_login_sync_fkey FOREIGN KEY (pg_function_id_login_sync)
+ REFERENCES app.pg_function (id) MATCH SIMPLE
+ ON UPDATE NO ACTION
+ ON DELETE NO ACTION
+ DEFERRABLE INITIALLY DEFERRED;
+
+ CREATE INDEX IF NOT EXISTS fki_pg_function_id_login_sync_fkey ON app.module USING btree (pg_function_id_login_sync ASC NULLS LAST);
+
+ ALTER TABLE app.pg_function ADD COLUMN is_login_sync BOOL NOT NULL DEFAULT FALSE;
+ ALTER TABLE app.pg_function ALTER COLUMN is_login_sync DROP DEFAULT;
+
+ -- login sync LDAP attributes
+ CREATE TABLE IF NOT EXISTS instance.ldap_attribute_login_meta (
+ ldap_id integer NOT NULL,
+ department TEXT,
+ email TEXT,
+ location TEXT,
+ name_display TEXT,
+ name_fore TEXT,
+ name_sur TEXT,
+ notes TEXT,
+ organization TEXT,
+ phone_fax TEXT,
+ phone_landline TEXT,
+ phone_mobile TEXT,
+ CONSTRAINT ldap_attribute_login_meta_pkey PRIMARY KEY (ldap_id),
+ CONSTRAINT ldap_attribute_login_meta_ldap_id_fkey FOREIGN KEY (ldap_id)
+ REFERENCES instance.ldap (id) MATCH SIMPLE
+ ON UPDATE CASCADE
+ ON DELETE CASCADE
+ DEFERRABLE INITIALLY DEFERRED
+ );
+
+ -- login sync instance functions
+ CREATE OR REPLACE FUNCTION instance.user_sync(
+ _module_name TEXT,
+ _pg_function_name TEXT,
+ _login_id INTEGER,
+ _event TEXT)
+ RETURNS void
+ LANGUAGE 'plpgsql'
+ AS $BODY$
+ DECLARE
+ _d instance.user_data;
+ _rec RECORD;
+ _sql TEXT;
+ BEGIN
+ IF _event <> 'DELETED' AND _event <> 'UPDATED' THEN
+ RETURN;
+ END IF;
+
+ _sql := FORMAT('SELECT "%s"."%s"($1,$2)', _module_name, _pg_function_name);
+
+ FOR _rec IN (
+ SELECT
+ l.id,
+ l.name,
+ l.active,
+ l.admin,
+ l.limited,
+ l.no_auth,
+ m.department,
+ m.email,
+ m.location,
+ m.name_display,
+ m.name_fore,
+ m.name_sur,
+ m.notes,
+ m.organization,
+ m.phone_fax,
+ m.phone_mobile,
+ m.phone_landline
+ FROM instance.login AS l
+ LEFT JOIN instance.login_meta AS m ON m.login_id = l.id
+ WHERE _login_id IS NULL
+ OR _login_id = l.id
+ ) LOOP
+ -- login
+ _d.id := _rec.id;
+ _d.username := _rec.name;
+ _d.is_active := _rec.active;
+ _d.is_admin := _rec.admin;
+ _d.is_limited := _rec.limited;
+ _d.is_public := _rec.no_auth;
+
+ -- meta
+ _d.department := COALESCE(_rec.department, '');
+ _d.email := COALESCE(_rec.email, '');
+ _d.location := COALESCE(_rec.location, '');
+ _d.name_display := COALESCE(_rec.name_display, '');
+ _d.name_fore := COALESCE(_rec.name_fore, '');
+ _d.name_sur := COALESCE(_rec.name_sur, '');
+ _d.notes := COALESCE(_rec.notes, '');
+ _d.organization := COALESCE(_rec.organization, '');
+ _d.phone_fax := COALESCE(_rec.phone_fax, '');
+ _d.phone_mobile := COALESCE(_rec.phone_mobile, '');
+ _d.phone_landline := COALESCE(_rec.phone_landline, '');
+
+ EXECUTE _sql USING _event, _d;
+ END LOOP;
+ END;
+ $BODY$;
+
+ CREATE OR REPLACE FUNCTION instance.user_sync_all(_module_id UUID)
+ RETURNS integer
+ LANGUAGE 'plpgsql'
+ AS $BODY$
+ DECLARE
+ _module_name TEXT;
+ _pg_function_name TEXT;
+ BEGIN
+ -- resolve entity names
+ SELECT
+ m.name, (
+ SELECT name
+ FROM app.pg_function
+ WHERE module_id = m.id
+ AND id = m.pg_function_id_login_sync
+ )
+ INTO
+ _module_name,
+ _pg_function_name
+ FROM app.module AS m
+ WHERE m.id = _module_id;
+
+ IF _module_name IS NULL OR _pg_function_name IS NULL THEN
+ RETURN 1;
+ END IF;
+
+ PERFORM instance.user_sync(
+ _module_name,
+ _pg_function_name,
+ NULL,
+ 'UPDATED'
+ );
+ RETURN 0;
+ END;
+ $BODY$;
+
+ CREATE OR REPLACE FUNCTION instance.user_meta_set(
+ _login_id INTEGER,
+ _department TEXT,
+ _email TEXT,
+ _location TEXT,
+ _name_display TEXT,
+ _name_fore TEXT,
+ _name_sur TEXT,
+ _notes TEXT,
+ _organization TEXT,
+ _phone_fax TEXT,
+ _phone_landline TEXT,
+ _phone_mobile TEXT)
+ RETURNS integer
+ LANGUAGE 'plpgsql'
+ AS $BODY$
+ DECLARE
+ BEGIN
+ IF (
+ SELECT id
+ FROM instance.login
+ WHERE id = _login_id
+ ) IS NULL THEN
+ RETURN 1;
+ END IF;
+
+ IF (
+ SELECT login_id
+ FROM instance.login_meta
+ WHERE login_id = _login_id
+ ) IS NULL THEN
+ INSERT INTO instance.login_meta (
+ login_id,
+ department,
+ email,
+ location,
+ name_display,
+ name_fore,
+ name_sur,
+ notes,
+ organization,
+ phone_fax,
+ phone_landline,
+ phone_mobile
+ )
+ VALUES (
+ _login_id,
+ COALESCE(_department, ''),
+ COALESCE(_email, ''),
+ COALESCE(_location, ''),
+ COALESCE(_name_display, ''),
+ COALESCE(_name_fore, ''),
+ COALESCE(_name_sur, ''),
+ COALESCE(_notes, ''),
+ COALESCE(_organization, ''),
+ COALESCE(_phone_fax, ''),
+ COALESCE(_phone_landline, ''),
+ COALESCE(_phone_mobile, '')
+ );
+ ELSE
+ UPDATE instance.login_meta
+ SET
+ department = COALESCE(_department, ''),
+ email = COALESCE(_email, ''),
+ location = COALESCE(_location, ''),
+ name_display = COALESCE(_name_display, ''),
+ name_fore = COALESCE(_name_fore, ''),
+ name_sur = COALESCE(_name_sur, ''),
+ notes = COALESCE(_notes, ''),
+ organization = COALESCE(_organization, ''),
+ phone_fax = COALESCE(_phone_fax, ''),
+ phone_landline = COALESCE(_phone_landline, ''),
+ phone_mobile = COALESCE(_phone_mobile, '')
+ WHERE login_id = _login_id;
+ END IF;
+
+ RETURN 0;
+ END;
+ $BODY$;
+
+ -- rename all public interfaces from 'login' to 'user'
+ ALTER TYPE instance.file_meta ADD ATTRIBUTE user_id_creator INTEGER;
+
+ CREATE OR REPLACE FUNCTION instance.files_get(
+ attribute_id uuid,
+ record_id bigint,
+ include_deleted boolean DEFAULT false)
+ RETURNS instance.file_meta[]
+ LANGUAGE 'plpgsql'
+ STABLE PARALLEL UNSAFE
+ AS $BODY$
+ DECLARE
+ file instance.file_meta;
+ files instance.file_meta[];
+ rec RECORD;
+ BEGIN
+ FOR rec IN
+ EXECUTE FORMAT('
+ SELECT r.file_id, r.name, v.login_id, v.hash, v.version, v.size_kb, v.date_change, r.date_delete
+ FROM instance_file.%I AS r
+ JOIN instance.file_version AS v
+ ON v.file_id = r.file_id
+ AND v.version = (
+ SELECT MAX(s.version)
+ FROM instance.file_version AS s
+ WHERE s.file_id = r.file_id
+ )
+ WHERE r.record_id = $1
+ AND ($2 OR r.date_delete IS NULL)
+ ', CONCAT(attribute_id::TEXT,'_record')) USING record_id, include_deleted
+ LOOP
+ file.id := rec.file_id;
+ file.login_id_creator := rec.login_id; -- for calls 0")
@@ -297,6 +310,10 @@ func Handler(w http.ResponseWriter, r *http.Request) {
Offset: getters.offset,
}
+ if api.Query.FixedLimit != 0 && api.Query.FixedLimit < dataGet.Limit {
+ dataGet.Limit = api.Query.FixedLimit
+ }
+
// abort if requested limit exceeds max limit
	// better to abort: a smaller-than-requested result count might suggest the absence of more data
if api.LimitMax < dataGet.Limit {
@@ -319,18 +336,19 @@ func Handler(w http.ResponseWriter, r *http.Request) {
// build expressions from columns
for _, column := range api.Columns {
- dataGet.Expressions = append(dataGet.Expressions,
- data_query.ConvertColumnToExpression(column, loginId, languageCode))
+ dataGet.Expressions = append(dataGet.Expressions, data_query.ConvertColumnToExpression(
+ column, loginId, languageCode, getters.filters))
}
- // apply query filters
+ // apply filters
dataGet.Filters = data_query.ConvertQueryToDataFilter(
- api.Query.Filters, loginId, languageCode)
+ api.Query.Filters, loginId, languageCode, getters.filters)
// add record filter
if recordId != 0 {
dataGet.Filters = append(dataGet.Filters, types.DataGetFilter{
Connector: "AND",
+ Index: 0,
Operator: "=",
Side0: types.DataGetFilterSide{
AttributeId: pgtype.UUID{
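The two limit checks above interact: a fixed per-API limit silently caps the client's requested limit, while the configured maximum rejects the request outright, since silently returning fewer rows could be mistaken for the end of the data set. A standalone sketch of that resolution order, with illustrative names rather than the handler's actual types:

```go
package main

import (
	"errors"
	"fmt"
)

// resolveLimit applies a fixed per-API limit as a cap and treats the
// configured maximum as a hard rejection, mirroring the order used above.
func resolveLimit(requested, fixedLimit, limitMax int) (int, error) {
	limit := requested
	if fixedLimit != 0 && fixedLimit < limit {
		limit = fixedLimit // cap silently: the API defines this bound
	}
	if limitMax < limit {
		// reject instead of trimming: a smaller result set could wrongly
		// suggest that no further data exists
		return 0, errors.New("requested limit exceeds maximum limit")
	}
	return limit, nil
}

func main() {
	l, err := resolveLimit(500, 100, 1000)
	fmt.Println(l, err) // 100 <nil>

	_, err = resolveLimit(5000, 0, 1000)
	fmt.Println(err) // requested limit exceeds maximum limit
}
```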
diff --git a/handler/api_auth/api_auth.go b/handler/api_auth/api_auth.go
index 3f00df87..5d4ad647 100644
--- a/handler/api_auth/api_auth.go
+++ b/handler/api_auth/api_auth.go
@@ -1,18 +1,21 @@
package api_auth
import (
+ "context"
"encoding/json"
"errors"
"fmt"
"net/http"
"r3/bruteforce"
+ "r3/config"
"r3/handler"
"r3/login/login_auth"
+ "time"
"github.com/jackc/pgx/v5/pgtype"
)
-var context = "api_auth"
+var logContext = "api_auth"
func Handler(w http.ResponseWriter, r *http.Request) {
@@ -24,7 +27,7 @@ func Handler(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
if r.Method != "POST" {
- handler.AbortRequestWithCode(w, context, http.StatusBadRequest,
+ handler.AbortRequestWithCode(w, logContext, http.StatusBadRequest,
errors.New("invalid HTTP method"), "invalid HTTP method, allowed: POST")
return
@@ -36,22 +39,27 @@ func Handler(w http.ResponseWriter, r *http.Request) {
Password string `json:"password"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
- handler.AbortRequestWithCode(w, context, http.StatusBadRequest,
+ handler.AbortRequestWithCode(w, logContext, http.StatusBadRequest,
err, "request body malformed")
return
}
+ ctx, ctxCanc := context.WithTimeout(context.Background(),
+ time.Duration(int64(config.GetUint64("dbTimeoutDataRest")))*time.Second)
+
+ defer ctxCanc()
+
// authenticate requestor
var loginId int64
var isAdmin bool
var noAuth bool
- token, _, mfaTokens, err := login_auth.User(req.Username, req.Password,
+ _, token, _, mfaTokens, err := login_auth.User(ctx, req.Username, req.Password,
pgtype.Int4{}, pgtype.Text{}, &loginId, &isAdmin, &noAuth)
if err != nil {
- handler.AbortRequestWithCode(w, context, http.StatusUnauthorized,
+ handler.AbortRequestWithCode(w, logContext, http.StatusUnauthorized,
err, handler.ErrAuthFailed)
bruteforce.BadAttempt(r)
@@ -59,7 +67,7 @@ func Handler(w http.ResponseWriter, r *http.Request) {
}
if len(mfaTokens) != 0 {
- handler.AbortRequestWithCode(w, context, http.StatusBadRequest,
+ handler.AbortRequestWithCode(w, logContext, http.StatusBadRequest,
nil, "failed to authenticate, MFA is currently not supported")
return
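api_auth now derives a request-scoped context from the configured dbTimeoutDataRest value and passes it to login_auth.User; the same pattern (config-driven timeout plus deferred cancel) recurs in the handlers below. A hedged sketch of the idea in isolation, using a plain duration in place of the config store:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// withRequestTimeout mimics the handler pattern: derive a context that is
// cancelled automatically once the configured database timeout elapses.
func withRequestTimeout(timeoutSeconds uint64) (context.Context, context.CancelFunc) {
	return context.WithTimeout(context.Background(),
		time.Duration(int64(timeoutSeconds))*time.Second)
}

func main() {
	ctx, cancel := withRequestTimeout(2)
	defer cancel() // always release the timer, even on early returns

	// Any database or authentication call taking ctx is aborted when the
	// deadline passes, instead of blocking the HTTP handler indefinitely.
	select {
	case <-time.After(3 * time.Second):
		fmt.Println("work finished")
	case <-ctx.Done():
		fmt.Println("aborted:", ctx.Err()) // context deadline exceeded
	}
}
```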
diff --git a/handler/cache_download/cache_download.go b/handler/cache_download/cache_download.go
index 507a513a..d0670494 100644
--- a/handler/cache_download/cache_download.go
+++ b/handler/cache_download/cache_download.go
@@ -5,15 +5,39 @@ import (
"fmt"
"net/http"
"r3/cache"
+ "r3/handler"
"time"
)
+var (
+ handlerContext = "cache_download"
+)
+
func Handler(w http.ResponseWriter, r *http.Request) {
- w.Header().Set("Content-Type", "application/json")
+ // parse getters
+ moduleId, err := handler.ReadUuidGetterFromUrl(r, "module_id")
+ if err != nil {
+ handler.AbortRequest(w, handlerContext, err, handler.ErrGeneral)
+ return
+ }
+ dateChange, err := handler.ReadInt64GetterFromUrl(r, "date")
+ if err != nil {
+ handler.AbortRequest(w, handlerContext, err, handler.ErrGeneral)
+ return
+ }
+
+ // load JSON cache for requested module
+ json, err := cache.GetModuleCacheJson(moduleId)
+ if err != nil {
+ handler.AbortRequest(w, handlerContext, err, handler.ErrGeneral)
+ return
+ }
+
+ w.Header().Set("Content-Type", "application/json")
http.ServeContent(
w, r,
- fmt.Sprintf("schema_%d.json", cache.GetSchemaTimestamp()),
- time.Unix(cache.GetSchemaTimestamp(), 0),
- bytes.NewReader(cache.GetSchemaCacheJson()))
+ fmt.Sprintf("schema_%s_%d.json", moduleId, dateChange),
+ time.Unix(dateChange, 0),
+ bytes.NewReader(json))
}
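The rewritten cache handler serves one JSON cache per module and uses the module's change date as the modification time, so http.ServeContent can answer conditional requests with 304 Not Modified. A minimal sketch of that mechanism; the route, payload, and timestamp are placeholders (the real handler gets its bytes from cache.GetModuleCacheJson):

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

func main() {
	payload := []byte(`{"example":true}`)   // hypothetical module cache JSON
	dateChange := time.Unix(1738751256, 0) // hypothetical change date of the module

	http.HandleFunc("/cache", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		// ServeContent sets Last-Modified from dateChange and replies with
		// 304 Not Modified when the client sends a matching If-Modified-Since.
		http.ServeContent(w, r, "schema_example.json", dateChange, bytes.NewReader(payload))
	})

	fmt.Println("listening on :8080")
	http.ListenAndServe(":8080", nil)
}
```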
diff --git a/handler/client_download/client_download.go b/handler/client_download/client_download.go
index ae1d72ad..491c8b0b 100644
--- a/handler/client_download/client_download.go
+++ b/handler/client_download/client_download.go
@@ -1,14 +1,17 @@
package client_download
import (
+ "context"
"net/http"
"r3/bruteforce"
"r3/cache"
+ "r3/config"
"r3/handler"
"r3/login/login_auth"
+ "time"
)
-var context = "client_download"
+var logContext = "client_download"
func Handler(w http.ResponseWriter, r *http.Request) {
@@ -20,16 +23,21 @@ func Handler(w http.ResponseWriter, r *http.Request) {
// get authentication token
token, err := handler.ReadGetterFromUrl(r, "token")
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
+ ctx, ctxCanc := context.WithTimeout(context.Background(),
+ time.Duration(int64(config.GetUint64("dbTimeoutDataWs")))*time.Second)
+
+ defer ctxCanc()
+
// check token
var loginId int64
var admin bool
var noAuth bool
- if _, err := login_auth.Token(token, &loginId, &admin, &noAuth); err != nil {
- handler.AbortRequest(w, context, err, handler.ErrAuthFailed)
+ if _, _, err := login_auth.Token(ctx, token, &loginId, &admin, &noAuth); err != nil {
+ handler.AbortRequest(w, logContext, err, handler.ErrAuthFailed)
bruteforce.BadAttempt(r)
return
}
@@ -37,7 +45,7 @@ func Handler(w http.ResponseWriter, r *http.Request) {
// parse getters
requestedOs, err := handler.ReadGetterFromUrl(r, "os")
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
@@ -57,12 +65,12 @@ func Handler(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Disposition", "attachment; filename=r3_client.dmg")
_, err = w.Write(cache.Client_amd64_mac)
default:
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
}
diff --git a/handler/client_download/client_download_config.go b/handler/client_download/client_download_config.go
index 44d60b38..51ac4aa0 100644
--- a/handler/client_download/client_download_config.go
+++ b/handler/client_download/client_download_config.go
@@ -1,12 +1,14 @@
package client_download
import (
+ "context"
"encoding/json"
"net/http"
"r3/bruteforce"
"r3/config"
"r3/handler"
"r3/login/login_auth"
+ "time"
"github.com/gofrs/uuid"
)
@@ -21,16 +23,21 @@ func HandlerConfig(w http.ResponseWriter, r *http.Request) {
// get authentication token
token, err := handler.ReadGetterFromUrl(r, "token")
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
+ ctx, ctxCanc := context.WithTimeout(context.Background(),
+ time.Duration(int64(config.GetUint64("dbTimeoutDataWs")))*time.Second)
+
+ defer ctxCanc()
+
// check token
var loginId int64
var admin bool
var noAuth bool
- if _, err := login_auth.Token(token, &loginId, &admin, &noAuth); err != nil {
- handler.AbortRequest(w, context, err, handler.ErrAuthFailed)
+ if _, _, err := login_auth.Token(ctx, token, &loginId, &admin, &noAuth); err != nil {
+ handler.AbortRequest(w, logContext, err, handler.ErrAuthFailed)
bruteforce.BadAttempt(r)
return
}
@@ -38,32 +45,32 @@ func HandlerConfig(w http.ResponseWriter, r *http.Request) {
// parse getters
tokenFixed, err := handler.ReadGetterFromUrl(r, "tokenFixed")
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
hostName, err := handler.ReadGetterFromUrl(r, "hostName")
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
hostPort, err := handler.ReadInt64GetterFromUrl(r, "hostPort")
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
languageCode, err := handler.ReadGetterFromUrl(r, "languageCode")
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
deviceName, err := handler.ReadGetterFromUrl(r, "deviceName")
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
ssl, err := handler.ReadInt64GetterFromUrl(r, "ssl")
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
@@ -109,12 +116,12 @@ func HandlerConfig(w http.ResponseWriter, r *http.Request) {
fJson, err := json.MarshalIndent(f, "", "\t")
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
if _, err := w.Write(fJson); err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
}
diff --git a/handler/csv_download/csv_download.go b/handler/csv_download/csv_download.go
index c924ddaf..19aa6c53 100644
--- a/handler/csv_download/csv_download.go
+++ b/handler/csv_download/csv_download.go
@@ -23,6 +23,7 @@ import (
"unicode/utf8"
"github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5/pgtype"
)
var (
@@ -155,11 +156,17 @@ func Handler(w http.ResponseWriter, r *http.Request) {
return
}
+ ctx, ctxCanc := context.WithTimeout(context.Background(),
+ time.Duration(int64(config.GetUint64("dbTimeoutCsv")))*time.Second)
+
+ defer ctxCanc()
+
// check token
var loginId int64
var admin bool
var noAuth bool
- if _, err := login_auth.Token(token, &loginId, &admin, &noAuth); err != nil {
+ _, languageCode, err := login_auth.Token(ctx, token, &loginId, &admin, &noAuth)
+ if err != nil {
handler.AbortRequest(w, handlerContext, err, handler.ErrUnauthorized)
bruteforce.BadAttempt(r)
return
@@ -199,14 +206,28 @@ func Handler(w http.ResponseWriter, r *http.Request) {
continue
}
+ // choose best caption for header
+ columnNames[i] = getCaption(columns[i].Captions, "columnTitle", languageCode)
+ if columnNames[i] != "" {
+ continue
+ }
+
+ // fallback to attribute title
atr, exists := cache.AttributeIdMap[expr.AttributeId.Bytes]
if !exists {
- handler.AbortRequest(w, handlerContext, errors.New("unknown attribute"), handler.ErrGeneral)
+ handler.AbortRequest(w, handlerContext, handler.ErrSchemaUnknownAttribute(expr.AttributeId.Bytes), handler.ErrGeneral)
return
}
+
+ columnNames[i] = getCaption(atr.Captions, "attributeTitle", languageCode)
+ if columnNames[i] != "" {
+ continue
+ }
+
+ // fallback to attribute + relation name
rel, exists := cache.RelationIdMap[atr.RelationId]
if !exists {
- handler.AbortRequest(w, handlerContext, errors.New("unknown relation"), handler.ErrGeneral)
+ handler.AbortRequest(w, handlerContext, handler.ErrSchemaUnknownRelation(atr.RelationId), handler.ErrGeneral)
return
}
columnNames[i] = rel.Name + "." + atr.Name
@@ -218,13 +239,9 @@ func Handler(w http.ResponseWriter, r *http.Request) {
}
// configure and execute GET data request
+ get.Limit = 0
get.Offset = 0
- if err != nil {
- handler.AbortRequest(w, handlerContext, err, handler.ErrGeneral)
- return
- }
- get.Limit = 10000 // at most 10000 lines per request
- if totalLimit != 0 && totalLimit < get.Limit {
+ if totalLimit != 0 {
get.Limit = totalLimit
}
@@ -249,7 +266,7 @@ func Handler(w http.ResponseWriter, r *http.Request) {
}
for {
- total, err := dataToCsv(writer, get, locUser, boolTrue, boolFalse,
+ total, err := dataToCsv(ctx, writer, get, locUser, boolTrue, boolFalse,
dateFormat, columnAttributeContentUse, loginId)
if err != nil {
@@ -265,6 +282,10 @@ func Handler(w http.ResponseWriter, r *http.Request) {
}
writer.Flush()
+ if err := writer.Error(); err != nil {
+ handler.AbortRequest(w, handlerContext, err, handler.ErrGeneral)
+ return
+ }
if err := file.Close(); err != nil {
handler.AbortRequest(w, handlerContext, err, handler.ErrGeneral)
return
@@ -275,14 +296,8 @@ func Handler(w http.ResponseWriter, r *http.Request) {
os.Remove(filePath)
}
-func dataToCsv(writer *csv.Writer, get types.DataGet, locUser *time.Location,
- boolTrue string, boolFalse string, dateFormat string,
- columnAttributeContentUse []string, loginId int64) (int, error) {
-
- ctx, ctxCancel := context.WithTimeout(context.Background(),
- time.Duration(int64(config.GetUint64("dbTimeoutCsv")))*time.Second)
-
- defer ctxCancel()
+func dataToCsv(ctx context.Context, writer *csv.Writer, get types.DataGet, locUser *time.Location, boolTrue string,
+ boolFalse string, dateFormat string, columnAttributeContentUse []string, loginId int64) (int, error) {
tx, err := db.Pool.Begin(ctx)
if err != nil {
@@ -290,6 +305,10 @@ func dataToCsv(writer *csv.Writer, get types.DataGet, locUser *time.Location,
}
defer tx.Rollback(ctx)
+ if err := db.SetSessionConfig_tx(ctx, tx, loginId); err != nil {
+ return 0, err
+ }
+
var query string
rows, total, err := data.Get_tx(ctx, tx, get, loginId, &query)
if err != nil {
@@ -351,6 +370,12 @@ func dataToCsv(writer *csv.Writer, get types.DataGet, locUser *time.Location,
stringValues[pos] = parseIntegerValues(columnAttributeContentUse[pos], int64(v))
case int64:
stringValues[pos] = parseIntegerValues(columnAttributeContentUse[pos], v)
+ case pgtype.Numeric:
+ b, err := json.Marshal(v)
+ if err != nil {
+ return 0, err
+ }
+ stringValues[pos] = string(b)
default:
stringValues[pos] = fmt.Sprintf("%v", value)
}
@@ -362,3 +387,15 @@ func dataToCsv(writer *csv.Writer, get types.DataGet, locUser *time.Location,
}
return total, nil
}
+
+func getCaption(captionMap map[string]map[string]string, contentName string, languageCode string) string {
+ content, exists := captionMap[contentName]
+ if !exists {
+ return ""
+ }
+ value, exists := content[languageCode]
+ if !exists {
+ return ""
+ }
+ return value
+}
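getCaption walks the nested caption map (content key, then language code) and returns an empty string on any miss, which is what drives the header fallback chain above: column caption, then attribute caption, then relation.attribute name. A small usage sketch with made-up captions and language codes:

```go
package main

import "fmt"

// getCaption mirrors the helper added above: resolve a caption for one
// content key and language, or return "" so the caller can fall back.
func getCaption(captionMap map[string]map[string]string, contentName string, languageCode string) string {
	content, exists := captionMap[contentName]
	if !exists {
		return ""
	}
	value, exists := content[languageCode]
	if !exists {
		return ""
	}
	return value
}

func main() {
	captions := map[string]map[string]string{
		"columnTitle": {"en_us": "Order date", "de_de": "Bestelldatum"},
	}
	fmt.Println(getCaption(captions, "columnTitle", "de_de"))    // Bestelldatum
	fmt.Println(getCaption(captions, "attributeTitle", "en_us")) // "" -> caller falls back
}
```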
diff --git a/handler/csv_upload/csv_upload.go b/handler/csv_upload/csv_upload.go
index 647784d5..076a47ec 100644
--- a/handler/csv_upload/csv_upload.go
+++ b/handler/csv_upload/csv_upload.go
@@ -19,7 +19,6 @@ import (
"r3/login/login_auth"
"r3/tools"
"r3/types"
- "regexp"
"strconv"
"time"
"unicode/utf8"
@@ -28,43 +27,7 @@ import (
"github.com/jackc/pgx/v5"
)
-var (
- expectedErrorRx []*regexp.Regexp
- handlerContext = "csv_upload"
-)
-
-func init() {
- var regex *regexp.Regexp
-
- // CSV wrong number of fields
- regex, _ = regexp.Compile(`wrong number of fields`)
- expectedErrorRx = append(expectedErrorRx, regex)
-
- // number parse error
- regex, _ = regexp.Compile(`failed to parse number`)
- expectedErrorRx = append(expectedErrorRx, regex)
-
- // date parse error
- regex, _ = regexp.Compile(`failed to parse date`)
- expectedErrorRx = append(expectedErrorRx, regex)
-
- // database, not null violation
- regex, _ = regexp.Compile(`^ERROR\: null value in column`)
- expectedErrorRx = append(expectedErrorRx, regex)
-
- // database, invalid syntax for type
- regex, _ = regexp.Compile(`^ERROR\: invalid input syntax for type`)
- expectedErrorRx = append(expectedErrorRx, regex)
-}
-
-func isExpectedError(err error) bool {
- for _, regex := range expectedErrorRx {
- if regex.MatchString(err.Error()) {
- return true
- }
- }
- return false
-}
+var handlerContext = "csv_upload"
func Handler(w http.ResponseWriter, r *http.Request) {
@@ -139,11 +102,16 @@ func Handler(w http.ResponseWriter, r *http.Request) {
continue
}
+ ctx, ctxCanc := context.WithTimeout(context.Background(),
+ time.Duration(int64(config.GetUint64("dbTimeoutCsv")))*time.Second)
+
+ defer ctxCanc()
+
// check token
var loginId int64
var admin bool
var noAuth bool
- if _, err := login_auth.Token(token, &loginId, &admin, &noAuth); err != nil {
+ if _, _, err := login_auth.Token(ctx, token, &loginId, &admin, &noAuth); err != nil {
handler.AbortRequest(w, handlerContext, err, handler.ErrUnauthorized)
bruteforce.BadAttempt(r)
return
@@ -174,7 +142,7 @@ func Handler(w http.ResponseWriter, r *http.Request) {
}
// read file
- res.Count, err = importFromCsv(filePath, loginId, boolTrue, dateFormat,
+ res.Count, err = importFromCsv(ctx, filePath, loginId, boolTrue, dateFormat,
timezone, commaChar, ignoreHeader, columns, joins, lookups)
if err != nil {
@@ -197,10 +165,9 @@ func Handler(w http.ResponseWriter, r *http.Request) {
// import all lines from CSV, optionally skipping a header line
// returns the line number it reached
-func importFromCsv(filePath string, loginId int64, boolTrue string,
- dateFormat string, timezone string, commaChar string, ignoreHeader bool,
- columns []types.Column, joins []types.QueryJoin,
- lookups []types.QueryLookup) (int, error) {
+func importFromCsv(ctx context.Context, filePath string, loginId int64, boolTrue string,
+ dateFormat string, timezone string, commaChar string, ignoreHeader bool, columns []types.Column,
+ joins []types.QueryJoin, lookups []types.QueryLookup) (int, error) {
log.Info("csv", fmt.Sprintf("starts import from file '%s' via upload", filePath))
@@ -210,17 +177,16 @@ func importFromCsv(filePath string, loginId int64, boolTrue string,
}
defer file.Close()
- ctx, ctxCancel := context.WithTimeout(context.Background(),
- time.Duration(int64(config.GetUint64("dbTimeoutCsv")))*time.Second)
-
- defer ctxCancel()
-
tx, err := db.Pool.Begin(ctx)
if err != nil {
return 0, err
}
defer tx.Rollback(ctx)
+ if err := db.SetSessionConfig_tx(ctx, tx, loginId); err != nil {
+ return 0, err
+ }
+
// parse CSV file
reader := csv.NewReader(file)
reader.Comma, _ = utf8.DecodeRuneInString(commaChar)
@@ -288,10 +254,10 @@ func importLine_tx(ctx context.Context, tx pgx.Tx, loginId int64,
atr, exists := cache.AttributeIdMap[column.AttributeId]
if !exists {
- return handler.CreateErrCode("APP", handler.ErrCodeAppUnknownAttribute)
+ return handler.CreateErrCode(handler.ErrContextApp, handler.ErrCodeAppUnknownAttribute)
}
if atr.Encrypted {
- return handler.CreateErrCode("CSV", handler.ErrCodeCsvEncryptedAttribute)
+ return handler.CreateErrCode(handler.ErrContextCsv, handler.ErrCodeCsvEncryptedAttribute)
}
if valuesString[i] == "" {
@@ -331,9 +297,10 @@ func importLine_tx(ctx context.Context, tx pgx.Tx, loginId int64,
t, err := time.ParseInLocation(format, valuesString[i], loc)
if err != nil {
- return handler.CreateErrCodeWithArgs("CSV",
- handler.ErrCodeCsvParseDateTime,
- map[string]string{"VALUE": valuesString[i], "EXPECT": format})
+ return handler.CreateErrCodeWithData(handler.ErrContextCsv, handler.ErrCodeCsvParseDateTime, struct {
+ Expect string `json:"expect"`
+ Value string `json:"value"`
+ }{format, valuesString[i]})
}
valuesIn[i] = t.Unix()
@@ -344,27 +311,27 @@ func importLine_tx(ctx context.Context, tx pgx.Tx, loginId int64,
fmt.Sprintf("1970-01-01 %s UTC", valuesString[i]))
if err != nil {
- return handler.CreateErrCodeWithArgs("CSV",
- handler.ErrCodeCsvParseDateTime,
- map[string]string{"VALUE": valuesString[i], "EXPECT": "15:04:05"})
+ return handler.CreateErrCodeWithData(handler.ErrContextCsv, handler.ErrCodeCsvParseDateTime, struct {
+ Expect string `json:"expect"`
+ Value string `json:"value"`
+ }{"15:04:05", valuesString[i]})
}
valuesIn[i] = t.Unix()
default:
valuesIn[i], err = strconv.ParseInt(valuesString[i], 10, 64)
if err != nil {
- return handler.CreateErrCodeWithArgs("CSV",
- handler.ErrCodeCsvParseInt,
- map[string]string{"VALUE": valuesString[i]})
+ return handler.CreateErrCodeWithData(handler.ErrContextCsv, handler.ErrCodeCsvParseInt, struct {
+ Value string `json:"value"`
+ }{valuesString[i]})
}
}
case "real", "double precision":
valuesIn[i], err = strconv.ParseFloat(valuesString[i], 64)
if err != nil {
- return handler.CreateErrCodeWithArgs("CSV",
- handler.ErrCodeCsvParseFloat,
- map[string]string{"VALUE": valuesString[i]})
-
+ return handler.CreateErrCodeWithData(handler.ErrContextCsv, handler.ErrCodeCsvParseFloat, struct {
+ Value string `json:"value"`
+ }{valuesString[i]})
}
// numeric must be handled as text as conversion to float is not 1:1
@@ -375,9 +342,9 @@ func importLine_tx(ctx context.Context, tx pgx.Tx, loginId int64,
valuesIn[i] = valuesString[i] == boolTrue
case "default":
- return handler.CreateErrCodeWithArgs("CSV",
- handler.ErrCodeCsvBadAttributeType,
- map[string]string{"TYPE": atr.Content})
+ return handler.CreateErrCodeWithData(handler.ErrContextCsv, handler.ErrCodeCsvBadAttributeType, struct {
+ Value string `json:"value"`
+ }{atr.Content})
}
}
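csv_upload now reports parse failures through CreateErrCodeWithData using small anonymous structs instead of string maps, so the frontend receives typed JSON fields. A standalone sketch of what such a payload marshals to; the struct shape is copied from the hunk above, while the surrounding error envelope is an assumption:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// same anonymous struct shape used for ErrCodeCsvParseDateTime above
	data := struct {
		Expect string `json:"expect"`
		Value  string `json:"value"`
	}{"2006-01-02", "31.12.2024"}

	b, err := json.Marshal(data)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // {"expect":"2006-01-02","value":"31.12.2024"}
}
```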
diff --git a/handler/data_access/data_access.go b/handler/data_access/data_access.go
index 700681a3..f456a060 100644
--- a/handler/data_access/data_access.go
+++ b/handler/data_access/data_access.go
@@ -13,7 +13,8 @@ import (
"r3/log"
"r3/login/login_auth"
"r3/request"
- "r3/tools"
+ "r3/types"
+ "slices"
"time"
)
@@ -51,29 +52,29 @@ func Handler(w http.ResponseWriter, r *http.Request) {
return
}
- if !tools.StringInSlice(req.Action, allowedActions) {
+ if !slices.Contains(allowedActions, req.Action) {
handler.AbortRequest(w, handlerContext, errors.New("invalid action"),
"invalid action, allowed: del, get, set")
return
}
+ ctx, ctxCanc := context.WithTimeout(context.Background(),
+ time.Duration(int64(config.GetUint64("dbTimeoutDataRest")))*time.Second)
+
+ defer ctxCanc()
+
// authenticate requestor
var loginId int64
var isAdmin bool
var noAuth bool
- if _, err := login_auth.Token(req.Token, &loginId, &isAdmin, &noAuth); err != nil {
+ if _, _, err := login_auth.Token(ctx, req.Token, &loginId, &isAdmin, &noAuth); err != nil {
handler.AbortRequest(w, handlerContext, err, handler.ErrAuthFailed)
bruteforce.BadAttempt(r)
return
}
// execute request
- ctx, ctxCancel := context.WithTimeout(context.Background(),
- time.Duration(int64(config.GetUint64("dbTimeoutDataRest")))*time.Second)
-
- defer ctxCancel()
-
tx, err := db.Pool.Begin(ctx)
if err != nil {
handler.AbortRequest(w, handlerContext, err, handler.ErrGeneral)
@@ -83,7 +84,9 @@ func Handler(w http.ResponseWriter, r *http.Request) {
log.Info("server", fmt.Sprintf("DIRECT ACCESS, %s data, payload: %s", req.Action, req.Request))
- res, err := request.Exec_tx(ctx, tx, loginId, isAdmin, noAuth, "data", req.Action, req.Request)
+ res, err := request.Exec_tx(ctx, tx, "", loginId, isAdmin,
+ types.WebsocketClientDeviceBrowser, noAuth, "data", req.Action, req.Request)
+
if err != nil {
handler.AbortRequest(w, handlerContext, err, handler.ErrGeneral)
return
diff --git a/handler/data_auth/data_auth.go b/handler/data_auth/data_auth.go
index c5673790..ce335038 100644
--- a/handler/data_auth/data_auth.go
+++ b/handler/data_auth/data_auth.go
@@ -1,18 +1,21 @@
package data_auth
import (
+ "context"
"encoding/json"
"errors"
"fmt"
"net/http"
"r3/bruteforce"
+ "r3/config"
"r3/handler"
"r3/login/login_auth"
+ "time"
"github.com/jackc/pgx/v5/pgtype"
)
-var context = "data_auth"
+var logContext = "data_auth"
func Handler(w http.ResponseWriter, r *http.Request) {
@@ -24,7 +27,7 @@ func Handler(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
if r.Method != "POST" {
- handler.AbortRequest(w, context, errors.New("invalid HTTP method"),
+ handler.AbortRequest(w, logContext, errors.New("invalid HTTP method"),
"invalid HTTP method, allowed: POST")
return
@@ -36,10 +39,15 @@ func Handler(w http.ResponseWriter, r *http.Request) {
Password string `json:"password"`
}
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
- handler.AbortRequest(w, context, err, "request body malformed")
+ handler.AbortRequest(w, logContext, err, "request body malformed")
return
}
+ ctx, ctxCanc := context.WithTimeout(context.Background(),
+ time.Duration(int64(config.GetUint64("dbTimeoutDataWs")))*time.Second)
+
+ defer ctxCanc()
+
// authenticate requestor
var loginId int64
var isAdmin bool
@@ -47,11 +55,11 @@ func Handler(w http.ResponseWriter, r *http.Request) {
var mfaTokenId = pgtype.Int4{}
var mfaTokenPin = pgtype.Text{}
- token, _, _, err := login_auth.User(req.Username, req.Password,
+ _, token, _, _, err := login_auth.User(ctx, req.Username, req.Password,
mfaTokenId, mfaTokenPin, &loginId, &isAdmin, &noAuth)
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrAuthFailed)
+ handler.AbortRequest(w, logContext, err, handler.ErrAuthFailed)
bruteforce.BadAttempt(r)
return
}
diff --git a/handler/data_download/data_download.go b/handler/data_download/data_download.go
index 8c516291..c6653c90 100644
--- a/handler/data_download/data_download.go
+++ b/handler/data_download/data_download.go
@@ -1,17 +1,20 @@
package data_download
import (
+ "context"
"mime"
"net/http"
"path"
"path/filepath"
"r3/bruteforce"
+ "r3/config"
"r3/data"
"r3/handler"
"r3/login/login_auth"
+ "time"
)
-var context = "data_download"
+var logContext = "data_download"
func Handler(w http.ResponseWriter, r *http.Request) {
@@ -23,16 +26,21 @@ func Handler(w http.ResponseWriter, r *http.Request) {
// get authentication token
token, err := handler.ReadGetterFromUrl(r, "token")
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
+ ctx, ctxCanc := context.WithTimeout(context.Background(),
+ time.Duration(int64(config.GetUint64("dbTimeoutDataWs")))*time.Second)
+
+ defer ctxCanc()
+
// check token, any login is generally allowed to attempt a download
var loginId int64
var admin bool
var noAuth bool
- if _, err := login_auth.Token(token, &loginId, &admin, &noAuth); err != nil {
- handler.AbortRequest(w, context, err, handler.ErrAuthFailed)
+ if _, _, err := login_auth.Token(ctx, token, &loginId, &admin, &noAuth); err != nil {
+ handler.AbortRequest(w, logContext, err, handler.ErrAuthFailed)
bruteforce.BadAttempt(r)
return
}
@@ -40,18 +48,18 @@ func Handler(w http.ResponseWriter, r *http.Request) {
// parse other getters
attributeId, err := handler.ReadUuidGetterFromUrl(r, "attribute_id")
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
fileId, err := handler.ReadUuidGetterFromUrl(r, "file_id")
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
// check file access privilege
if err := data.MayAccessFile(loginId, attributeId); err != nil {
- handler.AbortRequest(w, context, err, handler.ErrUnauthorized)
+ handler.AbortRequest(w, logContext, err, handler.ErrUnauthorized)
return
}
@@ -64,7 +72,7 @@ func Handler(w http.ResponseWriter, r *http.Request) {
if version == -1 {
version, err = data.FileGetLatestVersion(fileId)
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
}
diff --git a/handler/data_download_thumb/data_download_thumb.go b/handler/data_download_thumb/data_download_thumb.go
index 82f56a15..312e1c4d 100644
--- a/handler/data_download_thumb/data_download_thumb.go
+++ b/handler/data_download_thumb/data_download_thumb.go
@@ -1,19 +1,21 @@
package data_download_thumb
import (
- "fmt"
+ "context"
"net/http"
"os"
"path/filepath"
"r3/bruteforce"
+ "r3/config"
"r3/data"
+ "r3/data/data_image"
"r3/handler"
- "r3/image"
"r3/login/login_auth"
"strings"
+ "time"
)
-var context = "data_download_thumb"
+var logContext = "data_download_thumb"
func Handler(w http.ResponseWriter, r *http.Request) {
@@ -23,7 +25,7 @@ func Handler(w http.ResponseWriter, r *http.Request) {
}
// check if thumbnail processing is available
- if !image.GetCanProcess() {
+ if !data_image.GetCanProcess() {
w.Write(handler.NoImage)
return
}
@@ -31,16 +33,21 @@ func Handler(w http.ResponseWriter, r *http.Request) {
// get authentication token
token, err := handler.ReadGetterFromUrl(r, "token")
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
+ ctx, ctxCanc := context.WithTimeout(context.Background(),
+ time.Duration(int64(config.GetUint64("dbTimeoutDataWs")))*time.Second)
+
+ defer ctxCanc()
+
// check token, any login is generally allowed to attempt a download
var loginId int64
var admin bool
var noAuth bool
- if _, err := login_auth.Token(token, &loginId, &admin, &noAuth); err != nil {
- handler.AbortRequest(w, context, err, handler.ErrAuthFailed)
+ if _, _, err := login_auth.Token(ctx, token, &loginId, &admin, &noAuth); err != nil {
+ handler.AbortRequest(w, logContext, err, handler.ErrAuthFailed)
bruteforce.BadAttempt(r)
return
}
@@ -48,19 +55,18 @@ func Handler(w http.ResponseWriter, r *http.Request) {
// parse other getters
attributeId, err := handler.ReadUuidGetterFromUrl(r, "attribute_id")
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
fileId, err := handler.ReadUuidGetterFromUrl(r, "file_id")
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
// check file access privilege
if err := data.MayAccessFile(loginId, attributeId); err != nil {
- fmt.Println("bad access")
- handler.AbortRequest(w, context, err, handler.ErrUnauthorized)
+ handler.AbortRequest(w, logContext, err, handler.ErrUnauthorized)
return
}
@@ -69,7 +75,7 @@ func Handler(w http.ResponseWriter, r *http.Request) {
_, err = os.Stat(filePath)
if err != nil && !os.IsNotExist(err) {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
@@ -80,12 +86,12 @@ func Handler(w http.ResponseWriter, r *http.Request) {
version, err := data.FileGetLatestVersion(fileId)
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
filePathSrc := data.GetFilePathVersion(fileId, version)
- if err := image.CreateThumbnail(fileId, fileExt, filePathSrc, filePath, true); err != nil {
+ if err := data_image.CreateThumbnail(fileId, fileExt, filePathSrc, filePath, true); err != nil {
w.Write(handler.NoImage)
return
}
diff --git a/handler/data_upload/data_upload.go b/handler/data_upload/data_upload.go
index d5fe53c5..8623c0ae 100644
--- a/handler/data_upload/data_upload.go
+++ b/handler/data_upload/data_upload.go
@@ -2,18 +2,21 @@ package data_upload
import (
"bytes"
+ "context"
"encoding/json"
"io"
"net/http"
"r3/bruteforce"
+ "r3/config"
"r3/data"
"r3/handler"
"r3/login/login_auth"
+ "time"
"github.com/gofrs/uuid"
)
-var context = "data_upload"
+var logContext = "data_upload"
func Handler(w http.ResponseWriter, r *http.Request) {
@@ -26,7 +29,7 @@ func Handler(w http.ResponseWriter, r *http.Request) {
reader, err := r.MultipartReader()
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
@@ -63,12 +66,17 @@ func Handler(w http.ResponseWriter, r *http.Request) {
continue
}
+ ctx, ctxCanc := context.WithTimeout(context.Background(),
+ time.Duration(int64(config.GetUint64("dbTimeoutDataWs")))*time.Second)
+
+ defer ctxCanc()
+
// check token, any login is allowed to attempt upload
var loginId int64
var admin bool
var noAuth bool
- if _, err := login_auth.Token(token, &loginId, &admin, &noAuth); err != nil {
- handler.AbortRequest(w, context, err, handler.ErrAuthFailed)
+ if _, _, err := login_auth.Token(ctx, token, &loginId, &admin, &noAuth); err != nil {
+ handler.AbortRequest(w, logContext, err, handler.ErrAuthFailed)
bruteforce.BadAttempt(r)
return
}
@@ -76,14 +84,14 @@ func Handler(w http.ResponseWriter, r *http.Request) {
// parse attribute ID
attributeId, err := uuid.FromString(attributeIdString)
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
// parse file ID
fileId, err := uuid.FromString(fileIdString)
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
@@ -92,13 +100,13 @@ func Handler(w http.ResponseWriter, r *http.Request) {
if isNewFile {
fileId, err = uuid.NewV4()
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
}
- if err := data.SetFile(loginId, attributeId, fileId, part, isNewFile); err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ if err := data.SetFile(ctx, loginId, attributeId, fileId, part, isNewFile); err != nil {
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
response.Id = fileId
@@ -106,7 +114,7 @@ func Handler(w http.ResponseWriter, r *http.Request) {
responseJson, err := json.Marshal(response)
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
w.Write(responseJson)
diff --git a/handler/handler_error.go b/handler/handler_error.go
index 10ec19ad..5fd76703 100644
--- a/handler/handler_error.go
+++ b/handler/handler_error.go
@@ -1,17 +1,21 @@
package handler
import (
+ "encoding/json"
"errors"
"fmt"
- "r3/tools"
"regexp"
+ "slices"
+ "strings"
"github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5/pgconn"
)
type errExpected struct {
- convertFn func(err error) error // function that translates known error message to error code
- matchRx *regexp.Regexp // regex that matches the expected error message
+ context string
+ matchRx *regexp.Regexp // regex that matches the expected error message
+ number int
}
const (
@@ -19,8 +23,15 @@ const (
ErrAuthFailed = "authentication failed"
ErrBruteforceBlock = "blocked assumed bruteforce attempt"
ErrGeneral = "general error"
- ErrWsClientChanFull = "client channel is full, dropping response"
ErrUnauthorized = "unauthorized"
+ ErrWsClientChanFull = "client channel is full, dropping response"
+
+ // error contexts
+ ErrContextApp = "APP"
+ ErrContextCsv = "CSV"
+ ErrContextDbs = "DBS"
+ ErrContextLic = "LIC"
+ ErrContextSec = "SEC"
// error codes
ErrCodeAppUnknown int = 1
@@ -43,8 +54,11 @@ const (
ErrCodeDbsConstraintUniqueLogin int = 3
ErrCodeDbsConstraintFk int = 4
ErrCodeDbsConstraintNotNull int = 5
- ErrCodeDbsIndexFailUnique int = 6
+ ErrCodeDbsIndexFailUnique int = 6 // special: applied on the frontend only, when ErrCodeDbsConstraintUnique is returned but the index ID is unknown
ErrCodeDbsInvalidTypeSyntax int = 7
+ ErrCodeDbsChangedCachePlan int = 8
+ ErrCodeLicValidityExpired int = 1
+ ErrCodeLicLoginsReached int = 2
ErrCodeSecUnauthorized int = 1
ErrCodeSecDataKeysNotAvailable int = 5
ErrCodeSecNoPublicKeys int = 6
@@ -52,130 +66,60 @@ const (
var (
// errors
- errContexts = []string{"APP", "CSV", "DBS", "SEC"}
+ errContexts = []string{ErrContextApp, ErrContextCsv, ErrContextDbs, ErrContextLic, ErrContextSec}
+ errCodeDbsCache = regexp.MustCompile(fmt.Sprintf("^{ERR_DBS_%03d}", ErrCodeDbsChangedCachePlan))
+ errCodeLicRx = regexp.MustCompile(`^{ERR_LIC_(\d{3})}`)
errCodeRx = regexp.MustCompile(`^{ERR_([A-Z]{3})_(\d{3})}`)
errExpectedList = []errExpected{
// security/access
- errExpected{ // unauthorized
- convertFn: func(err error) error { return CreateErrCode("SEC", ErrCodeSecUnauthorized) },
- matchRx: regexp.MustCompile(fmt.Sprintf(`^%s$`, ErrUnauthorized)),
+ { // unauthorized
+ context: ErrContextSec,
+ matchRx: regexp.MustCompile(fmt.Sprintf(`^%s$`, ErrUnauthorized)),
+ number: ErrCodeSecUnauthorized,
},
// application
- errExpected{ // context deadline reached
- convertFn: func(err error) error { return CreateErrCode("APP", ErrCodeAppContextExceeded) },
- matchRx: regexp.MustCompile(`^timeout\: context deadline exceeded$`),
+ { // context deadline reached
+ context: ErrContextApp,
+ matchRx: regexp.MustCompile(`^timeout\: context deadline exceeded$`),
+ number: ErrCodeAppContextExceeded,
},
- errExpected{ // context canceled
- convertFn: func(err error) error { return CreateErrCode("APP", ErrCodeAppContextCanceled) },
- matchRx: regexp.MustCompile(`^timeout\: context canceled$`),
+ { // context canceled
+ context: ErrContextApp,
+ matchRx: regexp.MustCompile(`^timeout\: context canceled$`),
+ number: ErrCodeAppContextCanceled,
},
// CSV handling
- errExpected{ // wrong number of fields
- convertFn: func(err error) error {
- matches := regexp.MustCompile(`^record on line (\d+)\: wrong number of fields`).FindStringSubmatch(err.Error())
- if len(matches) != 2 {
- return CreateErrCode("CSV", ErrCodeCsvWrongFieldNumber)
- }
- return CreateErrCodeWithArgs("CSV", ErrCodeCsvWrongFieldNumber,
- map[string]string{"VALUE": matches[1]})
- },
+ { // wrong number of fields (error originates from encoding/csv package)
+ context: ErrContextCsv,
matchRx: regexp.MustCompile(`^record on line \d+\: wrong number of fields`),
- },
-
- // database messages (postgres)
- errExpected{ // custom error message from application, used in instance.abort_show_message()
- convertFn: func(err error) error {
- matches := regexp.MustCompile(`^ERROR\: R3_MSG\: (.*)`).FindStringSubmatch(err.Error())
- if len(matches) != 2 {
- return CreateErrCode("DBS", ErrCodeDbsFunctionMessage)
- }
- return CreateErrCodeWithArgs("DBS", ErrCodeDbsFunctionMessage,
- map[string]string{"FNC_MSG": matches[1]})
- },
- matchRx: regexp.MustCompile(`^ERROR\: R3_MSG\: `),
- },
- errExpected{ // unique constraint violation, custom unique index
- convertFn: func(err error) error {
- matches := regexp.MustCompile(`^ERROR\: duplicate key value violates unique constraint \"ind_(.{36})\"`).FindStringSubmatch(err.Error())
- if len(matches) != 2 {
- return CreateErrCode("DBS", ErrCodeDbsConstraintUnique)
- }
-
- return CreateErrCodeWithArgs("DBS", ErrCodeDbsConstraintUnique,
- map[string]string{"IND_ID": matches[1]})
- },
- matchRx: regexp.MustCompile(`^ERROR\: duplicate key value violates unique constraint \"ind_.{36}\"`),
- },
- errExpected{ // foreign key constraint violation
- convertFn: func(err error) error {
- matches := regexp.MustCompile(`^ERROR\: .+ on table \".+\" violates foreign key constraint \"fk_(.{36})\"`).FindStringSubmatch(err.Error())
- if len(matches) != 2 {
- return CreateErrCode("DBS", ErrCodeDbsConstraintFk)
- }
- return CreateErrCodeWithArgs("DBS", ErrCodeDbsConstraintFk,
- map[string]string{"ATR_ID": matches[1]})
- },
- matchRx: regexp.MustCompile(`^ERROR\: .+ on table \".+\" violates foreign key constraint \"fk_.{36}\"`),
- },
- errExpected{ // NOT NULL constraint violation
- convertFn: func(err error) error {
- matches := regexp.MustCompile(`^ERROR\: null value in column \"(.+)\" violates not-null constraint \(SQLSTATE 23502\)`).FindStringSubmatch(err.Error())
- if len(matches) != 2 {
- return CreateErrCode("DBS", ErrCodeDbsConstraintNotNull)
- }
- return CreateErrCodeWithArgs("DBS", ErrCodeDbsConstraintNotNull,
- map[string]string{"COLUMN_NAME": matches[1]})
- },
- matchRx: regexp.MustCompile(`^ERROR\: null value in column \".+\" violates not-null constraint \(SQLSTATE 23502\)`),
- },
- errExpected{ // invalid syntax for type
- convertFn: func(err error) error {
- matches := regexp.MustCompile(`^ERROR\: invalid input syntax for type \w+\: \"(.+)\"`).FindStringSubmatch(err.Error())
- if len(matches) != 2 {
- return CreateErrCode("DBS", ErrCodeDbsInvalidTypeSyntax)
- }
- return CreateErrCodeWithArgs("DBS", ErrCodeDbsInvalidTypeSyntax,
- map[string]string{"VALUE": matches[1]})
- },
- matchRx: regexp.MustCompile(`^ERROR\: invalid input syntax for type \w+\: \".+\"`),
- },
- errExpected{ // failed to create unique index due to existing non-unique values
- convertFn: func(err error) error { return CreateErrCode("DBS", ErrCodeDbsIndexFailUnique) },
- matchRx: regexp.MustCompile(`^ERROR\: could not create unique index \"ind_.{36}\" \(SQLSTATE 23505\)`),
- },
- errExpected{ // duplicate key violation: login name
- convertFn: func(err error) error { return CreateErrCode("DBS", ErrCodeDbsConstraintUniqueLogin) },
- matchRx: regexp.MustCompile(`^ERROR\: duplicate key value violates unique constraint \"login_name_key\" \(SQLSTATE 23505\)`),
+ number: ErrCodeCsvWrongFieldNumber,
},
}
)
// creates standardized error code, to be interpreted and translated on the frontend
-// context is the general error context: APP (application), DBS (database system), SEC (security/access)
+// context is the general error context: APP (application), DBS (database system), SEC (security/access), ...
// number is the unique error code, used to convert to a translated error message
-// message is the original error message, which is also appended in case error code is not translated
-// example error code: {ERR_DBS_069} My error message
+// example error code: {ERR_DBS_069}
func CreateErrCode(context string, number int) error {
- if !tools.StringInSlice(context, errContexts) {
+ if !slices.Contains(errContexts, context) {
return errors.New("{INVALID_ERROR_CONTEXT}")
}
return fmt.Errorf("{ERR_%s_%03d}", context, number)
}
-// creates standardized error code with arguments to send error related data for error interpretation
-// example error code: {ERR_DBS_069} [name2:value2] [name1:value1] My error message
-func CreateErrCodeWithArgs(context string, number int, argMapValues map[string]string) error {
- if !tools.StringInSlice(context, errContexts) {
- return errors.New("{INVALID_ERROR_CONTEXT}")
- }
- var args string
- for arg, value := range argMapValues {
- args = fmt.Sprintf("%s[%s:%s]", args, arg, value)
+// same as CreateErrCode, but appends JSON-encoded data to the error string
+func CreateErrCodeWithData(context string, number int, data interface{}) error {
+ code := CreateErrCode(context, number)
+
+ j, err := json.Marshal(data)
+ if err != nil {
+ return code
}
- return fmt.Errorf("{ERR_%s_%03d}%s", context, number, args)
+ return fmt.Errorf("%s%s", code, j)
}
// converts expected errors to error codes to be parsed/translated by requestor
@@ -183,23 +127,94 @@ func CreateErrCodeWithArgs(context string, number int, argMapValues map[string]s
// returns whether the error was identified
func ConvertToErrCode(err error, anonymizeIfUnexpected bool) (error, bool) {
+ var processUnexpectedErr = func(err error) error {
+ if anonymizeIfUnexpected {
+ return CreateErrCode(ErrContextApp, ErrCodeAppUnknown)
+ }
+ return err
+ }
+
// already an error code, return as is
if errCodeRx.MatchString(err.Error()) {
return err, true
}
- // check for match against all expected errors
+ // check for "cached plan must not change result" type error
+ // for some reason this error type is not recognized as a PGX error
+ // quick fix until we can figure out why this occurs
+ if strings.Contains(err.Error(), "(SQLSTATE 0A000)") {
+ return CreateErrCode(ErrContextDbs, ErrCodeDbsChangedCachePlan), true
+ }
+
+ // check for PGX error
+ var pgxErr *pgconn.PgError
+ if errors.As(err, &pgxErr) {
+
+ switch pgxErr.Code {
+ case "0A000": // error in prepared statement cache due to changed schema
+ return CreateErrCode(ErrContextDbs, ErrCodeDbsChangedCachePlan), true
+ case "23502": // NOT NULL constraint failure
+ return CreateErrCodeWithData(ErrContextDbs, ErrCodeDbsConstraintNotNull, struct {
+ ModuleName string `json:"moduleName"`
+ RelationName string `json:"relationName"`
+ AttributeName string `json:"attributeName"`
+ }{
+ pgxErr.SchemaName,
+ pgxErr.TableName,
+ pgxErr.ColumnName,
+ }), true
+ case "23503": // foreign key constraint failure
+
+ // foreign key constraint names have this format: "fk_[UUID]"
+ if pgxErr.ConstraintName == "" || pgxErr.ConstraintName[0:3] != "fk_" {
+ return processUnexpectedErr(err), false
+ }
+
+ return CreateErrCodeWithData(ErrContextDbs, ErrCodeDbsConstraintFk, struct {
+ AttributeId string `json:"attributeId"`
+ }{pgxErr.ConstraintName[3:]}), true
+ case "23505": // unique index constraint failure
+
+ // special case: login name index
+ if pgxErr.ConstraintName == "login_name_key" {
+ return CreateErrCode(ErrContextDbs, ErrCodeDbsConstraintUniqueLogin), true
+ }
+
+ // unique index constraint names have this format: "ind_[UUID]"
+ if pgxErr.ConstraintName == "" || pgxErr.ConstraintName[0:4] != "ind_" {
+ return processUnexpectedErr(err), false
+ }
+
+ return CreateErrCodeWithData(ErrContextDbs, ErrCodeDbsConstraintUnique, struct {
+ PgIndexId string `json:"pgIndexId"`
+ }{pgxErr.ConstraintName[4:]}), true
+ case "22P02": // invalid type syntax
+ return CreateErrCode(ErrContextDbs, ErrCodeDbsInvalidTypeSyntax), true
+ case "P0001": // exception raised
+ if pgxErr.Message == "" || pgxErr.Message[0:6] != "R3_MSG" {
+ return processUnexpectedErr(err), false
+ }
+ return CreateErrCodeWithData(ErrContextDbs, ErrCodeDbsFunctionMessage, struct {
+ Message string `json:"message"`
+ }{pgxErr.Message[8:]}), true
+ }
+ }
+
+ // check for match against expected errors
for _, expErr := range errExpectedList {
if expErr.matchRx.MatchString(err.Error()) {
- return expErr.convertFn(err), true
+ return CreateErrCode(expErr.context, expErr.number), true
}
}
+ return processUnexpectedErr(err), false
+}
- // unexpected error
- if anonymizeIfUnexpected {
- return CreateErrCode("APP", ErrCodeAppUnknown), false
- }
- return err, false
+// error code checkers
+func CheckForLicenseErrCode(err error) bool {
+ return errCodeLicRx.MatchString(err.Error())
+}
+func CheckForDbsCacheErrCode(err error) bool {
+ return errCodeDbsCache.MatchString(err.Error())
}
// default schema errors
@@ -218,3 +233,15 @@ func ErrSchemaUnknownFunction(id uuid.UUID) error {
func ErrSchemaUnknownPolicyAction(name string) error {
return fmt.Errorf("unknown policy action '%s'", name)
}
+func ErrSchemaUnknownClientEvent(id uuid.UUID) error {
+ return fmt.Errorf("unknown client event '%s'", id)
+}
+func ErrSchemaUnknownPgFunction(id uuid.UUID) error {
+ return fmt.Errorf("unknown backend function '%s'", id)
+}
+func ErrSchemaTriggerPgFunctionCall(id uuid.UUID) error {
+ return fmt.Errorf("backend function '%s' is a trigger function, it cannot be called directly", id)
+}
+func ErrSchemaBadFrontendExecPgFunctionCall(id uuid.UUID) error {
+ return fmt.Errorf("backend function '%s' may not be called from the frontend", id)
+}
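
Taken together, the helpers above define a compact, machine-readable error format: CreateErrCode emits a bare {ERR_<context>_<number>} marker and CreateErrCodeWithData appends a JSON payload, which a consumer can strip off again with the same errCodeRx pattern. A minimal, self-contained sketch of that round trip, assuming the context "CSV" and the number 9 purely as example values (the real constants live in this package):

package main

import (
	"encoding/json"
	"fmt"
	"regexp"
)

func main() {
	// build an error string the way CreateErrCodeWithData does
	errStr := fmt.Sprintf("{ERR_%s_%03d}", "CSV", 9) // assumed example context/number
	payload, _ := json.Marshal(struct {
		Value string `json:"value"`
	}{"abc"})
	errStr += string(payload) // {ERR_CSV_009}{"value":"abc"}

	// split the code marker from the JSON data, mirroring errCodeRx above
	rx := regexp.MustCompile(`^{ERR_([A-Z]{3})_(\d{3})}`)
	code := rx.FindString(errStr)
	fmt.Println(code)               // {ERR_CSV_009}
	fmt.Println(errStr[len(code):]) // {"value":"abc"}
}
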
diff --git a/handler/icon_upload/icon_upload.go b/handler/icon_upload/icon_upload.go
index 2d180cd7..23a45801 100644
--- a/handler/icon_upload/icon_upload.go
+++ b/handler/icon_upload/icon_upload.go
@@ -2,19 +2,22 @@ package icon_upload
import (
"bytes"
+ "context"
"errors"
"io"
"net/http"
"r3/bruteforce"
+ "r3/config"
"r3/db"
"r3/handler"
"r3/login/login_auth"
"r3/schema/icon"
+ "time"
"github.com/gofrs/uuid"
)
-var context = "icon_upload"
+var logContext = "icon_upload"
func Handler(w http.ResponseWriter, r *http.Request) {
@@ -27,7 +30,7 @@ func Handler(w http.ResponseWriter, r *http.Request) {
reader, err := r.MultipartReader()
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
@@ -60,60 +63,68 @@ func Handler(w http.ResponseWriter, r *http.Request) {
continue
}
+ ctx, ctxCanc := context.WithTimeout(context.Background(),
+ time.Duration(int64(config.GetUint64("dbTimeoutDataWs")))*time.Second)
+
+ defer ctxCanc()
+
// check token
var loginId int64
var admin bool
var noAuth bool
- if _, err := login_auth.Token(token, &loginId, &admin, &noAuth); err != nil {
- handler.AbortRequest(w, context, err, handler.ErrAuthFailed)
+ if _, _, err := login_auth.Token(ctx, token, &loginId, &admin, &noAuth); err != nil {
+ handler.AbortRequest(w, logContext, err, handler.ErrAuthFailed)
bruteforce.BadAttempt(r)
return
}
if !admin {
- handler.AbortRequest(w, context, err, handler.ErrUnauthorized)
+ handler.AbortRequest(w, logContext, err, handler.ErrUnauthorized)
return
}
// parse module ID
moduleId, err := uuid.FromString(moduleIdString)
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
// parse icon ID
iconId, err := uuid.FromString(iconIdString)
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
// insert/update icon
- tx, err := db.Pool.Begin(db.Ctx)
+ tx, err := db.Pool.Begin(ctx)
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
- defer tx.Rollback(db.Ctx)
+ defer tx.Rollback(ctx)
buf := new(bytes.Buffer)
if _, err := buf.ReadFrom(part); err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
// check size
if int(len(buf.Bytes())/1024) > 64 {
- handler.AbortRequest(w, context, errors.New("icon size > 64kb"), handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, errors.New("icon size > 64kb"), handler.ErrGeneral)
return
}
- if err := icon.Set_tx(tx, moduleId, iconId, "", buf.Bytes(), false); err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ if err := icon.Set_tx(ctx, tx, moduleId, iconId, "", buf.Bytes(), false); err != nil {
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
+ return
+ }
+ if err := tx.Commit(ctx); err != nil {
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
- tx.Commit(db.Ctx)
}
w.Write([]byte(`{"error": ""}`))
}
diff --git a/handler/ics_download/ics_download.go b/handler/ics_download/ics_download.go
index bd346646..fd742a91 100644
--- a/handler/ics_download/ics_download.go
+++ b/handler/ics_download/ics_download.go
@@ -59,17 +59,35 @@ func Handler(w http.ResponseWriter, r *http.Request) {
return
}
+ ctx, ctxCanc := context.WithTimeout(context.Background(),
+ time.Duration(int64(config.GetUint64("dbTimeoutIcs")))*time.Second)
+
+ defer ctxCanc()
+
// authenticate via fixed token
- var languageCode string
var tokenNotUsed string
- if err := login_auth.TokenFixed(loginId, "ics", tokenFixed, &languageCode, &tokenNotUsed); err != nil {
+ languageCode, err := login_auth.TokenFixed(ctx, loginId, "ics", tokenFixed, &tokenNotUsed)
+ if err != nil {
handler.AbortRequest(w, handlerContext, err, handler.ErrAuthFailed)
bruteforce.BadAttempt(r)
return
}
+ // start DB transaction
+ tx, err := db.Pool.Begin(ctx)
+ if err != nil {
+ handler.AbortRequest(w, handlerContext, err, handler.ErrGeneral)
+ return
+ }
+ defer tx.Rollback(ctx)
+
+ if err := db.SetSessionConfig_tx(ctx, tx, loginId); err != nil {
+ handler.AbortRequest(w, handlerContext, err, handler.ErrGeneral)
+ return
+ }
+
// get calendar field details from cache
- f, err := cache.GetCalendarField(fieldId)
+ f, err := cache.GetCalendarField_tx(ctx, tx, fieldId)
if err != nil {
handler.AbortRequest(w, handlerContext, err, handler.ErrGeneral)
return
@@ -97,7 +115,8 @@ func Handler(w http.ResponseWriter, r *http.Request) {
// apply field filters
// some filters are not compatible with backend requests (field value, open form record ID, ...)
- dataGet.Filters = data_query.ConvertQueryToDataFilter(f.Query.Filters, loginId, languageCode)
+ dataGet.Filters = data_query.ConvertQueryToDataFilter(
+ f.Query.Filters, loginId, languageCode, make(map[string]string))
// define ICS event range, if defined
dateRange0 := f.DateRange0
@@ -117,6 +136,7 @@ func Handler(w http.ResponseWriter, r *http.Request) {
if dateRange0 != 0 {
dataGet.Filters = append(dataGet.Filters, types.DataGetFilter{
Connector: "AND",
+ Index: 0,
Operator: ">=",
Side0: types.DataGetFilterSide{
AttributeId: pgtype.UUID{
@@ -136,6 +156,7 @@ func Handler(w http.ResponseWriter, r *http.Request) {
if dateRange1 != 0 {
dataGet.Filters = append(dataGet.Filters, types.DataGetFilter{
Connector: "AND",
+ Index: 0,
Operator: "<=",
Side0: types.DataGetFilterSide{
AttributeId: pgtype.UUID{
@@ -185,23 +206,11 @@ func Handler(w http.ResponseWriter, r *http.Request) {
continue
}
- dataGet.Expressions = append(dataGet.Expressions,
- data_query.ConvertColumnToExpression(column, loginId, languageCode))
+ dataGet.Expressions = append(dataGet.Expressions, data_query.ConvertColumnToExpression(
+ column, loginId, languageCode, make(map[string]string)))
}
// get data
- ctx, ctxCancel := context.WithTimeout(context.Background(),
- time.Duration(int64(config.GetUint64("dbTimeoutIcs")))*time.Second)
-
- defer ctxCancel()
-
- tx, err := db.Pool.Begin(ctx)
- if err != nil {
- handler.AbortRequest(w, handlerContext, err, handler.ErrGeneral)
- return
- }
- defer tx.Rollback(ctx)
-
var query string
results, _, err := data.Get_tx(ctx, tx, dataGet, loginId, &query)
if err != nil {
@@ -219,7 +228,7 @@ func Handler(w http.ResponseWriter, r *http.Request) {
var modName string
var modNameParent string
- if err := db.Pool.QueryRow(db.Ctx, `
+ if err := db.Pool.QueryRow(ctx, `
SELECT name, COALESCE((
SELECT name
FROM app.module
@@ -277,8 +286,10 @@ func Handler(w http.ResponseWriter, r *http.Request) {
// check for valid date values (start/end)
if len(result.Values) < 2 ||
- fmt.Sprintf("%s", reflect.TypeOf(result.Values[0])) != "int64" ||
- fmt.Sprintf("%s", reflect.TypeOf(result.Values[1])) != "int64" {
+ result.Values[0] == nil ||
+ result.Values[1] == nil ||
+ reflect.TypeOf(result.Values[0]).String() != "int64" ||
+ reflect.TypeOf(result.Values[1]).String() != "int64" {
handler.AbortRequest(w, handlerContext, errors.New("invalid values for date"),
handler.ErrGeneral)
@@ -321,7 +332,7 @@ func Handler(w http.ResponseWriter, r *http.Request) {
}
// deliver ICS
- w.Header().Set("Content-type", "text/calendar")
+ w.Header().Set("Content-Type", "text/calendar")
w.Header().Set("charset", "utf-8")
w.Header().Set("Content-Disposition", "inline")
w.Header().Set("filename", "calendar.ics")
diff --git a/handler/license_upload/license_upload.go b/handler/license_upload/license_upload.go
index e75af7ed..ebb28344 100644
--- a/handler/license_upload/license_upload.go
+++ b/handler/license_upload/license_upload.go
@@ -2,6 +2,7 @@ package license_upload
import (
"bytes"
+ "context"
"errors"
"io"
"net/http"
@@ -11,9 +12,10 @@ import (
"r3/db"
"r3/handler"
"r3/login/login_auth"
+ "time"
)
-var context = "license_upload"
+var logContext = "license_upload"
func Handler(w http.ResponseWriter, r *http.Request) {
@@ -26,7 +28,7 @@ func Handler(w http.ResponseWriter, r *http.Request) {
reader, err := r.MultipartReader()
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
@@ -47,51 +49,57 @@ func Handler(w http.ResponseWriter, r *http.Request) {
continue
}
+ ctx, ctxCanc := context.WithTimeout(context.Background(),
+ time.Duration(int64(config.GetUint64("dbTimeoutDataWs")))*time.Second)
+
+ defer ctxCanc()
+
// check token
var loginId int64
var admin bool
var noAuth bool
- if _, err := login_auth.Token(token, &loginId, &admin, &noAuth); err != nil {
- handler.AbortRequest(w, context, err, handler.ErrAuthFailed)
+ if _, _, err := login_auth.Token(ctx, token, &loginId, &admin, &noAuth); err != nil {
+ handler.AbortRequest(w, logContext, err, handler.ErrAuthFailed)
bruteforce.BadAttempt(r)
return
}
if !admin {
- handler.AbortRequest(w, context, err, handler.ErrUnauthorized)
+ handler.AbortRequest(w, logContext, err, handler.ErrUnauthorized)
return
}
// read file into buffer
buf := new(bytes.Buffer)
if _, err := buf.ReadFrom(part); err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
// check size
if int(len(buf.Bytes())/1024) > 64 {
- handler.AbortRequest(w, context, errors.New("license file size > 64kb"), handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, errors.New("license file size > 64kb"), handler.ErrGeneral)
return
}
// set license
- tx, err := db.Pool.Begin(db.Ctx)
+ tx, err := db.Pool.Begin(ctx)
if err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
- defer tx.Rollback(db.Ctx)
+ defer tx.Rollback(ctx)
- if err := config.SetString_tx(tx, "licenseFile", buf.String()); err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ if err := config.SetString_tx(ctx, tx, "licenseFile", buf.String()); err != nil {
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
- tx.Commit(db.Ctx)
-
- // apply new config
- if err := cluster.ConfigChanged(true, false, false); err != nil {
- handler.AbortRequest(w, context, err, handler.ErrGeneral)
+ if err := cluster.ConfigChanged_tx(ctx, tx, true, false, false); err != nil {
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
+ return
+ }
+ if err := tx.Commit(ctx); err != nil {
+ handler.AbortRequest(w, logContext, err, handler.ErrGeneral)
return
}
}
diff --git a/handler/manifest_download/manifest_download.go b/handler/manifest_download/manifest_download.go
new file mode 100644
index 00000000..9815e1ef
--- /dev/null
+++ b/handler/manifest_download/manifest_download.go
@@ -0,0 +1,190 @@
+package manifest_download
+
+import (
+ "encoding/json"
+ "fmt"
+ "net/http"
+ "r3/cache"
+ "r3/config"
+ "r3/handler"
+ "r3/tools"
+ "strings"
+
+ "github.com/gofrs/uuid"
+)
+
+type icon struct {
+ Purpose string `json:"purpose"`
+ Sizes string `json:"sizes"`
+ Src string `json:"src"`
+ Type string `json:"type"`
+}
+type manifest struct {
+ Id string `json:"id"`
+ Name string `json:"name"`
+ ShortName string `json:"short_name"`
+
+ // theming
+ BackgroundColor string `json:"background_color"`
+ Icons []icon `json:"icons"`
+ ThemeColor string `json:"theme_color"`
+
+ // display
+ Display string `json:"display"`
+ Orientation string `json:"orientation"`
+
+ // worker
+ Scope string `json:"scope"`
+ StartUrl string `json:"start_url"`
+}
+
+var (
+ handlerContext = "manifest_download"
+ manifestDefault = manifest{
+ Id: "platform",
+ Name: "REI3",
+ ShortName: "REI3",
+ Scope: "/",
+ StartUrl: "/",
+ Display: "standalone",
+ Orientation: "any",
+ BackgroundColor: "#f5f5f5",
+ ThemeColor: "#444444",
+ Icons: []icon{
+ {Purpose: "any", Sizes: "192x192", Src: "/images/icon_fav192.png", Type: "image/png"},
+ {Purpose: "any", Sizes: "512x512", Src: "/images/icon_fav512.png", Type: "image/png"},
+ {Purpose: "maskable", Sizes: "192x192", Src: "/images/icon_mask192.png", Type: "image/png"},
+ {Purpose: "maskable", Sizes: "512x512", Src: "/images/icon_mask512.png", Type: "image/png"},
+ },
+ }
+)
+
+func Handler(w http.ResponseWriter, r *http.Request) {
+
+ if r.Method != "GET" {
+ handler.AbortRequestNoLog(w, handler.ErrGeneral)
+ return
+ }
+
+ /*
+ Parse URL, such as:
+ GET /manifests/
+ GET /manifests/123e4567-e89b-12d3-a456-426614174000
+
+ The first is for the generic platform manifest, the other for the module-specific one
+ */
+ elements := strings.Split(r.URL.Path, "/")
+
+ if len(elements) != 3 {
+ handler.AbortRequestNoLog(w, handler.ErrGeneral)
+ return
+ }
+
+ // platform PWA
+ if elements[2] == "" {
+ manifestApp := manifestDefault
+ if config.GetLicenseActive() {
+ if config.GetString("appName") != "" {
+ manifestApp.Name = tools.Substring(config.GetString("appName"), 0, 60)
+ }
+ if config.GetString("appNameShort") != "" {
+ manifestApp.ShortName = tools.Substring(config.GetString("appNameShort"), 0, 12)
+ }
+ if config.GetString("companyColorHeader") != "" {
+ manifestApp.ThemeColor = fmt.Sprintf("#%s", config.GetString("companyColorHeader"))
+ }
+ if config.GetString("iconPwa1") != "" && config.GetString("iconPwa2") != "" {
+ manifestApp.Icons = []icon{
+ {Purpose: "any", Sizes: "192x192", Src: fmt.Sprintf("data:image/png;base64,%s", config.GetString("iconPwa1")), Type: "image/png"},
+ {Purpose: "any", Sizes: "512x512", Src: fmt.Sprintf("data:image/png;base64,%s", config.GetString("iconPwa2")), Type: "image/png"},
+ }
+ }
+ }
+
+ payloadJson, err := json.Marshal(manifestApp)
+ if err != nil {
+ handler.AbortRequest(w, handlerContext, err, handler.ErrGeneral)
+ return
+ }
+
+ // deliver manifest
+ w.Header().Set("Content-Type", "application/json")
+ w.WriteHeader(http.StatusOK)
+ w.Write(payloadJson)
+ return
+ }
+
+ // module PWA
+ cache.Schema_mx.RLock()
+ defer cache.Schema_mx.RUnlock()
+
+ moduleId, err := uuid.FromString(elements[2])
+ if err != nil {
+ handler.AbortRequest(w, handlerContext, err, handler.ErrGeneral)
+ return
+ }
+
+ module, exists := cache.ModuleIdMap[moduleId]
+ if !exists {
+ handler.AbortRequest(w, handlerContext, handler.ErrSchemaUnknownModule(moduleId), handler.ErrGeneral)
+ return
+ }
+
+ // check for module parent
+ parentName := module.Name
+ if module.ParentId.Valid {
+ parent, exists := cache.ModuleIdMap[module.ParentId.Bytes]
+ if !exists {
+ handler.AbortRequest(w, handlerContext, handler.ErrSchemaUnknownModule(module.ParentId.Bytes), handler.ErrGeneral)
+ return
+ }
+ parentName = parent.Name
+ }
+
+ // overwrite module PWA settings
+ pathMod := fmt.Sprintf("/#/app/%s/%s", parentName, module.Name)
+ manifestMod := manifestDefault
+ manifestMod.Id = module.Id.String()
+ manifestMod.Scope = pathMod
+ manifestMod.StartUrl = pathMod
+
+ if module.Color1.Valid {
+ manifestMod.ThemeColor = fmt.Sprintf("#%s", module.Color1.String)
+ }
+
+ // optional PWA settings
+ if module.NamePwa.Valid {
+ manifestMod.Name = module.NamePwa.String
+ }
+ if module.NamePwaShort.Valid {
+ manifestMod.ShortName = module.NamePwaShort.String
+ }
+ if module.IconIdPwa1.Valid && module.IconIdPwa2.Valid {
+ iconPwa1, err := cache.GetPwaIcon(module.IconIdPwa1.Bytes)
+ if err != nil {
+ handler.AbortRequest(w, handlerContext, err, handler.ErrGeneral)
+ return
+ }
+ iconPwa2, err := cache.GetPwaIcon(module.IconIdPwa2.Bytes)
+ if err != nil {
+ handler.AbortRequest(w, handlerContext, err, handler.ErrGeneral)
+ return
+ }
+
+ manifestMod.Icons = []icon{
+ {Purpose: "any", Sizes: "192x192", Src: fmt.Sprintf("data:image/png;base64,%s", iconPwa1), Type: "image/png"},
+ {Purpose: "any", Sizes: "512x512", Src: fmt.Sprintf("data:image/png;base64,%s", iconPwa2), Type: "image/png"},
+ }
+ }
+
+ payloadJson, err := json.Marshal(manifestMod)
+ if err != nil {
+ handler.AbortRequest(w, handlerContext, err, handler.ErrGeneral)
+ return
+ }
+
+ // deliver manifest
+ w.Header().Set("Content-Type", "application/json")
+ w.WriteHeader(http.StatusOK)
+ w.Write(payloadJson)
+}
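
The new manifest handler is a plain GET endpoint: /manifests/ serves the platform manifest, /manifests/<module UUID> the module-specific one. A rough usage sketch with any HTTP client, assuming a locally reachable instance (host, scheme and port are placeholders, and the UUID is the illustrative one from the handler comment):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	for _, path := range []string{
		"/manifests/", // platform PWA manifest
		"/manifests/123e4567-e89b-12d3-a456-426614174000", // module-specific manifest
	} {
		resp, err := http.Get("http://localhost" + path) // placeholder host
		if err != nil {
			fmt.Println(path, err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Println(path, resp.Status, string(body))
	}
}
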
diff --git a/handler/transfer_export/transfer_export.go b/handler/transfer_export/transfer_export.go
index df498532..a3db2f81 100644
--- a/handler/transfer_export/transfer_export.go
+++ b/handler/transfer_export/transfer_export.go
@@ -1,10 +1,12 @@
package transfer_export
import (
+ "context"
"errors"
"net/http"
"os"
"r3/config"
+ "r3/db"
"r3/handler"
"r3/log"
"r3/login/login_auth"
@@ -23,11 +25,14 @@ func Handler(w http.ResponseWriter, r *http.Request) {
return
}
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutTransfer)
+ defer ctxCanc()
+
// check token
var loginId int64
var admin bool
var noAuth bool
- if _, err := login_auth.Token(token, &loginId, &admin, &noAuth); err != nil {
+ if _, _, err := login_auth.Token(ctx, token, &loginId, &admin, &noAuth); err != nil {
log.Error("server", genErr, err)
return
}
@@ -50,7 +55,7 @@ func Handler(w http.ResponseWriter, r *http.Request) {
return
}
- if err := transfer.ExportToFile(moduleId, filePath); err != nil {
+ if err := transfer.ExportToFile(ctx, moduleId, filePath); err != nil {
log.Error("server", genErr, err)
return
}
diff --git a/handler/transfer_import/transfer_import.go b/handler/transfer_import/transfer_import.go
index 88fd965d..7ea31be2 100644
--- a/handler/transfer_import/transfer_import.go
+++ b/handler/transfer_import/transfer_import.go
@@ -2,12 +2,14 @@ package transfer_import
import (
"bytes"
+ "context"
"encoding/json"
"errors"
"io"
"net/http"
"os"
"r3/config"
+ "r3/db"
"r3/handler"
"r3/log"
"r3/login/login_auth"
@@ -64,11 +66,14 @@ func Handler(res http.ResponseWriter, req *http.Request) {
continue
}
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutTransfer)
+ defer ctxCanc()
+
// check token
var loginId int64
var admin bool
var noAuth bool
- if _, err := login_auth.Token(token, &loginId, &admin, &noAuth); err != nil {
+ if _, _, err := login_auth.Token(ctx, token, &loginId, &admin, &noAuth); err != nil {
finishRequest(err)
return
}
@@ -99,7 +104,18 @@ func Handler(res http.ResponseWriter, req *http.Request) {
return
}
- if err := transfer.ImportFromFiles([]string{filePath}); err != nil {
+ tx, err := db.Pool.Begin(ctx)
+ if err != nil {
+ finishRequest(err)
+ return
+ }
+ defer tx.Rollback(ctx)
+
+ if err := transfer.ImportFromFiles_tx(ctx, tx, []string{filePath}); err != nil {
+ finishRequest(err)
+ return
+ }
+ if err := tx.Commit(ctx); err != nil {
finishRequest(err)
return
}
diff --git a/handler/websocket/websocket.go b/handler/websocket/websocket.go
index 8bd0b2c4..61ab9155 100644
--- a/handler/websocket/websocket.go
+++ b/handler/websocket/websocket.go
@@ -7,12 +7,18 @@ import (
"net"
"net/http"
"r3/bruteforce"
+ "r3/cache"
"r3/cluster"
+ "r3/config"
"r3/handler"
"r3/log"
+ "r3/login/login_session"
"r3/request"
"r3/types"
+ "strings"
"sync"
+ "sync/atomic"
+ "time"
"github.com/gofrs/uuid"
"github.com/gorilla/websocket"
@@ -20,15 +26,19 @@ import (
// a websocket client
type clientType struct {
- address string // IP address, no port
- admin bool // belongs to admin login?
- ctx context.Context // global context for client requests
- ctxCancel context.CancelFunc // to abort requests in case of disconnect
- fixedToken bool // logged in with fixed token (limited access, only auth and server messages)
- loginId int64 // client login ID, 0 = not logged in yet
- noAuth bool // logged in without authentication (public auth, username only)
- write_mx sync.Mutex // to force sequential writes
- ws *websocket.Conn // websocket connection
+ id uuid.UUID // unique ID for client (for registering/de-registering login sessions)
+ address string // IP address, no port
+ admin bool // belongs to admin login?
+ ctx context.Context // context for requests from this client
+ ctxCancel context.CancelFunc // to abort requests in case of disconnect
+ device types.WebsocketClientDevice // client device type (browser, fatClient)
+ ioFailure atomic.Bool // client failed to read/write
+ local bool // client is local (::1, 127.0.0.1)
+ loginId int64 // client login ID, 0 = not logged in yet
+ noAuth bool // logged in without authentication (public auth, username only)
+ pwaModuleId uuid.UUID // ID of module for direct app access via subdomain, nil UUID if not used
+ write_mx sync.Mutex // to force sequential writes
+ ws *websocket.Conn // websocket connection
}
// a hub for all active websocket clients
@@ -74,53 +84,80 @@ func Handler(w http.ResponseWriter, r *http.Request) {
return
}
- ws, err := clientUpgrader.Upgrade(w, r, nil)
+ // get client host address
+ host, _, err := net.SplitHostPort(r.RemoteAddr)
if err != nil {
handler.AbortRequest(w, handlerContext, err, handler.ErrGeneral)
return
}
- // get client host address
- host, _, err := net.SplitHostPort(r.RemoteAddr)
+ // create unique client ID for session tracking
+ clientId, err := uuid.NewV4()
if err != nil {
handler.AbortRequest(w, handlerContext, err, handler.ErrGeneral)
- ws.Close()
return
}
- log.Info(handlerContext, fmt.Sprintf("new client connecting from %s", host))
+ // upgrade to websocket
+ ws, err := clientUpgrader.Upgrade(w, r, nil)
+ if err != nil {
+ handler.AbortRequest(w, handlerContext, err, handler.ErrGeneral)
+ return
+ }
// create global request context with abort function
ctx, ctxCancel := context.WithCancel(context.Background())
-
client := &clientType{
- address: host,
- admin: false,
- ctx: ctx,
- ctxCancel: ctxCancel,
- fixedToken: false,
- loginId: 0,
- noAuth: false,
- write_mx: sync.Mutex{},
- ws: ws,
+ id: clientId,
+ address: host,
+ admin: false,
+ ctx: ctx,
+ ctxCancel: ctxCancel,
+ device: types.WebsocketClientDeviceBrowser,
+ local: host == "::1" || host == "127.0.0.1",
+ loginId: 0,
+ noAuth: false,
+ pwaModuleId: cache.GetPwaModuleId(strings.Split(r.Host, ".")[0]), // assign PWA module ID if host matches any defined PWA direct app access rule
+ write_mx: sync.Mutex{},
+ ws: ws,
}
- hub.clientAdd <- client
+ if r.Header.Get("User-Agent") == "r3-client-fat" {
+ client.device = types.WebsocketClientDeviceFatClient
+ }
+ hub.clientAdd <- client
go client.read()
}
func (hub *hubType) start() {
- var removeClient = func(client *clientType) {
- if _, exists := hub.clients[client]; exists {
- log.Info(handlerContext, fmt.Sprintf("disconnecting client at %s", client.address))
- client.ws.WriteMessage(websocket.CloseMessage, []byte{}) // optional
- client.ws.Close()
- client.ctxCancel()
- delete(hub.clients, client)
- cluster.SetWebsocketClientCount(len(hub.clients))
+ var clientRemove = func(client *clientType, wasKicked bool) {
+ if _, exists := hub.clients[client]; !exists {
+ return
}
+
+ if !client.ioFailure.Load() {
+ client.write_mx.Lock()
+ client.ws.WriteMessage(websocket.CloseMessage, []byte{})
+ client.write_mx.Unlock()
+ }
+ client.ws.Close()
+ client.ctxCancel()
+ delete(hub.clients, client)
+
+ if wasKicked {
+ log.Info(handlerContext, fmt.Sprintf("kicked client (login ID %d) at %s", client.loginId, client.address))
+ } else {
+ log.Info(handlerContext, fmt.Sprintf("disconnected client (login ID %d) at %s", client.loginId, client.address))
+ }
+
+ go func() {
+ // run DB calls in async func as they must not block hub operations during heavy DB load
+ if err := login_session.LogRemove(client.id); err != nil {
+ log.Error(handlerContext, "failed to remove login session log", err)
+ }
+ }()
}
for {
@@ -128,75 +165,94 @@ func (hub *hubType) start() {
select {
case client := <-hub.clientAdd:
hub.clients[client] = true
- cluster.SetWebsocketClientCount(len(hub.clients))
case client := <-hub.clientDel:
- removeClient(client)
+ clientRemove(client, false)
case event := <-cluster.WebsocketClientEvents:
- jsonMsg := []byte{} // message back to client
- kickEvent := event.Kick || event.KickNonAdmin
+ // prepare json message for client(s) based on event content
+ var err error = nil
+ jsonMsg := []byte{} // message back to client
+ singleRecipient := false // message is only sent to a single recipient (the first valid one)
+
+ switch event.Content {
+ case "clientEventsChanged":
+ jsonMsg, err = prepareUnrequested("clientEventsChanged", nil)
+ case "collectionChanged":
+ jsonMsg, err = prepareUnrequested("collectionChanged", event.Payload)
+ case "configChanged":
+ jsonMsg, err = prepareUnrequested("configChanged", nil)
+ case "filesCopied":
+ jsonMsg, err = prepareUnrequested("filesCopied", event.Payload)
+ case "fileRequested":
+ jsonMsg, err = prepareUnrequested("fileRequested", event.Payload)
+ case "jsFunctionCalled":
+ jsonMsg, err = prepareUnrequested("jsFunctionCalled", event.Payload)
+ singleRecipient = true
+ case "keystrokesRequested":
+ jsonMsg, err = prepareUnrequested("keystrokesRequested", event.Payload)
+ singleRecipient = true
+ case "renew":
+ jsonMsg, err = prepareUnrequested("reauthorized", nil)
+ case "schemaLoaded":
+ data := struct {
+ ModuleIdMapData map[uuid.UUID]types.ModuleMeta `json:"moduleIdMapData"`
+ PresetIdMapRecordId map[uuid.UUID]int64 `json:"presetIdMapRecordId"`
+ CaptionMapCustom types.CaptionMapsAll `json:"captionMapCustom"`
+ }{
+ ModuleIdMapData: cache.GetModuleIdMapMeta(),
+ PresetIdMapRecordId: cache.GetPresetRecordIds(),
+ CaptionMapCustom: cache.GetCaptionMapCustom(),
+ }
+ jsonMsg, err = prepareUnrequested("schemaLoaded", data)
+ case "schemaLoading":
+ jsonMsg, err = prepareUnrequested("schemaLoading", nil)
+ }
- if !kickEvent {
- // if clients are not kicked, prepare response
- var err error
+ if err != nil {
+ log.Error(handlerContext, "could not prepare unrequested transaction", err)
+ continue
+ }
- if event.CollectionChanged != uuid.Nil {
- jsonMsg, err = prepareUnrequested("collection_changed", event.CollectionChanged)
- }
- if event.ConfigChanged {
- jsonMsg, err = prepareUnrequested("config_changed", nil)
- }
- if event.FilesCopiedAttributeId != uuid.Nil {
- jsonMsg, err = prepareUnrequested("files_copied", types.ClusterEventFilesCopied{
- AttributeId: event.FilesCopiedAttributeId,
- FileIds: event.FilesCopiedFileIds,
- RecordId: event.FilesCopiedRecordId,
- })
- }
- if event.FileRequestedAttributeId != uuid.Nil {
- jsonMsg, err = prepareUnrequested("fileRequested", types.ClusterEventFileRequested{
- AttributeId: event.FileRequestedAttributeId,
- ChooseApp: event.FileRequestedChooseApp,
- FileId: event.FileRequestedFileId,
- FileHash: event.FileRequestedFileHash,
- FileName: event.FileRequestedFileName,
- })
- }
- if event.Renew {
- jsonMsg, err = prepareUnrequested("reauthorized", nil)
- }
- if event.SchemaLoading {
- jsonMsg, err = prepareUnrequested("schema_loading", nil)
- }
- if event.SchemaTimestamp != 0 {
- jsonMsg, err = prepareUnrequested("schema_loaded", event.SchemaTimestamp)
+ clientsSend := make([]*clientType, 0)
+ clientsSendFallback := make([]*clientType, 0)
+ eventLocal := event.Target.Address == "::1" || event.Target.Address == "127.0.0.1"
+
+ for client := range hub.clients {
+ bothLocal := eventLocal && client.local
+
+ // skip if strict target filter does not apply to client
+ if (event.Target.Address != "" && event.Target.Address != client.address && !bothLocal) ||
+ (event.Target.Device != 0 && event.Target.Device != client.device) ||
+ (event.Target.LoginId != 0 && event.Target.LoginId != client.loginId) {
+ continue
}
- if err != nil {
- log.Error(handlerContext, "could not prepare unrequested transaction", err)
+
+ // store as fallback if the preferred target filter does not apply to the client
+ // fallback clients are only used if no other clients match the target filters
+ if event.Target.PwaModuleIdPreferred != uuid.Nil && event.Target.PwaModuleIdPreferred != client.pwaModuleId {
+ clientsSendFallback = append(clientsSendFallback, client)
continue
}
+ clientsSend = append(clientsSend, client)
}
- for client, _ := range hub.clients {
+ if len(clientsSend) == 0 && len(clientsSendFallback) != 0 {
+ clientsSend = clientsSendFallback
+ }
- // login ID 0 affects all
- if event.LoginId != 0 && event.LoginId != client.loginId {
- continue
- }
+ for _, client := range clientsSend {
- // non-kick event, send message
- if !kickEvent {
- go client.write(jsonMsg)
+ // disconnect and do not send message if kicked
+ if event.Content == "kick" || (event.Content == "kickNonAdmin" && !client.admin) {
+ clientRemove(client, true)
+ continue
}
+ go client.write(jsonMsg)
- // kick client, if requested
- if event.Kick || (event.KickNonAdmin && !client.admin) {
- log.Info(handlerContext, fmt.Sprintf("kicking client (login ID %d)",
- client.loginId))
-
- removeClient(client)
+ if singleRecipient {
+ break
}
}
}
@@ -207,6 +263,7 @@ func (client *clientType) read() {
for {
_, message, err := client.ws.ReadMessage()
if err != nil {
+ client.ioFailure.Store(true)
hub.clientDel <- client
return
}
@@ -223,6 +280,7 @@ func (client *clientType) write(message []byte) {
defer client.write_mx.Unlock()
if err := client.ws.WriteMessage(websocket.TextMessage, message); err != nil {
+ client.ioFailure.Store(true)
hub.clientDel <- client
return
}
@@ -235,6 +293,7 @@ func (client *clientType) handleTransaction(reqTransJson json.RawMessage) json.R
}()
var (
+ err error
reqTrans types.RequestTransaction
resTrans types.ResponseTransaction
)
@@ -251,20 +310,35 @@ func (client *clientType) handleTransaction(reqTransJson json.RawMessage) json.R
// take over transaction number for response so client can match it locally
resTrans.TransactionNr = reqTrans.TransactionNr
+ // inherit the client context, to abort if the client is disconnected
+ ctx, ctxCanc := context.WithTimeout(client.ctx,
+ time.Duration(int64(config.GetUint64("dbTimeoutDataWs")))*time.Second)
+
+ defer ctxCanc()
+
// client can either authenticate or execute requests
authRequest := len(reqTrans.Requests) == 1 && reqTrans.Requests[0].Ressource == "auth"
if !authRequest {
- if client.fixedToken {
- log.Warning(handlerContext, "blocked client request",
- fmt.Errorf("only authentication allowed for fixed token clients"))
+ // execute non-authentication transaction
+ resTrans.Responses, err = request.ExecTransaction(ctx, client.address, client.loginId,
+ client.admin, client.device, client.noAuth, reqTrans, false)
- return []byte("{}")
- }
+ if err != nil {
+ if handler.CheckForDbsCacheErrCode(err) {
+ // known PGX cache error, repeat with cleared DB statement/description cache
+ resTrans.Responses, err = request.ExecTransaction(ctx, client.address, client.loginId,
+ client.admin, client.device, client.noAuth, reqTrans, true)
- // execute non-authentication transaction
- resTrans = request.ExecTransaction(client.ctx, client.loginId,
- client.admin, client.noAuth, reqTrans, resTrans)
+ if err != nil {
+ resTrans.Responses = make([]types.Response, 0)
+ resTrans.Error = err.Error()
+ }
+ } else {
+ resTrans.Responses = make([]types.Response, 0)
+ resTrans.Error = err.Error()
+ }
+ }
} else {
// execute authentication request
@@ -281,24 +355,27 @@ func (client *clientType) handleTransaction(reqTransJson json.RawMessage) json.R
switch req.Action {
case "token": // authentication via JSON web token
- resPayload, err = request.LoginAuthToken(req.Payload, &client.loginId,
- &client.admin, &client.noAuth)
+ resPayload, err = request.LoginAuthToken(ctx, req.Payload, &client.loginId, &client.admin, &client.noAuth)
- case "tokenFixed": // authentication via fixed token (fat-client)
- resPayload, err = request.LoginAuthTokenFixed(req.Payload, &client.loginId)
- if err == nil {
- client.fixedToken = true
- }
+ case "tokenFixed": // authentication via fixed token (fat-client only)
+ resPayload, err = request.LoginAuthTokenFixed(ctx, req.Payload, &client.loginId)
+ client.device = types.WebsocketClientDeviceFatClient
case "user": // authentication via credentials
- resPayload, err = request.LoginAuthUser(req.Payload, &client.loginId,
- &client.admin, &client.noAuth)
+ resPayload, err = request.LoginAuthUser(ctx, req.Payload, &client.loginId, &client.admin, &client.noAuth)
}
if err != nil {
log.Warning(handlerContext, "failed to authenticate user", err)
bruteforce.BadAttemptByHost(client.address)
- resTrans.Error = "AUTH_ERROR"
+
+ if handler.CheckForLicenseErrCode(err) {
+ // license errors are relevant to the client
+ resTrans.Error = err.Error()
+ } else {
+ // any other error is not relevant to the client and could reveal internals
+ resTrans.Error = "AUTH_ERROR"
+ }
} else {
var res types.Response
res.Payload, err = json.Marshal(resPayload)
@@ -309,9 +386,14 @@ func (client *clientType) handleTransaction(reqTransJson json.RawMessage) json.R
}
}
- if resTrans.Error == "" {
- log.Info(handlerContext, fmt.Sprintf("authenticated client (login ID %d, admin: %v)",
- client.loginId, client.admin))
+ // authentication can return without error but still be incomplete if MFA is enabled and the 2nd factor was not yet provided
+ // in this case the login ID is still 0
+ if resTrans.Error == "" && client.loginId != 0 {
+ log.Info(handlerContext, fmt.Sprintf("authenticated client (login ID %d, admin: %v)", client.loginId, client.admin))
+
+ if err := login_session.Log(client.id, client.loginId, client.address, client.device); err != nil {
+ log.Error(handlerContext, "failed to create login session log", err)
+ }
}
}
diff --git a/image/image_detect.go b/image/image_detect.go
deleted file mode 100644
index 68d528fe..00000000
--- a/image/image_detect.go
+++ /dev/null
@@ -1,22 +0,0 @@
-package image
-
-import (
- "net/http"
- "os"
-)
-
-func detectType(filePath string) (string, error) {
- file, err := os.Open(filePath)
- if err != nil {
- return "", err
- }
- defer file.Close()
-
- // read first 512 bytes to detect content type
- // http://golang.org/pkg/net/http/#DetectContentType
- fileBytes := make([]byte, 512)
- if _, err := file.Read(fileBytes); err != nil {
- return "", err
- }
- return http.DetectContentType(fileBytes), nil
-}
diff --git a/ldap/ldap.go b/ldap/ldap.go
index 1a5447dd..c86417f2 100644
--- a/ldap/ldap.go
+++ b/ldap/ldap.go
@@ -1,51 +1,89 @@
package ldap
import (
- "r3/db"
+ "context"
+ "r3/cache"
+ "r3/login"
"r3/types"
+ "strings"
"github.com/jackc/pgx/v5"
)
-func Del_tx(tx pgx.Tx, id int32) error {
- _, err := tx.Exec(db.Ctx, `
+func Del_tx(ctx context.Context, tx pgx.Tx, id int32) error {
+
+ if err := login.DelByLdap_tx(ctx, tx, id); err != nil {
+ return err
+ }
+
+ _, err := tx.Exec(ctx, `
DELETE FROM instance.ldap
WHERE id = $1
`, id)
return err
}
-func Get() ([]types.Ldap, error) {
+func Get_tx(ctx context.Context, tx pgx.Tx) ([]types.Ldap, error) {
ldaps := make([]types.Ldap, 0)
- rows, err := db.Pool.Query(db.Ctx, `
- SELECT id, login_template_id, name, host, port, bind_user_dn,
- bind_user_pw, search_class, search_dn, key_attribute,
- login_attribute, member_attribute, assign_roles, ms_ad_ext,
- starttls, tls, tls_verify
- FROM instance.ldap
- ORDER BY name ASC
+ rows, err := tx.Query(ctx, `
+ SELECT
+ l.id,
+ l.login_template_id,
+ l.name,
+ l.host,
+ l.port,
+ l.bind_user_dn,
+ l.bind_user_pw,
+ l.search_class,
+ l.search_dn,
+ l.key_attribute,
+ l.login_attribute,
+ l.member_attribute,
+ l.assign_roles,
+ l.ms_ad_ext,
+ l.starttls,
+ l.tls,
+ l.tls_verify,
+ COALESCE(m.department, ''),
+ COALESCE(m.email, ''),
+ COALESCE(m.location, ''),
+ COALESCE(m.name_display, ''),
+ COALESCE(m.name_fore, ''),
+ COALESCE(m.name_sur, ''),
+ COALESCE(m.notes, ''),
+ COALESCE(m.organization, ''),
+ COALESCE(m.phone_fax, ''),
+ COALESCE(m.phone_landline, ''),
+ COALESCE(m.phone_mobile, '')
+ FROM instance.ldap AS l
+ LEFT JOIN instance.ldap_attribute_login_meta AS m ON m.ldap_id = l.id
+ ORDER BY l.name ASC
`)
if err != nil {
return ldaps, err
}
+ defer rows.Close()
for rows.Next() {
var l types.Ldap
+ var m types.LoginMeta
if err := rows.Scan(&l.Id, &l.LoginTemplateId, &l.Name, &l.Host,
&l.Port, &l.BindUserDn, &l.BindUserPw, &l.SearchClass, &l.SearchDn,
&l.KeyAttribute, &l.LoginAttribute, &l.MemberAttribute,
- &l.AssignRoles, &l.MsAdExt, &l.Starttls, &l.Tls, &l.TlsVerify); err != nil {
+ &l.AssignRoles, &l.MsAdExt, &l.Starttls, &l.Tls, &l.TlsVerify,
+ &m.Department, &m.Email, &m.Location, &m.NameDisplay, &m.NameFore,
+ &m.NameSur, &m.Notes, &m.Organization, &m.PhoneFax, &m.PhoneLandline,
+ &m.PhoneMobile); err != nil {
- rows.Close()
return ldaps, err
}
+ l.LoginMetaAttributes = m
ldaps = append(ldaps, l)
}
- rows.Close()
for i, _ := range ldaps {
- ldaps[i].Roles, err = getRoles(ldaps[i].Id)
+ ldaps[i].Roles, err = getRoles_tx(ctx, tx, ldaps[i].Id)
if err != nil {
return ldaps, err
}
@@ -53,10 +91,10 @@ func Get() ([]types.Ldap, error) {
return ldaps, nil
}
-func Set_tx(tx pgx.Tx, l types.Ldap) error {
+func Set_tx(ctx context.Context, tx pgx.Tx, l types.Ldap) error {
if l.Id == 0 {
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
INSERT INTO instance.ldap (
login_template_id, name, host, port, bind_user_dn, bind_user_pw,
search_class, search_dn, key_attribute, login_attribute,
@@ -72,7 +110,7 @@ func Set_tx(tx pgx.Tx, l types.Ldap) error {
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE instance.ldap
SET login_template_id = $1, name = $2, host = $3, port = $4,
bind_user_dn = $5, bind_user_pw = $6, search_class = $7,
@@ -89,8 +127,12 @@ func Set_tx(tx pgx.Tx, l types.Ldap) error {
}
}
+ if err := setLoginMetaAttributes_tx(ctx, tx, l.Id, l.LoginMetaAttributes); err != nil {
+ return err
+ }
+
// update LDAP role assignment
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM instance.ldap_role
WHERE ldap_id = $1
`, l.Id); err != nil {
@@ -98,7 +140,7 @@ func Set_tx(tx pgx.Tx, l types.Ldap) error {
}
for _, role := range l.Roles {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO instance.ldap_role (ldap_id, role_id, group_dn)
VALUES ($1,$2,$3)
`, l.Id, role.RoleId, role.GroupDn); err != nil {
@@ -107,14 +149,74 @@ func Set_tx(tx pgx.Tx, l types.Ldap) error {
}
return nil
}
+func UpdateCache_tx(ctx context.Context, tx pgx.Tx) error {
+ ldaps, err := Get_tx(ctx, tx)
+ if err != nil {
+ return err
+ }
+ cache.SetLdaps(ldaps)
+ return nil
+}
+
+func setLoginMetaAttributes_tx(ctx context.Context, tx pgx.Tx, ldapId int32, m types.LoginMeta) error {
+ var exists bool
+ if err := tx.QueryRow(ctx, `
+ SELECT EXISTS(
+ SELECT ldap_id
+ FROM instance.ldap_attribute_login_meta
+ WHERE ldap_id = $1
+ )
+ `, ldapId).Scan(&exists); err != nil {
+ return err
+ }
+
+ // trim whitespace from attributes
+ m.Department = strings.TrimSpace(m.Department)
+ m.Email = strings.TrimSpace(m.Email)
+ m.Location = strings.TrimSpace(m.Location)
+ m.NameDisplay = strings.TrimSpace(m.NameDisplay)
+ m.NameFore = strings.TrimSpace(m.NameFore)
+ m.NameSur = strings.TrimSpace(m.NameSur)
+ m.Notes = strings.TrimSpace(m.Notes)
+ m.Organization = strings.TrimSpace(m.Organization)
+ m.PhoneFax = strings.TrimSpace(m.PhoneFax)
+ m.PhoneLandline = strings.TrimSpace(m.PhoneLandline)
+ m.PhoneMobile = strings.TrimSpace(m.PhoneMobile)
+
+ var err error
+ if !exists {
+ _, err = tx.Exec(ctx, `
+ INSERT INTO instance.ldap_attribute_login_meta (
+ ldap_id, department, email, location, name_display,
+ name_fore, name_sur, notes, organization, phone_fax,
+ phone_landline, phone_mobile
+ )
+ VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12)
+ `, ldapId, m.Department, m.Email, m.Location, m.NameDisplay,
+ m.NameFore, m.NameSur, m.Notes, m.Organization, m.PhoneFax,
+ m.PhoneLandline, m.PhoneMobile)
+ } else {
+ _, err = tx.Exec(ctx, `
+ UPDATE instance.ldap_attribute_login_meta
+ SET department = $1, email = $2, location = $3, name_display = $4,
+ name_fore = $5, name_sur = $6, notes = $7, organization = $8,
+ phone_fax = $9, phone_landline = $10, phone_mobile = $11
+ WHERE ldap_id = $12
+ `, m.Department, m.Email, m.Location, m.NameDisplay, m.NameFore,
+ m.NameSur, m.Notes, m.Organization, m.PhoneFax, m.PhoneLandline,
+ m.PhoneMobile, ldapId)
+ }
+ return err
+}
-func getRoles(ldapId int32) ([]types.LdapRole, error) {
+func getRoles_tx(ctx context.Context, tx pgx.Tx, ldapId int32) ([]types.LdapRole, error) {
roles := make([]types.LdapRole, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT role_id, group_dn
FROM instance.ldap_role
WHERE ldap_id = $1
+ ORDER BY group_dn
`, ldapId)
if err != nil {
return roles, err
diff --git a/ldap/ldap_import/ldap_import.go b/ldap/ldap_import/ldap_import.go
index 8f5127df..3b71ef13 100644
--- a/ldap/ldap_import/ldap_import.go
+++ b/ldap/ldap_import/ldap_import.go
@@ -5,14 +5,12 @@ import (
"errors"
"fmt"
"r3/cache"
- "r3/cluster"
"r3/config"
- "r3/db"
"r3/ldap/ldap_conn"
"r3/log"
"r3/login"
- "r3/tools"
"r3/types"
+ "slices"
"unicode/utf8"
goldap "github.com/go-ldap/ldap/v3"
@@ -22,14 +20,17 @@ import (
type loginType struct {
active bool
name string
+ meta types.LoginMeta
roleIds []uuid.UUID
}
-var msAdExtDisabledAtrFlags = []string{"514", "546", "66050",
- "66082", "262658", "262690", "328194", "328226"}
+var (
+ msAdExtDisabledAtrFlags = []string{"514", "546", "66050",
+ "66082", "262658", "262690", "328194", "328226"}
+ pageSize uint32 = 30
+)
func RunAll() error {
-
ldapIdMap := cache.GetLdapIdMap()
if len(ldapIdMap) != 0 && !config.GetLicenseActive() {
@@ -38,14 +39,14 @@ func RunAll() error {
}
for _, ldap := range ldapIdMap {
- if err := Run(ldap.Id); err != nil {
+ if err := run(ldap.Id); err != nil {
return err
}
}
return nil
}
-func Run(ldapId int32) error {
+func run(ldapId int32) error {
ldapConn, ldap, err := ldap_conn.ConnectAndBind(ldapId)
if err != nil {
@@ -61,9 +62,40 @@ func Run(ldapId int32) error {
attributes = append(attributes, "userAccountControl")
}
- // controls for paged requests
- pagingControl := goldap.NewControlPaging(30)
- controls := []goldap.Control{pagingControl}
+ // add login meta attributes if set
+ if ldap.LoginMetaAttributes.Department != "" {
+ attributes = append(attributes, ldap.LoginMetaAttributes.Department)
+ }
+ if ldap.LoginMetaAttributes.Email != "" {
+ attributes = append(attributes, ldap.LoginMetaAttributes.Email)
+ }
+ if ldap.LoginMetaAttributes.Location != "" {
+ attributes = append(attributes, ldap.LoginMetaAttributes.Location)
+ }
+ if ldap.LoginMetaAttributes.NameDisplay != "" {
+ attributes = append(attributes, ldap.LoginMetaAttributes.NameDisplay)
+ }
+ if ldap.LoginMetaAttributes.NameFore != "" {
+ attributes = append(attributes, ldap.LoginMetaAttributes.NameFore)
+ }
+ if ldap.LoginMetaAttributes.NameSur != "" {
+ attributes = append(attributes, ldap.LoginMetaAttributes.NameSur)
+ }
+ if ldap.LoginMetaAttributes.Notes != "" {
+ attributes = append(attributes, ldap.LoginMetaAttributes.Notes)
+ }
+ if ldap.LoginMetaAttributes.Organization != "" {
+ attributes = append(attributes, ldap.LoginMetaAttributes.Organization)
+ }
+ if ldap.LoginMetaAttributes.PhoneFax != "" {
+ attributes = append(attributes, ldap.LoginMetaAttributes.PhoneFax)
+ }
+ if ldap.LoginMetaAttributes.PhoneLandline != "" {
+ attributes = append(attributes, ldap.LoginMetaAttributes.PhoneLandline)
+ }
+ if ldap.LoginMetaAttributes.PhoneMobile != "" {
+ attributes = append(attributes, ldap.LoginMetaAttributes.PhoneMobile)
+ }
// MS AD: we have two choices to lookup nested groups
// 1. lookup memberships of user (member attribute with LDAP_MATCHING_RULE_IN_CHAIN)
@@ -111,6 +143,8 @@ func Run(ldapId int32) error {
}
// paged LDAP request
+ pagingControl := goldap.NewControlPaging(pageSize)
+ controls := []goldap.Control{pagingControl}
for {
log.Info("ldap", fmt.Sprintf("querying '%s': '%s' in '%s'",
ldap.Name, filters, ldap.SearchDn))
@@ -129,7 +163,7 @@ func Run(ldapId int32) error {
for _, entry := range response.Entries {
- // key attribute is used to uniquely identifiy an user
+ // key attribute is used to uniquely identify a user
// MS AD uses binary for some (like objectGUID), encode base64 if invalid UTF8
var key string
keyRaw := entry.GetRawAttributeValue(ldap.KeyAttribute)
@@ -149,14 +183,48 @@ func Run(ldapId int32) error {
if ldap.MsAdExt {
for _, value := range entry.GetAttributeValues("userAccountControl") {
- if tools.StringInSlice(value, msAdExtDisabledAtrFlags) {
+ if slices.Contains(msAdExtDisabledAtrFlags, value) {
l.active = false
}
}
}
+ if ldap.LoginMetaAttributes.Department != "" {
+ l.meta.Department = entry.GetAttributeValue(ldap.LoginMetaAttributes.Department)
+ }
+ if ldap.LoginMetaAttributes.Email != "" {
+ l.meta.Email = entry.GetAttributeValue(ldap.LoginMetaAttributes.Email)
+ }
+ if ldap.LoginMetaAttributes.Location != "" {
+ l.meta.Location = entry.GetAttributeValue(ldap.LoginMetaAttributes.Location)
+ }
+ if ldap.LoginMetaAttributes.NameDisplay != "" {
+ l.meta.NameDisplay = entry.GetAttributeValue(ldap.LoginMetaAttributes.NameDisplay)
+ }
+ if ldap.LoginMetaAttributes.NameFore != "" {
+ l.meta.NameFore = entry.GetAttributeValue(ldap.LoginMetaAttributes.NameFore)
+ }
+ if ldap.LoginMetaAttributes.NameSur != "" {
+ l.meta.NameSur = entry.GetAttributeValue(ldap.LoginMetaAttributes.NameSur)
+ }
+ if ldap.LoginMetaAttributes.Notes != "" {
+ l.meta.Notes = entry.GetAttributeValue(ldap.LoginMetaAttributes.Notes)
+ }
+ if ldap.LoginMetaAttributes.Organization != "" {
+ l.meta.Organization = entry.GetAttributeValue(ldap.LoginMetaAttributes.Organization)
+ }
+ if ldap.LoginMetaAttributes.PhoneFax != "" {
+ l.meta.PhoneFax = entry.GetAttributeValue(ldap.LoginMetaAttributes.PhoneFax)
+ }
+ if ldap.LoginMetaAttributes.PhoneLandline != "" {
+ l.meta.PhoneLandline = entry.GetAttributeValue(ldap.LoginMetaAttributes.PhoneLandline)
+ }
+ if ldap.LoginMetaAttributes.PhoneMobile != "" {
+ l.meta.PhoneMobile = entry.GetAttributeValue(ldap.LoginMetaAttributes.PhoneMobile)
+ }
+
// role ID is empty if just users are queried
- if ldap.AssignRoles && role.RoleId != uuid.Nil && !tools.UuidInSlice(role.RoleId, l.roleIds) {
+ if ldap.AssignRoles && role.RoleId != uuid.Nil && !slices.Contains(l.roleIds, role.RoleId) {
l.roleIds = append(l.roleIds, role.RoleId)
}
logins[key] = l
@@ -178,7 +246,9 @@ func Run(ldapId int32) error {
// import logins
for key, l := range logins {
- if err := importLogin(l, key, ldap); err != nil {
+ log.Info("ldap", fmt.Sprintf("processing login '%s' (key: %s, roles: %d)", l.name, key, len(l.roleIds)))
+
+ if err := login.SetLdapLogin(ldap, key, l.name, l.active, l.meta, l.roleIds); err != nil {
log.Warning("ldap", fmt.Sprintf("failed to import login '%s'", l.name), err)
continue
}
@@ -187,42 +257,3 @@ func Run(ldapId int32) error {
log.Info("ldap", fmt.Sprintf("finished login import for '%s'", ldap.Name))
return nil
}
-
-func importLogin(l loginType, key string, ldap types.Ldap) error {
-
- log.Info("ldap", fmt.Sprintf("importing login '%s' (key: %s, roles: %d)",
- l.name, key, len(l.roleIds)))
-
- tx, err := db.Pool.Begin(db.Ctx)
- if err != nil {
- return err
- }
- defer tx.Rollback(db.Ctx)
-
- loginId, changed, err := login.SetLdapLogin_tx(tx, ldap.Id, key, l.name,
- l.active, l.roleIds, ldap.LoginTemplateId, ldap.AssignRoles)
-
- if err != nil {
- return err
- }
-
- // commit before renewing access cache (to apply new permissions)
- if err := tx.Commit(db.Ctx); err != nil {
- return err
- }
-
- if changed {
- if l.active {
- if err := cluster.LoginReauthorized(true, loginId); err != nil {
- log.Warning("ldap", fmt.Sprintf("could not renew access permissions for '%s'",
- l.name), err)
- }
- } else {
- log.Info("ldap", fmt.Sprintf("user account '%s' is locked, kicking active sessions",
- l.name))
-
- cluster.LoginDisabled(true, loginId)
- }
- }
- return nil
-}
diff --git a/log/log.go b/log/log.go
index 23afc018..266ade06 100644
--- a/log/log.go
+++ b/log/log.go
@@ -1,24 +1,27 @@
package log
import (
+ "context"
"fmt"
"r3/db"
"r3/tools"
"r3/types"
"sync"
+ "sync/atomic"
"github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
"github.com/jackc/pgx/v5/pgtype"
)
var (
- access_mx = sync.Mutex{}
- debug = false
- nodeId = pgtype.UUID{} // ID of the current node
-
- outputCli bool // write logs also to command line
+ // simple options, accessible without lock
+ debug atomic.Bool
+ outputCli atomic.Bool // write logs also to command line
// log levels
+ access_mx = sync.RWMutex{}
+ nodeId = pgtype.UUID{} // ID of the current node
contextLevel = map[string]int{
"api": 1,
"backup": 1,
@@ -36,7 +39,7 @@ var (
}
)
-func Get(dateFrom pgtype.Int8, dateTo pgtype.Int8, limit int, offset int,
+func Get_tx(ctx context.Context, tx pgx.Tx, dateFrom pgtype.Int8, dateTo pgtype.Int8, limit int, offset int,
context string, byString string) ([]types.Log, int, error) {
logs := make([]types.Log, 0)
@@ -45,7 +48,7 @@ func Get(dateFrom pgtype.Int8, dateTo pgtype.Int8, limit int, offset int,
var qb tools.QueryBuilder
qb.UseDollarSigns()
qb.AddList("SELECT", []string{"l.level", "l.context", "l.message", "l.date_milli", "COALESCE(m.name,'-')", "n.name"})
- qb.Set("FROM", "instance.log AS l")
+ qb.SetFrom("instance.log AS l")
qb.Add("JOIN", "LEFT JOIN app.module AS m ON m.id = l.module_id")
qb.Add("JOIN", "LEFT JOIN instance_cluster.node AS n ON n.id = l.node_id")
@@ -73,33 +76,30 @@ func Get(dateFrom pgtype.Int8, dateTo pgtype.Int8, limit int, offset int,
}
qb.Add("ORDER", "l.date_milli DESC")
- qb.Set("OFFSET", offset)
- qb.Set("LIMIT", limit)
+ qb.SetOffset(offset)
+ qb.SetLimit(limit)
query, err := qb.GetQuery()
if err != nil {
return nil, 0, err
}
- rows, err := db.Pool.Query(db.Ctx, query, qb.GetParaValues()...)
+ rows, err := tx.Query(ctx, query, qb.GetParaValues()...)
if err != nil {
return nil, 0, err
}
+ defer rows.Close()
for rows.Next() {
var l types.Log
var dateMilli int64
- if err := rows.Scan(&l.Level, &l.Context, &l.Message,
- &dateMilli, &l.ModuleName, &l.NodeName); err != nil {
-
+ if err := rows.Scan(&l.Level, &l.Context, &l.Message, &dateMilli, &l.ModuleName, &l.NodeName); err != nil {
return nil, 0, err
}
-
l.Date = int64(dateMilli / 1000)
logs = append(logs, l)
}
- rows.Close()
// get total count
qb.UseDollarSigns()
@@ -114,23 +114,17 @@ func Get(dateFrom pgtype.Int8, dateTo pgtype.Int8, limit int, offset int,
return nil, 0, err
}
- if err := db.Pool.QueryRow(db.Ctx, query, qb.GetParaValues()...).Scan(&total); err != nil {
+ if err := tx.QueryRow(ctx, query, qb.GetParaValues()...).Scan(&total); err != nil {
return nil, 0, err
}
return logs, total, nil
}
func SetDebug(state bool) {
- access_mx.Lock()
- defer access_mx.Unlock()
-
- debug = state
+ debug.Store(state)
}
func SetOutputCli(state bool) {
- access_mx.Lock()
- defer access_mx.Unlock()
-
- outputCli = state
+ outputCli.Store(state)
}
func SetLogLevel(context string, level int) {
access_mx.Lock()
@@ -143,30 +137,32 @@ func SetLogLevel(context string, level int) {
}
func SetNodeId(id uuid.UUID) {
access_mx.Lock()
- defer access_mx.Unlock()
-
nodeId.Bytes = id
nodeId.Valid = true
+ access_mx.Unlock()
}
func Info(context string, message string) {
- write(3, context, message, nil)
+ go write(3, context, message, nil)
}
func Warning(context string, message string, err error) {
- write(2, context, message, err)
+ go write(2, context, message, err)
}
func Error(context string, message string, err error) {
- write(1, context, message, err)
+ go write(1, context, message, err)
}
-func write(level int, context string, message string, err error) {
+func write(level int, logContext string, message string, err error) {
+ access_mx.RLock()
+ nodeIdLocal := nodeId
+ levelActive, exists := contextLevel[logContext]
+ access_mx.RUnlock()
- levelActive, exists := contextLevel[context]
if !exists {
return
}
- if !debug && level > levelActive {
+ if !debug.Load() && level > levelActive {
return
}
@@ -180,8 +176,8 @@ func write(level int, context string, message string, err error) {
}
// log to CLI if available
- if outputCli {
- fmt.Printf("%s %s %s\n", tools.GetTimeSql(), context, message)
+ if outputCli.Load() {
+ fmt.Printf("%s %s %s\n", tools.GetTimeSql(), logContext, message)
}
// log to database if available
@@ -193,13 +189,16 @@ func write(level int, context string, message string, err error) {
message = message[:10000]
}
- if _, err := db.Pool.Exec(db.Ctx, `
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutLogWrite)
+ defer ctxCanc()
+
+ if _, err := db.Pool.Exec(ctx, `
INSERT INTO instance.log (level, context, message, date_milli, node_id)
VALUES ($1,$2,$3,$4,$5)
- `, level, context, message, tools.GetTimeUnixMilli(), nodeId); err != nil {
+ `, level, logContext, message, tools.GetTimeUnixMilli(), nodeIdLocal); err != nil {
// if database logging fails, output error to CLI if available
- if outputCli {
+ if outputCli.Load() {
fmt.Printf("failed to write log to DB, error: %v\n", err)
}
}
diff --git a/login/login.go b/login/login.go
index b24ab852..7bf7cbab 100644
--- a/login/login.go
+++ b/login/login.go
@@ -1,16 +1,20 @@
package login
import (
+ "context"
"errors"
"fmt"
"math/rand"
"r3/cache"
"r3/db"
"r3/handler"
+ "r3/log"
+ "r3/login/login_meta"
+ "r3/login/login_setting"
"r3/schema"
- "r3/setting"
"r3/tools"
"r3/types"
+ "slices"
"strconv"
"strings"
@@ -20,14 +24,48 @@ import (
)
// delete one login
-func Del_tx(tx pgx.Tx, id int64) error {
- _, err := tx.Exec(db.Ctx, `DELETE FROM instance.login WHERE id = $1`, id)
+func Del_tx(ctx context.Context, tx pgx.Tx, id int64) error {
+ // sync the deletion before removing the record, as its meta data must be retrieved one last time
+ syncLogin_tx(ctx, tx, "DELETED", id)
+
+ _, err := tx.Exec(ctx, `DELETE FROM instance.login WHERE id = $1`, id)
return err
}
+// delete all logins belonging to an LDAP connector
+func DelByLdap_tx(ctx context.Context, tx pgx.Tx, ldapId int32) error {
+
+ loginIds := make([]int64, 0)
+ rows, err := tx.Query(ctx, `
+ SELECT id
+ FROM instance.login
+ WHERE ldap_id = $1
+ `, ldapId)
+ if err != nil {
+ return err
+ }
+ defer rows.Close()
+
+ for rows.Next() {
+ var id int64
+ if err := rows.Scan(&id); err != nil {
+ return err
+ }
+ loginIds = append(loginIds, id)
+ }
+ rows.Close()
+
+ for _, id := range loginIds {
+ if err := Del_tx(ctx, tx, id); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
// get logins with meta data and total count
-func Get(byId int64, byString string, limit int, offset int,
- recordRequests []types.LoginAdminRecordGet) ([]types.LoginAdmin, int, error) {
+func Get_tx(ctx context.Context, tx pgx.Tx, byId int64, byString string, orderBy string, orderAsc bool, limit int, offset int,
+ meta bool, roles bool, recordRequests []types.LoginAdminRecordGet) ([]types.LoginAdmin, int, error) {
cache.Schema_mx.RLock()
defer cache.Schema_mx.RUnlock()
@@ -36,10 +74,10 @@ func Get(byId int64, byString string, limit int, offset int,
var qb tools.QueryBuilder
qb.UseDollarSigns()
- qb.AddList("SELECT", []string{"l.id", "l.ldap_id", "l.ldap_key",
- "l.name", "l.admin", "l.no_auth", "l.active"})
+ qb.AddList("SELECT", []string{"l.id", "l.ldap_id", "l.ldap_key", "l.name",
+ "l.admin", "l.limited", "l.no_auth", "l.active", "l.token_expiry_hours"})
- qb.Set("FROM", "instance.login AS l")
+ qb.SetFrom("instance.login AS l")
// resolve requests for login records (records connected to logins via login attribute)
parts := make([]string, 0)
@@ -68,6 +106,7 @@ func Get(byId int64, byString string, limit int, offset int,
qb.Add("SELECT", "NULL")
}
+ // prepare filters
if byString != "" {
qb.Add("WHERE", `l.name ILIKE {NAME}`)
qb.AddPara("{NAME}", fmt.Sprintf("%%%s%%", byString))
@@ -76,16 +115,37 @@ func Get(byId int64, byString string, limit int, offset int,
qb.AddPara("{ID}", byId)
}
- qb.Add("ORDER", "l.name ASC")
- qb.Set("LIMIT", limit)
- qb.Set("OFFSET", offset)
+ // prepare order, limit and offset
+ if byId == 0 {
+ var orderAscSql = "ASC"
+ if !orderAsc {
+ orderAscSql = "DESC"
+ }
+ switch orderBy {
+ case "admin":
+ qb.Add("ORDER", fmt.Sprintf("l.admin %s, l.name ASC", orderAscSql))
+ case "ldap":
+ qb.Add("ORDER", fmt.Sprintf("l.ldap_id %s, l.name ASC", orderAscSql))
+ case "noAuth":
+ qb.Add("ORDER", fmt.Sprintf("l.no_auth %s, l.name ASC", orderAscSql))
+ case "limited":
+ qb.Add("ORDER", fmt.Sprintf("l.limited %s, l.name ASC", orderAscSql))
+ case "active":
+ qb.Add("ORDER", fmt.Sprintf("l.active %s, l.name ASC", orderAscSql))
+ default:
+ qb.Add("ORDER", fmt.Sprintf("l.name %s", orderAscSql))
+ }
+
+ qb.SetLimit(limit)
+ qb.SetOffset(offset)
+ }
query, err := qb.GetQuery()
if err != nil {
return logins, 0, err
}
- rows, err := db.Pool.Query(db.Ctx, query, qb.GetParaValues()...)
+ rows, err := tx.Query(ctx, query, qb.GetParaValues()...)
if err != nil {
return logins, 0, err
}
@@ -94,8 +154,8 @@ func Get(byId int64, byString string, limit int, offset int,
var l types.LoginAdmin
var records []string
- if err := rows.Scan(&l.Id, &l.LdapId, &l.LdapKey, &l.Name,
- &l.Admin, &l.NoAuth, &l.Active, &records); err != nil {
+ if err := rows.Scan(&l.Id, &l.LdapId, &l.LdapKey, &l.Name, &l.Admin, &l.Limited,
+ &l.NoAuth, &l.Active, &l.TokenExpiryHours, &records); err != nil {
return logins, 0, err
}
@@ -126,23 +186,36 @@ func Get(byId int64, byString string, limit int, offset int,
}
rows.Close()
+ // collect meta data
+ if meta {
+ for i, l := range logins {
+ logins[i].Meta, err = login_meta.Get_tx(ctx, tx, l.Id)
+ if err != nil {
+ return logins, 0, err
+ }
+ }
+ }
+
// collect role IDs
- for i, l := range logins {
- logins[i].RoleIds, err = getRoleIds(l.Id)
- if err != nil {
- return logins, 0, err
+ if roles {
+ for i, l := range logins {
+ logins[i].RoleIds, err = getRoleIds_tx(ctx, tx, l.Id)
+ if err != nil {
+ return logins, 0, err
+ }
}
}
- // get total count
+ // return single login if requested
if byId != 0 {
return logins, 1, nil
}
+ // get total count
var qb_cnt tools.QueryBuilder
qb_cnt.UseDollarSigns()
qb_cnt.AddList("SELECT", []string{"COUNT(*)"})
- qb_cnt.Set("FROM", "instance.login")
+ qb_cnt.SetFrom("instance.login")
if byString != "" {
qb_cnt.Add("WHERE", `name ILIKE {NAME}`)
@@ -155,7 +228,7 @@ func Get(byId int64, byString string, limit int, offset int,
}
var total int
- if err := db.Pool.QueryRow(db.Ctx, query_cnt, qb_cnt.GetParaValues()...).Scan(&total); err != nil {
+ if err := tx.QueryRow(ctx, query_cnt, qb_cnt.GetParaValues()...).Scan(&total); err != nil {
return logins, 0, err
}
return logins, total, nil
@@ -163,21 +236,22 @@ func Get(byId int64, byString string, limit int, offset int,
// set login with meta data
// returns created login ID if new login
-func Set_tx(tx pgx.Tx, id int64, loginTemplateId pgtype.Int8, ldapId pgtype.Int4,
- ldapKey pgtype.Text, name string, pass string, admin bool, noAuth bool,
- active bool, roleIds []uuid.UUID, records []types.LoginAdminRecordSet) (int64, error) {
+func Set_tx(ctx context.Context, tx pgx.Tx, id int64, loginTemplateId pgtype.Int8, ldapId pgtype.Int4,
+ ldapKey pgtype.Text, name string, pass string, admin bool, noAuth bool, active bool,
+ tokenExpiryHours pgtype.Int4, meta types.LoginMeta, roleIds []uuid.UUID, records []types.LoginAdminRecordSet) (int64, error) {
if name == "" {
return 0, errors.New("name must not be empty")
}
- name = strings.ToLower(name) // usernames are case insensitive
- isNew := id == 0 // ID 0 is new login
+ name = strings.ToLower(name) // usernames are case insensitive
+ isNew := id == 0 // ID 0 is new login
+ isLimited := len(roleIds) < 2 && !admin && !noAuth // limited logins have at most 1 role, cannot be admin or without authentication
if !isNew {
// check for existing login
var temp string
- err := tx.QueryRow(db.Ctx, `SELECT name FROM instance.login WHERE id = $1`, id).Scan(&temp)
+ err := tx.QueryRow(ctx, `SELECT name FROM instance.login WHERE id = $1`, id).Scan(&temp)
if err == pgx.ErrNoRows {
return 0, fmt.Errorf("no login with ID %d", id)
}
@@ -187,27 +261,19 @@ func Set_tx(tx pgx.Tx, id int64, loginTemplateId pgtype.Int8, ldapId pgtype.Int4
}
// generate password hash, if password was provided
- var salt, hash = pgtype.Text{}, pgtype.Text{}
- var saltKdf = tools.RandStringRunes(16)
-
- if pass != "" {
- salt.String = tools.RandStringRunes(32)
- salt.Valid = true
-
- hash.String = tools.Hash(salt.String + pass)
- hash.Valid = true
- }
+ salt, hash := GenerateSaltHash(pass)
+ saltKdf := tools.RandStringRunes(16)
if isNew {
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
INSERT INTO instance.login (
- ldap_id, ldap_key, name, salt, hash,
- salt_kdf, admin, no_auth, active
+ ldap_id, ldap_key, name, salt, hash, salt_kdf, admin,
+ no_auth, limited, active, token_expiry_hours, date_favorites
)
- VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9)
+ VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,0)
RETURNING id
- `, ldapId, ldapKey, name, &salt, &hash, saltKdf,
- admin, noAuth, active).Scan(&id); err != nil {
+ `, ldapId, ldapKey, name, &salt, &hash, saltKdf, admin, noAuth,
+ isLimited, active, tokenExpiryHours).Scan(&id); err != nil {
return 0, err
}
@@ -215,7 +281,7 @@ func Set_tx(tx pgx.Tx, id int64, loginTemplateId pgtype.Int8, ldapId pgtype.Int4
// apply default login settings from login template
if !loginTemplateId.Valid {
// get GLOBAL template
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT id
FROM instance.login_template
WHERE name = 'GLOBAL'
@@ -223,34 +289,38 @@ func Set_tx(tx pgx.Tx, id int64, loginTemplateId pgtype.Int8, ldapId pgtype.Int4
return 0, err
}
}
- s, err := setting.Get(pgtype.Int8{}, loginTemplateId)
+ s, err := login_setting.Get_tx(ctx, tx, pgtype.Int8{}, loginTemplateId)
if err != nil {
return 0, err
}
- if err := setting.Set_tx(tx, pgtype.Int8{Int64: id, Valid: true}, pgtype.Int8{}, s, true); err != nil {
+ if err := login_setting.Set_tx(ctx, tx, pgtype.Int8{Int64: id, Valid: true}, pgtype.Int8{}, s, true); err != nil {
return 0, err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE instance.login
SET ldap_id = $1, ldap_key = $2, name = $3, admin = $4,
- no_auth = $5, active = $6
- WHERE id = $7
- `, ldapId, ldapKey, name, admin, noAuth, active, id); err != nil {
+ no_auth = $5, limited = $6, active = $7, token_expiry_hours = $8
+ WHERE id = $9
+ `, ldapId, ldapKey, name, admin, noAuth, isLimited, active, tokenExpiryHours, id); err != nil {
return 0, err
}
if pass != "" {
- if _, err := tx.Exec(db.Ctx, `
- UPDATE instance.login
- SET salt = $1, hash = $2
- WHERE id = $3
- `, &salt, &hash, id); err != nil {
+ if err := SetSaltHash_tx(ctx, tx, salt, hash, id); err != nil {
return 0, err
}
}
}
+ // set meta data
+ if err := login_meta.Set_tx(ctx, tx, id, meta); err != nil {
+ return 0, err
+ }
+
+ // execute login sync
+ syncLogin_tx(ctx, tx, "UPDATED", id)
+
// set records
for _, record := range records {
@@ -262,7 +332,7 @@ func Set_tx(tx pgx.Tx, id int64, loginTemplateId pgtype.Int8, ldapId pgtype.Int4
mod := cache.ModuleIdMap[rel.ModuleId]
if !isNew {
// remove old record (first to free up unique index)
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
UPDATE "%s"."%s"
SET "%s" = null
WHERE "%s" = $1
@@ -272,7 +342,7 @@ func Set_tx(tx pgx.Tx, id int64, loginTemplateId pgtype.Int8, ldapId pgtype.Int4
}
// set new record
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
UPDATE "%s"."%s"
SET "%s" = $1
WHERE "%s" = $2
@@ -282,14 +352,24 @@ func Set_tx(tx pgx.Tx, id int64, loginTemplateId pgtype.Int8, ldapId pgtype.Int4
}
// set roles
- return id, setRoleIds_tx(tx, id, roleIds)
+ return id, setRoleIds_tx(ctx, tx, id, roleIds)
+}
+
+func SetSaltHash_tx(ctx context.Context, tx pgx.Tx, salt pgtype.Text, hash pgtype.Text, id int64) error {
+ _, err := tx.Exec(ctx, `
+ UPDATE instance.login
+ SET salt = $1, hash = $2
+ WHERE id = $3
+ `, &salt, &hash, id)
+
+ return err
}
// get login to role memberships
-func GetByRole(roleId uuid.UUID) ([]types.Login, error) {
+func GetByRole_tx(ctx context.Context, tx pgx.Tx, roleId uuid.UUID) ([]types.Login, error) {
logins := make([]types.Login, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT id, name
FROM instance.login
WHERE active
@@ -317,14 +397,14 @@ func GetByRole(roleId uuid.UUID) ([]types.Login, error) {
// get names for public lookups for non-admins
// returns slice of up to 10 logins
-func GetNames(id int64, idsExclude []int64, byString string, noLdapAssign bool) ([]types.Login, error) {
+func GetNames_tx(ctx context.Context, tx pgx.Tx, id int64, idsExclude []int64, byString string, noLdapAssign bool) ([]types.Login, error) {
names := make([]types.Login, 0)
var qb tools.QueryBuilder
qb.UseDollarSigns()
qb.AddList("SELECT", []string{"id", "name"})
- qb.Set("FROM", "instance.login")
+ qb.SetFrom("instance.login")
if id != 0 {
qb.Add("WHERE", `id = {ID}`)
@@ -353,14 +433,14 @@ func GetNames(id int64, idsExclude []int64, byString string, noLdapAssign bool)
}
qb.Add("ORDER", "name ASC")
- qb.Set("LIMIT", 10)
+ qb.SetLimit(10)
query, err := qb.GetQuery()
if err != nil {
return names, err
}
- rows, err := db.Pool.Query(db.Ctx, query, qb.GetParaValues()...)
+ rows, err := tx.Query(ctx, query, qb.GetParaValues()...)
if err != nil {
return names, err
}
@@ -377,18 +457,18 @@ func GetNames(id int64, idsExclude []int64, byString string, noLdapAssign bool)
}
// user creatable fixed (permanent) tokens for less sensitive access permissions
-func DelTokenFixed(loginId int64, id int64) error {
- _, err := db.Pool.Exec(db.Ctx, `
+func DelTokenFixed_tx(ctx context.Context, tx pgx.Tx, loginId int64, id int64) error {
+ _, err := tx.Exec(ctx, `
DELETE FROM instance.login_token_fixed
WHERE login_id = $1
AND id = $2
`, loginId, id)
return err
}
-func GetTokensFixed(loginId int64) ([]types.LoginTokenFixed, error) {
+func GetTokensFixed_tx(ctx context.Context, tx pgx.Tx, loginId int64) ([]types.LoginTokenFixed, error) {
tokens := make([]types.LoginTokenFixed, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT id, name, context, token, date_create
FROM instance.login_token_fixed
WHERE login_id = $1
@@ -410,11 +490,11 @@ func GetTokensFixed(loginId int64) ([]types.LoginTokenFixed, error) {
}
return tokens, nil
}
-func SetTokenFixed_tx(tx pgx.Tx, loginId int64, name string, context string) (string, error) {
+func SetTokenFixed_tx(ctx context.Context, tx pgx.Tx, loginId int64, name string, context string) (string, error) {
min, max := 32, 48
tokenFixed := tools.RandStringRunes(rand.Intn(max-min+1) + min)
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO instance.login_token_fixed (login_id,token,name,context,date_create)
VALUES ($1,$2,$3,$4,$5)
`, loginId, tokenFixed, name, context, tools.GetTimeUnix()); err != nil {
@@ -425,25 +505,28 @@ func SetTokenFixed_tx(tx pgx.Tx, loginId int64, name string, context string) (st
// create new admin user
func CreateAdmin(username string, password string) error {
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutSysTask)
+ defer ctxCanc()
- tx, err := db.Pool.Begin(db.Ctx)
+ tx, err := db.Pool.Begin(ctx)
if err != nil {
return err
}
- defer tx.Rollback(db.Ctx)
+ defer tx.Rollback(ctx)
- if _, err := Set_tx(tx, 0, pgtype.Int8{}, pgtype.Int4{}, pgtype.Text{},
- username, password, true, false, true, []uuid.UUID{},
- []types.LoginAdminRecordSet{}); err != nil {
+ if _, err := Set_tx(ctx, tx, 0, pgtype.Int8{}, pgtype.Int4{}, pgtype.Text{},
+ username, password, true, false, true, pgtype.Int4{},
+ types.LoginMeta{NameFore: "Admin", NameSur: "User", NameDisplay: username},
+ []uuid.UUID{}, []types.LoginAdminRecordSet{}); err != nil {
return err
}
- return tx.Commit(db.Ctx)
+ return tx.Commit(ctx)
}
// reset all TOTP keys
-func ResetTotp_tx(tx pgx.Tx, loginId int64) error {
- _, err := db.Pool.Exec(db.Ctx, `
+func ResetTotp_tx(ctx context.Context, tx pgx.Tx, loginId int64) error {
+ _, err := tx.Exec(ctx, `
DELETE FROM instance.login_token_fixed
WHERE login_id = $1
AND context = 'totp'
@@ -451,69 +534,40 @@ func ResetTotp_tx(tx pgx.Tx, loginId int64) error {
return err
}
-// updates internal login backend with logins from LDAP
-// uses unique key value to update login record
-// can optionally update login roles
-// returns login ID and whether login needed to be changed
-func SetLdapLogin_tx(tx pgx.Tx, ldapId int32, ldapKey string, ldapName string,
- ldapActive bool, ldapRoleIds []uuid.UUID, loginTemplateId pgtype.Int8,
- updateRoles bool) (int64, bool, error) {
-
- // existing login details
- var id int64
- var nameEx string
- var roleIds []uuid.UUID
- var admin, active bool
-
- // get login details and check whether roles could be updated
- var rolesEqual pgtype.Bool
-
- err := tx.QueryRow(db.Ctx, `
- SELECT r1.id, r1.name, r1.admin, r1.active, r1.roles,
- (r1.roles <@ r2.roles AND r1.roles @> r2.roles) AS equal
- FROM (
- SELECT *, (
- SELECT ARRAY_AGG(lr.role_id)
- FROM instance.login_role AS lr
- WHERE lr.login_id = l.id
- ) AS roles
- FROM instance.login AS l
- WHERE l.ldap_id = $1::integer
- AND l.ldap_key = $2::text
- ) AS r1
-
- INNER JOIN (
- SELECT $3::uuid[] AS roles
- ) AS r2 ON true
- `, ldapId, ldapKey, ldapRoleIds).Scan(&id, &nameEx,
- &admin, &active, &roleIds, &rolesEqual)
-
- if err != nil && err != pgx.ErrNoRows {
- return 0, false, err
- }
-
- // create if new
- // update if name, active state or roles changed
- newLogin := err == pgx.ErrNoRows
- rolesNeedUpdate := updateRoles && !rolesEqual.Bool
-
- if newLogin || nameEx != ldapName || active != ldapActive || rolesNeedUpdate {
-
- ldapIdSql := pgtype.Int4{Int32: ldapId, Valid: true}
- ldapKeySql := pgtype.Text{String: ldapKey, Valid: true}
-
- if rolesNeedUpdate {
- roleIds = ldapRoleIds
- }
- if newLogin {
- active = true
+func GenerateSaltHash(pw string) (salt pgtype.Text, hash pgtype.Text) {
+ if pw != "" {
+ salt.String = tools.RandStringRunes(32)
+ salt.Valid = true
+ hash.String = tools.Hash(salt.String + pw)
+ hash.Valid = true
+ }
+ return salt, hash
+}
+
+// call the login sync function of every module that defines one, to inform it about changed login meta data
+func syncLogin_tx(ctx context.Context, tx pgx.Tx, action string, id int64) {
+ logContext := "server"
+ logErr := "failed to execute user sync"
+
+ if !slices.Contains([]string{"DELETED", "UPDATED"}, action) {
+ log.Error(logContext, logErr, fmt.Errorf("unknown action '%s'", action))
+ return
+ }
+
+ cache.Schema_mx.RLock()
+ for _, mod := range cache.ModuleIdMap {
+ if !mod.PgFunctionIdLoginSync.Valid {
+ continue
}
- _, err = Set_tx(tx, id, loginTemplateId, ldapIdSql, ldapKeySql,
- ldapName, "", admin, false, ldapActive, roleIds,
- []types.LoginAdminRecordSet{})
+ fnc, exists := cache.PgFunctionIdMap[mod.PgFunctionIdLoginSync.Bytes]
+ if !exists {
+ continue
+ }
- return id, true, err
+ if _, err := tx.Exec(ctx, `SELECT instance.user_sync($1,$2,$3,$4)`, mod.Name, fnc.Name, id, action); err != nil {
+ log.Error(logContext, logErr, err)
+ }
}
- return id, false, nil
+ cache.Schema_mx.RUnlock()
}
diff --git a/login/login_auth/login_auth.go b/login/login_auth/login_auth.go
index 5ea79324..c5704590 100644
--- a/login/login_auth/login_auth.go
+++ b/login/login_auth/login_auth.go
@@ -1,16 +1,20 @@
package login_auth
import (
+ "context"
"database/sql"
"encoding/base32"
"errors"
"fmt"
+ "r3/cache"
"r3/config"
"r3/db"
"r3/handler"
"r3/ldap/ldap_auth"
+ "r3/login/login_session"
"r3/tools"
"r3/types"
+ "slices"
"strings"
"time"
@@ -35,11 +39,16 @@ func authCheckSystemMode(admin bool) error {
return nil
}
-func createToken(loginId int64, username string, admin bool, noAuth bool) (string, error) {
+func createToken(loginId int64, username string, admin bool, noAuth bool, tokenExpiryHours pgtype.Int4) (string, error) {
// token is valid for multiple days, if user decides to stay logged in
now := time.Now()
- expiryHoursTime := time.Duration(int64(config.GetUint64("tokenExpiryHours")))
+ var expiryHoursTime time.Duration
+ if tokenExpiryHours.Valid {
+ expiryHoursTime = time.Duration(int64(tokenExpiryHours.Int32))
+ } else {
+ expiryHoursTime = time.Duration(int64(config.GetUint64("tokenExpiryHours")))
+ }
token, err := jwt.Sign(tokenPayload{
Payload: jwt.Payload{
@@ -55,15 +64,15 @@ func createToken(loginId int64, username string, admin bool, noAuth bool) (strin
return string(token), err
}
-// performs authentication attempt for user by using username and password
-// returns JWT, KDF salt, MFA token list (if MFA is required)
-func User(username string, password string, mfaTokenId pgtype.Int4,
+// performs authentication attempt for user by using username, password and MFA PINs (if used)
+// returns login name, JWT, KDF salt, MFA token list (if MFA is required)
+func User(ctx context.Context, username string, password string, mfaTokenId pgtype.Int4,
mfaTokenPin pgtype.Text, grantLoginId *int64, grantAdmin *bool,
- grantNoAuth *bool) (string, string, []types.LoginMfaToken, error) {
+ grantNoAuth *bool) (string, string, string, []types.LoginMfaToken, error) {
mfaTokens := make([]types.LoginMfaToken, 0)
if username == "" {
- return "", "", mfaTokens, errors.New("username not given")
+ return "", "", "", mfaTokens, errors.New("username not given")
}
// usernames are case insensitive
@@ -75,45 +84,51 @@ func User(username string, password string, mfaTokenId pgtype.Int4,
var hash sql.NullString
var saltKdf string
var admin bool
+ var limited bool
var noAuth bool
-
- err := db.Pool.QueryRow(db.Ctx, `
- SELECT id, ldap_id, salt, hash, salt_kdf, admin, no_auth
- FROM instance.login
- WHERE active
- AND name = $1
- `, username).Scan(&loginId, &ldapId, &salt, &hash, &saltKdf, &admin, &noAuth)
+ var nameDisplay pgtype.Text
+ var tokenExpiryHours pgtype.Int4
+
+ err := db.Pool.QueryRow(ctx, `
+ SELECT l.id, l.ldap_id, l.salt, l.hash, l.salt_kdf, l.admin,
+ l.no_auth, l.limited, l.token_expiry_hours, lm.name_display
+ FROM instance.login AS l
+ LEFT JOIN instance.login_meta AS lm ON lm.login_id = l.id
+ WHERE l.active
+ AND l.name = $1
+ `, username).Scan(&loginId, &ldapId, &salt, &hash, &saltKdf, &admin,
+ &noAuth, &limited, &tokenExpiryHours, &nameDisplay)
if err != nil && err != pgx.ErrNoRows {
- return "", "", mfaTokens, err
+ return "", "", "", mfaTokens, err
}
// username not found / user inactive must result in same response as authentication failed
// otherwise we can probe the system for valid user names
if err == pgx.ErrNoRows {
- return "", "", mfaTokens, errors.New(handler.ErrAuthFailed)
+ return "", "", "", mfaTokens, errors.New(handler.ErrAuthFailed)
}
if !noAuth && password == "" {
- return "", "", mfaTokens, errors.New("password not given")
+ return "", "", "", mfaTokens, errors.New("password not given")
}
if !noAuth {
if ldapId.Valid {
// authentication against LDAP
if err := ldap_auth.Check(ldapId.Int32, username, password); err != nil {
- return "", "", mfaTokens, errors.New(handler.ErrAuthFailed)
+ return "", "", "", mfaTokens, errors.New(handler.ErrAuthFailed)
}
} else {
// authentication against stored hash
if !hash.Valid || !salt.Valid || hash.String != tools.Hash(salt.String+password) {
- return "", "", mfaTokens, errors.New(handler.ErrAuthFailed)
+ return "", "", "", mfaTokens, errors.New(handler.ErrAuthFailed)
}
}
}
if err := authCheckSystemMode(admin); err != nil {
- return "", "", mfaTokens, err
+ return "", "", "", mfaTokens, err
}
// login ok
@@ -122,38 +137,38 @@ func User(username string, password string, mfaTokenId pgtype.Int4,
// validate provided MFA token
var mfaToken []byte
- if err := db.Pool.QueryRow(db.Ctx, `
+ if err := db.Pool.QueryRow(ctx, `
SELECT token
FROM instance.login_token_fixed
WHERE login_id = $1
AND id = $2
AND context = 'totp'
`, loginId, mfaTokenId.Int32).Scan(&mfaToken); err != nil {
- return "", "", mfaTokens, err
+ return "", "", "", mfaTokens, err
}
if mfaTokenPin.String != gotp.NewDefaultTOTP(base32.StdEncoding.WithPadding(
base32.NoPadding).EncodeToString(mfaToken)).Now() {
- return "", "", mfaTokens, errors.New(handler.ErrAuthFailed)
+ return "", "", "", mfaTokens, errors.New(handler.ErrAuthFailed)
}
} else {
// get available MFA tokens
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := db.Pool.Query(ctx, `
SELECT id, name
FROM instance.login_token_fixed
WHERE login_id = $1
AND context = 'totp'
`, loginId)
if err != nil {
- return "", "", mfaTokens, err
+ return "", "", "", mfaTokens, err
}
for rows.Next() {
var m types.LoginMfaToken
if err := rows.Scan(&m.Id, &m.Name); err != nil {
- return "", "", mfaTokens, err
+ return "", "", "", mfaTokens, err
}
mfaTokens = append(mfaTokens, m)
}
@@ -161,88 +176,114 @@ func User(username string, password string, mfaTokenId pgtype.Int4,
// MFA tokens available, return with list
if len(mfaTokens) != 0 {
- return "", "", mfaTokens, nil
+ return "", "", "", mfaTokens, nil
}
}
// create session token
- token, err := createToken(loginId, username, admin, noAuth)
+ token, err := createToken(loginId, username, admin, noAuth, tokenExpiryHours)
if err != nil {
- return "", "", mfaTokens, err
+ return "", "", "", mfaTokens, err
}
// everything in order, auth successful
+ if err := cache.LoadAccessIfUnknown(loginId); err != nil {
+ return "", "", "", mfaTokens, err
+ }
+ if err := login_session.CheckConcurrentAccess(limited, loginId, admin); err != nil {
+ return "", "", "", mfaTokens, err
+ }
*grantLoginId = loginId
*grantAdmin = admin
*grantNoAuth = noAuth
- return token, saltKdf, mfaTokens, nil
+
+ if nameDisplay.Valid && nameDisplay.String != "" {
+ return nameDisplay.String, token, saltKdf, mfaTokens, nil
+ }
+ return username, token, saltKdf, mfaTokens, nil
}
// performs authentication attempt for user by using existing JWT token, signed by server
-// returns username
-func Token(token string, grantLoginId *int64, grantAdmin *bool, grantNoAuth *bool) (string, error) {
+// returns login name and language code
+func Token(ctx context.Context, token string, grantLoginId *int64, grantAdmin *bool, grantNoAuth *bool) (string, string, error) {
if token == "" {
- return "", errors.New("empty token")
+ return "", "", errors.New("empty token")
}
var tp tokenPayload
if _, err := jwt.Verify([]byte(token), config.GetTokenSecret(), &tp); err != nil {
- return "", err
+ return "", "", err
}
+ // token expiration time reached
if tools.GetTimeUnix() > tp.ExpirationTime.Unix() {
- return "", errors.New("token expired")
+ return "", "", errors.New("token expired")
}
if err := authCheckSystemMode(tp.Admin); err != nil {
- return "", err
+ return "", "", err
}
// check if login is active
- active := false
- name := ""
-
- if err := db.Pool.QueryRow(db.Ctx, `
- SELECT name, active
- FROM instance.login
- WHERE id = $1
- `, tp.LoginId).Scan(&name, &active); err != nil {
- return "", err
+ var active bool
+ var name string
+ var nameDisplay pgtype.Text
+ var languageCode string
+ var limited bool
+
+ if err := db.Pool.QueryRow(ctx, `
+ SELECT l.name, lm.name_display, l.active, l.limited, s.language_code
+ FROM instance.login AS l
+ JOIN instance.login_setting AS s ON s.login_id = l.id
+ LEFT JOIN instance.login_meta AS lm ON lm.login_id = l.id
+ WHERE l.id = $1
+ `, tp.LoginId).Scan(&name, &nameDisplay, &active, &limited, &languageCode); err != nil {
+ return "", "", err
}
if !active {
- return "", errors.New("login inactive")
+ return "", "", errors.New("login inactive")
+ }
+ if nameDisplay.Valid && nameDisplay.String != "" {
+ name = nameDisplay.String
}
// everything in order, auth successful
+ if err := cache.LoadAccessIfUnknown(tp.LoginId); err != nil {
+ return "", "", err
+ }
+ if err := login_session.CheckConcurrentAccess(limited, tp.LoginId, tp.Admin); err != nil {
+ return "", "", err
+ }
*grantLoginId = tp.LoginId
*grantAdmin = tp.Admin
*grantNoAuth = tp.NoAuth
- return name, nil
+ return name, languageCode, nil
}
// performs authentication for user by using fixed (permanent) token
// used for application access (like ICS download or fat-client access)
// cannot grant admin access
-func TokenFixed(loginId int64, context string, tokenFixed string, grantLanguageCode *string, grantToken *string) error {
+// returns login language code
+func TokenFixed(ctx context.Context, loginId int64, context string, tokenFixed string, grantToken *string) (string, error) {
if tokenFixed == "" {
- return errors.New("empty token")
+ return "", errors.New("empty token")
}
// only specific contexts may be used for token authentication
- if !tools.StringInSlice(context, []string{"client", "ics"}) {
- return fmt.Errorf("invalid token authentication context '%s'", context)
+ if !slices.Contains([]string{"client", "ics"}, context) {
+ return "", fmt.Errorf("invalid token authentication context '%s'", context)
}
// check for existing token
- languageCode := ""
- username := ""
- err := db.Pool.QueryRow(db.Ctx, `
+ var languageCode string
+ var username string
+ err := db.Pool.QueryRow(ctx, `
SELECT s.language_code, l.name
FROM instance.login_token_fixed AS t
- INNER JOIN instance.login_setting AS s ON s.login_id = t.login_id
- INNER JOIN instance.login AS l ON l.id = t.login_id
+ JOIN instance.login_setting AS s ON s.login_id = t.login_id
+ JOIN instance.login AS l ON l.id = t.login_id
WHERE t.login_id = $1
AND t.context = $2
AND t.token = $3
@@ -250,14 +291,16 @@ func TokenFixed(loginId int64, context string, tokenFixed string, grantLanguageC
`, loginId, context, tokenFixed).Scan(&languageCode, &username)
if err == pgx.ErrNoRows {
- return errors.New("login inactive")
+ return "", errors.New("login inactive or token invalid")
}
if err != nil {
- return err
+ return "", err
}
// everything in order, auth successful
- *grantLanguageCode = languageCode
- *grantToken, err = createToken(loginId, username, false, false)
- return err
+ if err := cache.LoadAccessIfUnknown(loginId); err != nil {
+ return "", err
+ }
+ *grantToken, err = createToken(loginId, username, false, false, pgtype.Int4{})
+ return languageCode, err
}
diff --git a/password/password.go b/login/login_check/login_check.go
similarity index 56%
rename from password/password.go
rename to login/login_check/login_check.go
index 146e1db9..fb454428 100644
--- a/password/password.go
+++ b/login/login_check/login_check.go
@@ -1,9 +1,9 @@
-package password
+package login_check
import (
+ "context"
"fmt"
"r3/config"
- "r3/db"
"r3/tools"
"regexp"
@@ -11,19 +11,11 @@ import (
"github.com/jackc/pgx/v5/pgtype"
)
-// change login password
-// returns success/error codes in expected problem cases
-func Set_tx(tx pgx.Tx, loginId int64, pwOld string, pwNew0 string, pwNew1 string) error {
-
- if pwOld == "" || pwNew0 == "" || pwNew0 != pwNew1 {
- return fmt.Errorf("invalid input")
- }
-
+func Password(ctx context.Context, tx pgx.Tx, loginId int64, pwOld string) error {
var salt, hash string
var ldapId pgtype.Int4
- // validate current password
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT salt, hash, ldap_id
FROM instance.login
WHERE active
@@ -35,17 +27,19 @@ func Set_tx(tx pgx.Tx, loginId int64, pwOld string, pwNew0 string, pwNew1 string
if ldapId.Valid {
return fmt.Errorf("cannot set password for LDAP login")
}
-
if hash != tools.Hash(salt+pwOld) {
return fmt.Errorf("PW_CURRENT_WRONG")
}
+ return nil
+}
- // password complexity rules
- if len(pwNew0) < int(config.GetUint64("pwLengthMin")) {
+func PasswordComplexity(pw string) error {
+
+ if len(pw) < int(config.GetUint64("pwLengthMin")) {
return fmt.Errorf("PW_TOO_SHORT")
}
if config.GetUint64("pwForceDigit") == 1 {
- match, err := regexp.MatchString(`\p{Nd}`, pwNew0)
+ match, err := regexp.MatchString(`\p{Nd}`, pw)
if err != nil {
return err
}
@@ -55,7 +49,7 @@ func Set_tx(tx pgx.Tx, loginId int64, pwOld string, pwNew0 string, pwNew1 string
}
}
if config.GetUint64("pwForceLower") == 1 {
- match, err := regexp.MatchString(`\p{Ll}`, pwNew0)
+ match, err := regexp.MatchString(`\p{Ll}`, pw)
if err != nil {
return err
}
@@ -65,7 +59,7 @@ func Set_tx(tx pgx.Tx, loginId int64, pwOld string, pwNew0 string, pwNew1 string
}
}
if config.GetUint64("pwForceUpper") == 1 {
- match, err := regexp.MatchString(`\p{Lu}`, pwNew0)
+ match, err := regexp.MatchString(`\p{Lu}`, pw)
if err != nil {
return err
}
@@ -77,7 +71,7 @@ func Set_tx(tx pgx.Tx, loginId int64, pwOld string, pwNew0 string, pwNew1 string
if config.GetUint64("pwForceSpecial") == 1 {
// Punctuation P, Mark M (accents etc.), Symbol S, Separator Z
- match, err := regexp.MatchString(`[\p{P}\p{M}\p{S}\p{Z}]`, pwNew0)
+ match, err := regexp.MatchString(`[\p{P}\p{M}\p{S}\p{Z}]`, pw)
if err != nil {
return err
}
@@ -86,17 +80,5 @@ func Set_tx(tx pgx.Tx, loginId int64, pwOld string, pwNew0 string, pwNew1 string
return fmt.Errorf("PW_REQUIRES_SPECIAL")
}
}
-
- // update password
- salt = tools.RandStringRunes(32)
- hash = tools.Hash(salt + pwNew0)
-
- if _, err := tx.Exec(db.Ctx, `
- UPDATE instance.login
- SET salt = $1, hash = $2
- WHERE id = $3
- `, salt, hash, loginId); err != nil {
- return err
- }
return nil
}
diff --git a/login/login_clientEvent/login_clientEvent.go b/login/login_clientEvent/login_clientEvent.go
new file mode 100644
index 00000000..843f75b1
--- /dev/null
+++ b/login/login_clientEvent/login_clientEvent.go
@@ -0,0 +1,74 @@
+package login_clientEvent
+
+import (
+ "context"
+ "r3/types"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+)
+
+func Del_tx(ctx context.Context, tx pgx.Tx, loginId int64, clientEventId uuid.UUID) error {
+ _, err := tx.Exec(ctx, `
+ DELETE FROM instance.login_client_event
+ WHERE login_id = $1
+ AND client_event_id = $2
+ `, loginId, clientEventId)
+ return err
+}
+
+func Get_tx(ctx context.Context, tx pgx.Tx, loginId int64) (map[uuid.UUID]types.LoginClientEvent, error) {
+ lceIdMap := make(map[uuid.UUID]types.LoginClientEvent)
+
+ rows, err := tx.Query(ctx, `
+ SELECT client_event_id, hotkey_modifier1, hotkey_modifier2, hotkey_char
+ FROM instance.login_client_event
+ WHERE login_id = $1
+ `, loginId)
+ if err != nil {
+ return lceIdMap, err
+ }
+ defer rows.Close()
+
+ for rows.Next() {
+ var ceId uuid.UUID
+ var lce types.LoginClientEvent
+ if err := rows.Scan(&ceId, &lce.HotkeyModifier1, &lce.HotkeyModifier2, &lce.HotkeyChar); err != nil {
+ return lceIdMap, err
+ }
+ lceIdMap[ceId] = lce
+ }
+ return lceIdMap, nil
+}
+
+func Set_tx(ctx context.Context, tx pgx.Tx, loginId int64, clientEventId uuid.UUID, lce types.LoginClientEvent) error {
+ exists := false
+
+ if err := tx.QueryRow(ctx, `
+ SELECT EXISTS(
+ SELECT client_event_id
+ FROM instance.login_client_event
+ WHERE login_id = $1
+ AND client_event_id = $2
+ )
+ `, loginId, clientEventId).Scan(&exists); err != nil {
+ return err
+ }
+
+ var err error
+ if exists {
+ _, err = tx.Exec(ctx, `
+ UPDATE instance.login_client_event
+ SET hotkey_modifier1 = $1, hotkey_modifier2 = $2, hotkey_char = $3
+ WHERE login_id = $4
+ AND client_event_id = $5
+ `, lce.HotkeyModifier1, lce.HotkeyModifier2, lce.HotkeyChar, loginId, clientEventId)
+ } else {
+ _, err = tx.Exec(ctx, `
+ INSERT INTO instance.login_client_event (
+ login_id, client_event_id, hotkey_modifier1, hotkey_modifier2, hotkey_char)
+ VALUES ($1,$2,$3,$4,$5)
+ `, loginId, clientEventId, lce.HotkeyModifier1, lce.HotkeyModifier2, lce.HotkeyChar)
+ }
+ return err
+}
diff --git a/login/login_favorites/login_favorites.go b/login/login_favorites/login_favorites.go
new file mode 100644
index 00000000..015c646d
--- /dev/null
+++ b/login/login_favorites/login_favorites.go
@@ -0,0 +1,141 @@
+package login_favorites
+
+import (
+ "context"
+ "r3/tools"
+ "r3/types"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+ "github.com/jackc/pgx/v5/pgtype"
+)
+
+func Add_tx(ctx context.Context, tx pgx.Tx, loginId int64, moduleId uuid.UUID, formId uuid.UUID, recordIdOpen pgtype.Int8, title pgtype.Text) (uuid.UUID, error) {
+ title = forceTitleLength(title)
+
+ id, err := uuid.NewV4()
+ if err != nil {
+ return id, err
+ }
+
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO instance.login_favorite (id, login_id, module_id, form_id, record_id, title, position)
+ VALUES ($1,$2,$3,$4,$5,$6,COALESCE((
+ SELECT position + 1
+ FROM instance.login_favorite
+ WHERE login_id = $7
+ AND module_id = $8
+ ORDER BY position DESC
+ LIMIT 1
+ ),0))
+ `, id, loginId, moduleId, formId, recordIdOpen, title, loginId, moduleId); err != nil {
+ return id, err
+ }
+ return id, updateTimestamp_tx(ctx, tx, loginId)
+}
+
+func Get_tx(ctx context.Context, tx pgx.Tx, loginId int64, dateCache int64) (map[uuid.UUID][]types.LoginFavorite, int64, error) {
+ favorites := make(map[uuid.UUID][]types.LoginFavorite)
+
+ var dateCacheEx int64
+ if err := tx.QueryRow(ctx, `
+ SELECT date_favorites
+ FROM instance.login
+ WHERE id = $1
+ `, loginId).Scan(&dateCacheEx); err != nil {
+ return favorites, 0, err
+ }
+
+ if dateCache == dateCacheEx {
+ // cache is up to date, return empty result with the same timestamp to let the client know its cache is still valid
+ return favorites, dateCache, nil
+ }
+
+ // cache changed, return all
+ rows, err := tx.Query(ctx, `
+ SELECT id, module_id, form_id, record_id, title
+ FROM instance.login_favorite
+ WHERE login_id = $1
+ ORDER BY position ASC
+ `, loginId)
+ if err != nil {
+ return favorites, 0, err
+ }
+ defer rows.Close()
+
+ for rows.Next() {
+ var f types.LoginFavorite
+ var moduleId uuid.UUID
+
+ if err := rows.Scan(&f.Id, &moduleId, &f.FormId, &f.RecordId, &f.Title); err != nil {
+ return favorites, 0, err
+ }
+ _, exists := favorites[moduleId]
+ if !exists {
+ favorites[moduleId] = make([]types.LoginFavorite, 0)
+ }
+ favorites[moduleId] = append(favorites[moduleId], f)
+ }
+ return favorites, dateCacheEx, nil
+}
+
+func Set_tx(ctx context.Context, tx pgx.Tx, loginId int64, moduleIdMapFavorites map[uuid.UUID][]types.LoginFavorite) error {
+
+ var err error
+ idsKeep := make([]uuid.UUID, 0)
+ for moduleId, favorites := range moduleIdMapFavorites {
+ for position, f := range favorites {
+ f.Title = forceTitleLength(f.Title)
+
+ if f.Id == uuid.Nil {
+ f.Id, err = uuid.NewV4()
+ if err != nil {
+ return err
+ }
+
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO instance.login_favorite (id, login_id, module_id, form_id, record_id, title, position)
+ VALUES ($1,$2,$3,$4,$5,$6,$7)
+ `, f.Id, loginId, moduleId, f.FormId, f.RecordId, f.Title, position); err != nil {
+ return err
+ }
+ } else {
+ if _, err := tx.Exec(ctx, `
+ UPDATE instance.login_favorite
+ SET title = $1, position = $2
+ WHERE id = $3
+ AND login_id = $4
+ `, f.Title, position, f.Id, loginId); err != nil {
+ return err
+ }
+ }
+ idsKeep = append(idsKeep, f.Id)
+ }
+ }
+
+ // delete removed favorites
+ if _, err := tx.Exec(ctx, `
+ DELETE FROM instance.login_favorite
+ WHERE id <> ALL($1)
+ AND login_id = $2
+ `, idsKeep, loginId); err != nil {
+ return err
+ }
+ return updateTimestamp_tx(ctx, tx, loginId)
+}
+
+// helpers
+func forceTitleLength(title pgtype.Text) pgtype.Text {
+ if len(title.String) > 128 {
+ title.String = title.String[0:128]
+ }
+ return title
+}
+func updateTimestamp_tx(ctx context.Context, tx pgx.Tx, loginId int64) error {
+ _, err := tx.Exec(ctx, `
+ UPDATE instance.login
+ SET date_favorites = $1
+ WHERE id = $2
+ `, tools.GetTimeUnix(), loginId)
+ return err
+}
diff --git a/login/login_keys/login_keys.go b/login/login_keys/login_keys.go
index a153b507..ceff2897 100644
--- a/login/login_keys/login_keys.go
+++ b/login/login_keys/login_keys.go
@@ -4,11 +4,10 @@ import (
"context"
"fmt"
"r3/cache"
- "r3/db"
"r3/handler"
"r3/schema"
- "r3/tools"
"r3/types"
+ "slices"
"strings"
"github.com/gofrs/uuid"
@@ -16,20 +15,21 @@ import (
"github.com/jackc/pgx/v5/pgtype"
)
-func GetPublic(ctx context.Context, relationId uuid.UUID,
+func GetPublic_tx(ctx context.Context, tx pgx.Tx, relationId uuid.UUID,
recordIds []int64, loginIds []int64) ([]types.LoginPublicKey, error) {
keys := make([]types.LoginPublicKey, 0)
loginNamesNoPublicKey := make([]string, 0)
- rows, err := db.Pool.Query(ctx, fmt.Sprintf(`
- SELECT l.id, l.name, l.key_public, ARRAY(
+ rows, err := tx.Query(ctx, fmt.Sprintf(`
+ SELECT l.id, l.name, lm.name_display, l.key_public, ARRAY(
SELECT record_id
FROM instance_e2ee."%s"
WHERE record_id = ANY($1)
AND login_id = l.id
)
- FROM instance.login AS l
+ FROM instance.login AS l
+ LEFT JOIN instance.login_meta AS lm ON lm.login_id = l.id
WHERE l.id = ANY($2)
`, schema.GetEncKeyTableName(relationId)), recordIds, loginIds)
if err != nil {
@@ -40,10 +40,11 @@ func GetPublic(ctx context.Context, relationId uuid.UUID,
for rows.Next() {
var loginId int64
var name string
+ var nameDisplay pgtype.Text
var key pgtype.Text
var recordIdsReady []int64
- if err := rows.Scan(&loginId, &name, &key, &recordIdsReady); err != nil {
+ if err := rows.Scan(&loginId, &name, &nameDisplay, &key, &recordIdsReady); err != nil {
return keys, err
}
@@ -54,13 +55,16 @@ func GetPublic(ctx context.Context, relationId uuid.UUID,
// login has no public key, error
if !key.Valid {
+ if nameDisplay.Valid && nameDisplay.String != "" {
+ name = nameDisplay.String
+ }
loginNamesNoPublicKey = append(loginNamesNoPublicKey, name)
continue
}
recordIdsMissing := make([]int64, 0)
for _, recordId := range recordIds {
- if !tools.Int64InSlice(recordId, recordIdsReady) {
+ if !slices.Contains(recordIdsReady, recordId) {
recordIdsMissing = append(recordIdsMissing, recordId)
}
}
@@ -74,18 +78,18 @@ func GetPublic(ctx context.Context, relationId uuid.UUID,
}
if len(loginNamesNoPublicKey) != 0 {
- return keys, handler.CreateErrCodeWithArgs("SEC",
- handler.ErrCodeSecNoPublicKeys,
- map[string]string{"NAMES": strings.Join(loginNamesNoPublicKey, ", ")})
+ return keys, handler.CreateErrCodeWithData(handler.ErrContextSec, handler.ErrCodeSecNoPublicKeys, struct {
+ Names string `json:"names"`
+ }{strings.Join(loginNamesNoPublicKey, ", ")})
}
return keys, nil
}
-func Reset_tx(tx pgx.Tx, loginId int64) error {
+func Reset_tx(ctx context.Context, tx pgx.Tx, loginId int64) error {
cache.Schema_mx.RLock()
defer cache.Schema_mx.RUnlock()
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE instance.login
SET key_private_enc = NULL, key_private_enc_backup = NULL, key_public = NULL
WHERE id = $1
@@ -96,7 +100,7 @@ func Reset_tx(tx pgx.Tx, loginId int64) error {
// delete unusable data keys
for _, rel := range cache.RelationIdMap {
if rel.Encryption {
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
DELETE FROM instance_e2ee."%s"
WHERE login_id = $1
`, schema.GetEncKeyTableName(rel.Id)), loginId); err != nil {
@@ -107,10 +111,10 @@ func Reset_tx(tx pgx.Tx, loginId int64) error {
return nil
}
-func Store_tx(tx pgx.Tx, loginId int64, privateKeyEnc string,
+func Store_tx(ctx context.Context, tx pgx.Tx, loginId int64, privateKeyEnc string,
privateKeyEncBackup string, publicKey string) error {
- _, err := tx.Exec(db.Ctx, `
+ _, err := tx.Exec(ctx, `
UPDATE instance.login
SET key_private_enc = $1, key_private_enc_backup = $2, key_public = $3
WHERE id = $4
@@ -119,9 +123,9 @@ func Store_tx(tx pgx.Tx, loginId int64, privateKeyEnc string,
return err
}
-func StorePrivate_tx(tx pgx.Tx, loginId int64, privateKeyEnc string) error {
+func StorePrivate_tx(ctx context.Context, tx pgx.Tx, loginId int64, privateKeyEnc string) error {
- _, err := tx.Exec(db.Ctx, `
+ _, err := tx.Exec(ctx, `
UPDATE instance.login
SET key_private_enc = $1
WHERE id = $2
diff --git a/login/login_ldap.go b/login/login_ldap.go
new file mode 100644
index 00000000..470899cf
--- /dev/null
+++ b/login/login_ldap.go
@@ -0,0 +1,164 @@
+package login
+
+import (
+ "context"
+ "fmt"
+ "r3/cluster"
+ "r3/db"
+ "r3/log"
+ "r3/login/login_meta"
+ "r3/types"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+ "github.com/jackc/pgx/v5/pgtype"
+)
+
+// updates internal login backend with logins from LDAP
+// uses unique key value to update login record
+// can optionally update login roles
+func SetLdapLogin(ldap types.Ldap, ldapKey string, name string,
+ active bool, meta types.LoginMeta, roleIds []uuid.UUID) error {
+
+ // existing login details
+ var loginId int64
+ var adminEx, activeEx bool
+ var metaEx types.LoginMeta
+ var nameEx string
+ var roleIdsEx []uuid.UUID
+
+ // get login details and check whether roles could be updated
+ var rolesEqual pgtype.Bool
+
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutSysTask)
+ defer ctxCanc()
+
+ tx, err := db.Pool.Begin(ctx)
+ if err != nil {
+ return err
+ }
+ defer tx.Rollback(ctx)
+
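+ // role equality: "<@" and "@>" together check that both role arrays contain each other (same set, ignoring order)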
+ err = tx.QueryRow(ctx, `
+ SELECT r1.id, r1.name, r1.admin, r1.active, r1.roles,
+ (r1.roles <@ r2.roles AND r1.roles @> r2.roles) AS equal
+ FROM (
+ SELECT *, (
+ SELECT ARRAY_AGG(lr.role_id)
+ FROM instance.login_role AS lr
+ WHERE lr.login_id = l.id
+ ) AS roles
+ FROM instance.login AS l
+ WHERE l.ldap_id = $1::integer
+ AND l.ldap_key = $2::text
+ ) AS r1
+
+ INNER JOIN (
+ SELECT $3::uuid[] AS roles
+ ) AS r2 ON true
+ `, ldap.Id, ldapKey, roleIds).Scan(&loginId, &nameEx,
+ &adminEx, &activeEx, &roleIdsEx, &rolesEqual)
+
+ if err != nil && err != pgx.ErrNoRows {
+ return err
+ }
+
+ newLogin := err == pgx.ErrNoRows
+ rolesBothEmpty := len(roleIdsEx) == 0 && len(roleIds) == 0
+ rolesChanged := ldap.AssignRoles && !rolesEqual.Bool && !rolesBothEmpty
+
+ // apply changed metadata from LDAP attributes, if they are defined
+ var metaChanged bool = false
+ if newLogin {
+ metaEx = meta
+ } else {
+ metaEx, err = login_meta.Get_tx(ctx, tx, loginId)
+ if err != nil {
+ return err
+ }
+ if ldap.LoginMetaAttributes.Department != "" && meta.Department != metaEx.Department {
+ metaEx.Department = meta.Department
+ metaChanged = true
+ }
+ if ldap.LoginMetaAttributes.Email != "" && meta.Email != metaEx.Email {
+ metaEx.Email = meta.Email
+ metaChanged = true
+ }
+ if ldap.LoginMetaAttributes.Location != "" && meta.Location != metaEx.Location {
+ metaEx.Location = meta.Location
+ metaChanged = true
+ }
+ if ldap.LoginMetaAttributes.NameDisplay != "" && meta.NameDisplay != metaEx.NameDisplay {
+ metaEx.NameDisplay = meta.NameDisplay
+ metaChanged = true
+ }
+ if ldap.LoginMetaAttributes.NameFore != "" && meta.NameFore != metaEx.NameFore {
+ metaEx.NameFore = meta.NameFore
+ metaChanged = true
+ }
+ if ldap.LoginMetaAttributes.NameSur != "" && meta.NameSur != metaEx.NameSur {
+ metaEx.NameSur = meta.NameSur
+ metaChanged = true
+ }
+ if ldap.LoginMetaAttributes.Notes != "" && meta.Notes != metaEx.Notes {
+ metaEx.Notes = meta.Notes
+ metaChanged = true
+ }
+ if ldap.LoginMetaAttributes.Organization != "" && meta.Organization != metaEx.Organization {
+ metaEx.Organization = meta.Organization
+ metaChanged = true
+ }
+ if ldap.LoginMetaAttributes.PhoneFax != "" && meta.PhoneFax != metaEx.PhoneFax {
+ metaEx.PhoneFax = meta.PhoneFax
+ metaChanged = true
+ }
+ if ldap.LoginMetaAttributes.PhoneLandline != "" && meta.PhoneLandline != metaEx.PhoneLandline {
+ metaEx.PhoneLandline = meta.PhoneLandline
+ metaChanged = true
+ }
+ if ldap.LoginMetaAttributes.PhoneMobile != "" && meta.PhoneMobile != metaEx.PhoneMobile {
+ metaEx.PhoneMobile = meta.PhoneMobile
+ metaChanged = true
+ }
+ }
+
+ // abort if there are no changes to apply
+ if !newLogin && nameEx == name && activeEx == active && !rolesChanged && !metaChanged {
+ return nil
+ }
+
+ // update if name, active state or roles changed
+ ldapIdSql := pgtype.Int4{Int32: ldap.Id, Valid: true}
+ ldapKeySql := pgtype.Text{String: ldapKey, Valid: true}
+
+ if rolesChanged {
+ roleIdsEx = roleIds
+ }
+
+ log.Info("ldap", fmt.Sprintf("user account '%s' is new or has been changed, updating login", name))
+
+ if _, err := Set_tx(ctx, tx, loginId, ldap.LoginTemplateId, ldapIdSql, ldapKeySql, name, "",
+ adminEx, false, active, pgtype.Int4{}, metaEx, roleIdsEx, []types.LoginAdminRecordSet{}); err != nil {
+
+ return err
+ }
+
+ // roles changed for an active login, renew authorization
+ if active && rolesChanged {
+ log.Info("ldap", fmt.Sprintf("user account '%s' received new roles, renewing access permissions", name))
+
+ if err := cluster.LoginReauthorized_tx(ctx, tx, true, loginId); err != nil {
+ log.Warning("ldap", fmt.Sprintf("could not renew access permissions for '%s'", name), err)
+ }
+ }
+
+ // login was disabled, kick its active sessions
+ if !active && activeEx {
+ log.Info("ldap", fmt.Sprintf("user account '%s' is locked, kicking active sessions", name))
+
+ if err := cluster.LoginDisabled_tx(ctx, tx, true, loginId); err != nil {
+ log.Warning("ldap", fmt.Sprintf("could not kick active sessions for '%s'", name), err)
+ }
+ }
+ return tx.Commit(ctx)
+}
diff --git a/login/login_meta/login_meta.go b/login/login_meta/login_meta.go
new file mode 100644
index 00000000..d1448065
--- /dev/null
+++ b/login/login_meta/login_meta.go
@@ -0,0 +1,89 @@
+package login_meta
+
+import (
+ "context"
+ "fmt"
+ "r3/types"
+
+ "github.com/jackc/pgx/v5"
+)
+
+func GetIsNotUnique_tx(ctx context.Context, tx pgx.Tx, loginId int64, content string, value string) (bool, error) {
+ var query string
+ switch content {
+ case "email":
+ query = `SELECT EXISTS(
+ SELECT login_id
+ FROM instance.login_meta
+ WHERE login_id <> $1
+ AND email = $2
+ )`
+ case "name":
+ query = `SELECT EXISTS(
+ SELECT id
+ FROM instance.login
+ WHERE id <> $1
+ AND name = $2
+ )`
+ default:
+ return false, fmt.Errorf("login unique check is not valid for content '%s'", content)
+ }
+
+ exists := false
+ err := tx.QueryRow(ctx, query, loginId, value).Scan(&exists)
+ return exists, err
+}
+
+func Get_tx(ctx context.Context, tx pgx.Tx, id int64) (types.LoginMeta, error) {
+ var m types.LoginMeta
+
+ err := tx.QueryRow(ctx, `
+ SELECT email, department, location, name_display, name_fore, name_sur,
+ notes, organization, phone_fax, phone_landline, phone_mobile
+ FROM instance.login_meta
+ WHERE login_id = $1
+ `, id).Scan(&m.Email, &m.Department, &m.Location, &m.NameDisplay, &m.NameFore, &m.NameSur,
+ &m.Notes, &m.Organization, &m.PhoneFax, &m.PhoneLandline, &m.PhoneMobile)
+
+ if err != nil && err != pgx.ErrNoRows {
+ return m, err
+ }
+ return m, nil
+}
+
+func Set_tx(ctx context.Context, tx pgx.Tx, id int64, meta types.LoginMeta) error {
+
+ var exists bool
+ if err := tx.QueryRow(ctx, `SELECT EXISTS(SELECT login_id FROM instance.login_meta WHERE login_id = $1)`, id).Scan(&exists); err != nil {
+ return err
+ }
+
+ if !exists {
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO instance.login_meta (
+ login_id, email, department, location, name_display, name_fore, name_sur,
+ notes, organization, phone_fax, phone_landline, phone_mobile
+ )
+ VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12)
+ `, id, meta.Email, meta.Department, meta.Location, meta.NameDisplay, meta.NameFore,
+ meta.NameSur, meta.Notes, meta.Organization, meta.PhoneFax, meta.PhoneLandline,
+ meta.PhoneMobile); err != nil {
+
+ return err
+ }
+ } else {
+ if _, err := tx.Exec(ctx, `
+ UPDATE instance.login_meta
+ SET email = $1, department = $2, location = $3, name_display = $4, name_fore = $5,
+ name_sur = $6, notes = $7, organization = $8, phone_fax = $9, phone_landline = $10,
+ phone_mobile = $11
+ WHERE login_id = $12
+ `, meta.Email, meta.Department, meta.Location, meta.NameDisplay, meta.NameFore,
+ meta.NameSur, meta.Notes, meta.Organization, meta.PhoneFax, meta.PhoneLandline,
+ meta.PhoneMobile, id); err != nil {
+
+ return err
+ }
+ }
+ return nil
+}
diff --git a/login/login_options/login_options.go b/login/login_options/login_options.go
new file mode 100644
index 00000000..0ceeacef
--- /dev/null
+++ b/login/login_options/login_options.go
@@ -0,0 +1,107 @@
+package login_options
+
+import (
+ "context"
+ "r3/tools"
+ "r3/types"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+ "github.com/jackc/pgx/v5/pgtype"
+)
+
+func CopyToFavorite_tx(ctx context.Context, tx pgx.Tx, loginId int64, isMobile bool, srcFormId uuid.UUID, srcFavoriteId pgtype.UUID, trgFavoriteId uuid.UUID) error {
+
+ copyFromFavorite := srcFavoriteId.Valid
+ fieldIdMapOptions := make(map[uuid.UUID]string)
+ var query string
+ var args []interface{}
+
+ if copyFromFavorite {
+ query = `
+ SELECT field_id, options
+ FROM instance.login_options
+ WHERE login_id = $1
+ AND login_favorite_id = $2
+ AND is_mobile = $3
+ AND field_id IN (
+ SELECT id
+ FROM app.field
+ WHERE form_id = $4
+ )`
+ args = []interface{}{loginId, srcFavoriteId, isMobile, srcFormId}
+ } else {
+ query = `
+ SELECT field_id, options
+ FROM instance.login_options
+ WHERE login_id = $1
+ AND login_favorite_id IS NULL
+ AND is_mobile = $2
+ AND field_id IN (
+ SELECT id
+ FROM app.field
+ WHERE form_id = $3
+ )`
+ args = []interface{}{loginId, isMobile, srcFormId}
+ }
+
+ rows, err := tx.Query(ctx, query, args...)
+ if err != nil {
+ return err
+ }
+ defer rows.Close()
+
+ for rows.Next() {
+ var fieldId uuid.UUID
+ var options string
+
+ if err := rows.Scan(&fieldId, &options); err != nil {
+ return err
+ }
+ fieldIdMapOptions[fieldId] = options
+ }
+
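+ // apply the collected field options to the target favorite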
+ for fieldId, options := range fieldIdMapOptions {
+ if err := Set_tx(ctx, tx, loginId, pgtype.UUID{Bytes: trgFavoriteId, Valid: true}, fieldId, isMobile, options); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func Get_tx(ctx context.Context, tx pgx.Tx, loginId int64, isMobile bool, dateCache int64) ([]types.LoginOptions, error) {
+ options := make([]types.LoginOptions, 0)
+
+ rows, err := tx.Query(ctx, `
+ SELECT login_favorite_id, field_id, options
+ FROM instance.login_options
+ WHERE login_id = $1
+ AND is_mobile = $2
+ AND date_change >= $3
+ `, loginId, isMobile, dateCache)
+ if err != nil {
+ return options, err
+ }
+ defer rows.Close()
+
+ for rows.Next() {
+ var o types.LoginOptions
+ if err := rows.Scan(&o.FavoriteId, &o.FieldId, &o.Options); err != nil {
+ return options, err
+ }
+ options = append(options, o)
+ }
+ return options, nil
+}
+
+func Set_tx(ctx context.Context, tx pgx.Tx, loginId int64, favoriteId pgtype.UUID, fieldId uuid.UUID, isMobile bool, options string) error {
+ now := tools.GetTimeUnix()
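+ // upsert: the conflict target coalesces a NULL favorite ID to the nil UUID, so options without a favorite are matched as well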
+ _, err := tx.Exec(ctx, `
+ INSERT INTO instance.login_options(login_id, login_favorite_id, field_id, is_mobile, options, date_change)
+ VALUES ($1,$2,$3,$4,$5,$6)
+ ON CONFLICT (login_id, COALESCE(login_favorite_id, '00000000-0000-0000-0000-000000000000'), field_id, is_mobile)
+ DO UPDATE SET options = $7, date_change = $8
+ `, loginId, favoriteId, fieldId, isMobile, options, now, options, now)
+
+ return err
+}
diff --git a/login/login_record.go b/login/login_record.go
index 70bd2315..85bfcab9 100644
--- a/login/login_record.go
+++ b/login/login_record.go
@@ -1,19 +1,20 @@
package login
import (
+ "context"
"fmt"
"r3/cache"
- "r3/db"
"r3/schema"
"r3/tools"
"r3/types"
"github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
)
// get relation records as login associate
// returns slice of up to 10 records
-func GetRecords(attributeIdLookup uuid.UUID, idsExclude []int64,
+func GetRecords_tx(ctx context.Context, tx pgx.Tx, attributeIdLookup uuid.UUID, idsExclude []int64,
byId int64, byString string) ([]types.LoginRecord, error) {
cache.Schema_mx.RLock()
@@ -33,7 +34,7 @@ func GetRecords(attributeIdLookup uuid.UUID, idsExclude []int64,
qb.AddList("SELECT", []string{fmt.Sprintf(`"%s"`, schema.PkName),
fmt.Sprintf(`"%s"`, atr.Name)})
- qb.Set("FROM", fmt.Sprintf(`"%s"."%s"`, mod.Name, rel.Name))
+ qb.SetFrom(fmt.Sprintf(`"%s"."%s"`, mod.Name, rel.Name))
if len(idsExclude) != 0 {
qb.Add("WHERE", fmt.Sprintf(`"%s" <> ALL({IDS_EXCLUDE})`, schema.PkName))
@@ -49,14 +50,14 @@ func GetRecords(attributeIdLookup uuid.UUID, idsExclude []int64,
}
qb.Add("ORDER", fmt.Sprintf(`"%s" ASC`, atr.Name))
- qb.Set("LIMIT", 10)
+ qb.SetLimit(10)
query, err := qb.GetQuery()
if err != nil {
return records, err
}
- rows, err := db.Pool.Query(db.Ctx, query, qb.GetParaValues()...)
+ rows, err := tx.Query(ctx, query, qb.GetParaValues()...)
if err != nil {
return records, err
}
diff --git a/login/login_role.go b/login/login_role.go
index b04b98a1..555afefd 100644
--- a/login/login_role.go
+++ b/login/login_role.go
@@ -1,16 +1,16 @@
package login
import (
- "r3/db"
+ "context"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
)
-func getRoleIds(loginId int64) ([]uuid.UUID, error) {
+func getRoleIds_tx(ctx context.Context, tx pgx.Tx, loginId int64) ([]uuid.UUID, error) {
roleIds := make([]uuid.UUID, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT role_id
FROM instance.login_role
WHERE login_id = $1
@@ -30,9 +30,9 @@ func getRoleIds(loginId int64) ([]uuid.UUID, error) {
return roleIds, nil
}
-func SetRoleLoginIds_tx(tx pgx.Tx, roleId uuid.UUID, loginIds []int64) error {
+func SetRoleLoginIds_tx(ctx context.Context, tx pgx.Tx, roleId uuid.UUID, loginIds []int64) error {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM instance.login_role
WHERE role_id = $1
`, roleId); err != nil {
@@ -40,7 +40,7 @@ func SetRoleLoginIds_tx(tx pgx.Tx, roleId uuid.UUID, loginIds []int64) error {
}
for _, loginId := range loginIds {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO instance.login_role (login_id, role_id)
VALUES ($1,$2)
`, loginId, roleId); err != nil {
@@ -50,9 +50,9 @@ func SetRoleLoginIds_tx(tx pgx.Tx, roleId uuid.UUID, loginIds []int64) error {
return nil
}
-func setRoleIds_tx(tx pgx.Tx, loginId int64, roleIds []uuid.UUID) error {
+func setRoleIds_tx(ctx context.Context, tx pgx.Tx, loginId int64, roleIds []uuid.UUID) error {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM instance.login_role
WHERE login_id = $1
`, loginId); err != nil {
@@ -60,7 +60,7 @@ func setRoleIds_tx(tx pgx.Tx, loginId int64, roleIds []uuid.UUID) error {
}
for _, roleId := range roleIds {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO instance.login_role (login_id, role_id)
VALUES ($1,$2)
`, loginId, roleId); err != nil {
diff --git a/login/login_session/login_session.go b/login/login_session/login_session.go
new file mode 100644
index 00000000..86051d76
--- /dev/null
+++ b/login/login_session/login_session.go
@@ -0,0 +1,269 @@
+package login_session
+
+import (
+ "context"
+ "fmt"
+ "r3/cache"
+ "r3/config"
+ "r3/db"
+ "r3/handler"
+ "r3/tools"
+ "r3/types"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+ "github.com/jackc/pgx/v5/pgtype"
+)
+
+func Log(id uuid.UUID, loginId int64, address string, device types.WebsocketClientDevice) error {
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutLogWrite)
+ defer ctxCanc()
+
+ tx, err := db.Pool.Begin(ctx)
+ if err != nil {
+ return err
+ }
+ defer tx.Rollback(ctx)
+
+ // the ON CONFLICT constraint requires the fully qualified name of the ID column in the WHERE clause
+ now := tools.GetTimeUnix()
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO instance.login_session(id, login_id, node_id, address, device, date)
+ VALUES ($1,$2,$3,$4,$5,$6)
+ ON CONFLICT
+ ON CONSTRAINT login_session_pkey
+ DO UPDATE
+ SET date = $7
+ WHERE instance.login_session.id = $8
+ `, id, loginId, cache.GetNodeId(), address, types.WebsocketClientDeviceNames[device], now, now, id); err != nil {
+ return err
+ }
+ return tx.Commit(ctx)
+}
+
+func LogRemove(id uuid.UUID) error {
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutLogWrite)
+ defer ctxCanc()
+
+ tx, err := db.Pool.Begin(ctx)
+ if err != nil {
+ return err
+ }
+ defer tx.Rollback(ctx)
+
+ if _, err := tx.Exec(ctx, `
+ DELETE FROM instance.login_session
+ WHERE id = $1
+ `, id); err != nil {
+ return err
+ }
+ return tx.Commit(ctx)
+}
+
+func LogsGet_tx(ctx context.Context, tx pgx.Tx, byString pgtype.Text, limit int, offset int, orderBy string, orderAsc bool) (interface{}, error) {
+ type session struct {
+ LoginId int64 `json:"loginId"`
+ LoginName string `json:"loginName"`
+ LoginDepartment string `json:"loginDepartment"`
+ LoginDisplay string `json:"loginDisplay"`
+ Address string `json:"address"`
+ Admin bool `json:"admin"`
+ Limited bool `json:"limited"`
+ NoAuth bool `json:"noAuth"`
+ NodeName string `json:"nodeName"`
+ Date int64 `json:"date"`
+ Device string `json:"device"`
+ }
+
+ var total int64
+ sessions := make([]session, 0)
+
+ // process inputs
+ if byString.Valid {
+ byString.String = fmt.Sprintf("%%%s%%", byString.String)
+ }
+
+ var orderBySql = ""
+ var orderAscSql = "ASC"
+
+ switch orderBy {
+ case "address":
+ orderBySql = "ls.address"
+ case "admin":
+ orderBySql = "l.admin"
+ case "date":
+ orderBySql = "ls.date"
+ case "device":
+ orderBySql = "ls.device"
+ case "limited":
+ orderBySql = "l.limited"
+ case "loginDepartment":
+ orderBySql = "m.department"
+ case "loginDisplay":
+ orderBySql = "m.name_display"
+ case "loginName":
+ orderBySql = "l.name"
+ case "noAuth":
+ orderBySql = "l.no_auth"
+ case "nodeName":
+ orderBySql = "n.name"
+ default:
+ orderBySql = "ls.date"
+ }
+ if !orderAsc {
+ orderAscSql = "DESC"
+ }
+
+ // get session count
+ if err := tx.QueryRow(ctx, `
+ SELECT COUNT(*)
+ FROM instance.login_session AS ls
+ JOIN instance.login AS l ON l.id = ls.login_id
+ LEFT JOIN instance.login_meta AS m ON l.id = m.login_id
+ JOIN instance_cluster.node AS n ON n.id = ls.node_id
+ WHERE $1::TEXT IS NULL
+ OR (
+ COALESCE(m.name_display, '') ILIKE $1 OR
+ COALESCE(m.department, '') ILIKE $1 OR
+ l.name ILIKE $1 OR
+ n.name ILIKE $1
+ )
+ `, byString).Scan(&total); err != nil {
+ return nil, err
+ }
+
+ // get session logs
+ rows, err := tx.Query(ctx, fmt.Sprintf(`
+ SELECT ls.login_id, ls.address, ls.device, ls.date, l.admin, l.limited, l.no_auth,
+ l.name, COALESCE(m.name_display, ''), COALESCE(m.department, ''), n.name
+ FROM instance.login_session AS ls
+ JOIN instance.login AS l ON l.id = ls.login_id
+ LEFT JOIN instance.login_meta AS m ON l.id = m.login_id
+ JOIN instance_cluster.node AS n ON n.id = ls.node_id
+ WHERE $1::TEXT IS NULL
+ OR (
+ COALESCE(m.name_display, '') ILIKE $1 OR
+ COALESCE(m.department, '') ILIKE $1 OR
+ l.name ILIKE $1 OR
+ n.name ILIKE $1
+ )
+ ORDER BY %s %s
+ LIMIT $2
+ OFFSET $3
+ `, orderBySql, orderAscSql), byString, limit, offset)
+ if err != nil {
+ return nil, err
+ }
+ defer rows.Close()
+
+ for rows.Next() {
+ var s session
+ if err := rows.Scan(&s.LoginId, &s.Address, &s.Device, &s.Date, &s.Admin, &s.Limited, &s.NoAuth,
+ &s.LoginName, &s.LoginDisplay, &s.LoginDepartment, &s.NodeName); err != nil {
+
+ return nil, err
+ }
+ sessions = append(sessions, s)
+ }
+
+ return struct {
+ Total int64 `json:"total"`
+ Sessions []session `json:"sessions"`
+ }{
+ total,
+ sessions,
+ }, nil
+}
+
+func LogsRemoveForNode(ctx context.Context) error {
+ tx, err := db.Pool.Begin(ctx)
+ if err != nil {
+ return err
+ }
+ defer tx.Rollback(ctx)
+
+ if err := LogsRemoveForNode_tx(ctx, tx); err != nil {
+ return err
+ }
+ return tx.Commit(ctx)
+}
+func LogsRemoveForNode_tx(ctx context.Context, tx pgx.Tx) error {
+ _, err := tx.Exec(ctx, `
+ DELETE FROM instance.login_session
+ WHERE node_id = $1
+ `, cache.GetNodeId())
+
+ return err
+}
+
+// retrieves the concurrent session count for either limited or non-limited logins
+// also retrieves whether the given loginId already had a session
+func logsGetConcurrentForLogin(limitedLogins bool, loginId int64) (cnt int64, existed bool, err error) {
+
+ // get count of login sessions, logged for cluster nodes checked in within the last 24h
+ // get whether current login is included in retrieved login sessions
+ err = db.Pool.QueryRow(context.Background(), `
+ SELECT COUNT(*), COALESCE($3 = ANY(ARRAY_AGG(id)), FALSE)
+ FROM instance.login
+ WHERE id IN (
+ SELECT login_id
+ FROM instance.login_session
+ WHERE node_id IN (
+ SELECT id
+ FROM instance_cluster.node
+ WHERE date_check_in > $2
+ )
+ )
+ AND limited = $1
+ `, limitedLogins, tools.GetTimeUnix()-86400, loginId).Scan(&cnt, &existed)
+
+ return cnt, existed, err
+}
+func LogsGetConcurrentCounts_tx(ctx context.Context, tx pgx.Tx) (cntFull int64, cntLimited int64, err error) {
+
+ err = tx.QueryRow(ctx, `
+ SELECT
+ COUNT(1) FILTER(WHERE limited = FALSE),
+ COUNT(1) FILTER(WHERE limited = TRUE)
+ FROM instance.login
+ WHERE id IN (
+ SELECT login_id
+ FROM instance.login_session
+ WHERE node_id IN (
+ SELECT id
+ FROM instance_cluster.node
+ WHERE date_check_in > $1
+ )
+ )
+ `, tools.GetTimeUnix()-86400).Scan(&cntFull, &cntLimited)
+
+ return cntFull, cntLimited, err
+}
+
+func CheckConcurrentAccess(limitedLogin bool, loginId int64, isAdmin bool) error {
+
+ if isAdmin {
+ // admins can always log in (necessary to fix issues)
+ return nil
+ }
+ if !config.GetLicenseUsed() {
+ // no license used, logins are not limited
+ return nil
+ }
+ if !config.GetLicenseActive() {
+ // license used, but expired, block login
+ return handler.CreateErrCode(handler.ErrContextLic, handler.ErrCodeLicValidityExpired)
+ }
+
+ // license used and active, check concurrent access
+ cnt, existed, err := logsGetConcurrentForLogin(limitedLogin, loginId)
+ if err != nil {
+ return err
+ }
+
+ if !existed && cnt >= config.GetLicenseLoginCount(limitedLogin) {
+ // login did not have a session and the concurrent limit has been reached, block login
+ return handler.CreateErrCode(handler.ErrContextLic, handler.ErrCodeLicLoginsReached)
+ }
+ return nil
+}
diff --git a/login/login_setting/login_setting.go b/login/login_setting/login_setting.go
new file mode 100644
index 00000000..99a2d48e
--- /dev/null
+++ b/login/login_setting/login_setting.go
@@ -0,0 +1,129 @@
+package login_setting
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "r3/types"
+
+ "github.com/jackc/pgx/v5"
+ "github.com/jackc/pgx/v5/pgtype"
+)
+
+func Get_tx(ctx context.Context, tx pgx.Tx, loginId pgtype.Int8, loginTemplateId pgtype.Int8) (types.Settings, error) {
+
+ var s types.Settings
+ if (loginId.Valid && loginTemplateId.Valid) || (!loginId.Valid && !loginTemplateId.Valid) {
+ return s, errors.New("settings can only be retrieved for either login or login template")
+ }
+
+ entryId := loginId.Int64
+ entryName := "login_id"
+
+ if loginTemplateId.Valid {
+ entryId = loginTemplateId.Int64
+ entryName = "login_template_id"
+ }
+
+ err := tx.QueryRow(ctx, fmt.Sprintf(`
+ SELECT language_code, date_format, sunday_first_dow, font_size,
+ borders_squared, header_captions, header_modules, spacing, dark,
+ hint_update_version, mobile_scroll_form, form_actions_align,
+ warn_unsaved, pattern, font_family, tab_remember, list_colored,
+ list_spaced, color_classic_mode, color_header, color_header_single,
+ color_menu, number_sep_decimal, number_sep_thousand, bool_as_icon,
+ shadows_inputs, ARRAY(
+ SELECT name::TEXT
+ FROM instance.login_search_dict
+ WHERE login_id = ls.login_id
+ OR login_template_id = ls.login_template_id
+ ORDER BY position ASC
+ )
+ FROM instance.login_setting AS ls
+ WHERE %s = $1
+ `, entryName), entryId).Scan(&s.LanguageCode, &s.DateFormat, &s.SundayFirstDow,
+ &s.FontSize, &s.BordersSquared, &s.HeaderCaptions, &s.HeaderModules,
+ &s.Spacing, &s.Dark, &s.HintUpdateVersion, &s.MobileScrollForm, &s.FormActionsAlign,
+ &s.WarnUnsaved, &s.Pattern, &s.FontFamily, &s.TabRemember, &s.ListColored,
+ &s.ListSpaced, &s.ColorClassicMode, &s.ColorHeader, &s.ColorHeaderSingle,
+ &s.ColorMenu, &s.NumberSepDecimal, &s.NumberSepThousand, &s.BoolAsIcon,
+ &s.ShadowsInputs, &s.SearchDictionaries)
+
+ return s, err
+}
+
+func Set_tx(ctx context.Context, tx pgx.Tx, loginId pgtype.Int8, loginTemplateId pgtype.Int8, s types.Settings, isNew bool) error {
+
+ if (loginId.Valid && loginTemplateId.Valid) || (!loginId.Valid && !loginTemplateId.Valid) {
+ return errors.New("settings can only be applied for either login or login template")
+ }
+
+ entryId := loginId.Int64
+ entryName := "login_id"
+
+ if loginTemplateId.Valid {
+ entryId = loginTemplateId.Int64
+ entryName = "login_template_id"
+ }
+
+ if isNew {
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
+ INSERT INTO instance.login_setting (%s, language_code, date_format, sunday_first_dow,
+ font_size, borders_squared, header_captions, header_modules, spacing,
+ dark, hint_update_version, mobile_scroll_form, form_actions_align, warn_unsaved,
+ pattern, font_family, tab_remember, list_colored, list_spaced, color_classic_mode,
+ color_header, color_header_single, color_menu, number_sep_decimal,
+ number_sep_thousand, bool_as_icon, shadows_inputs)
+ VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,
+ $16,$17,$18,$19,$20,$21,$22,$23,$24,$25,$26,$27)
+ `, entryName), entryId, s.LanguageCode, s.DateFormat, s.SundayFirstDow, s.FontSize,
+ s.BordersSquared, s.HeaderCaptions, s.HeaderModules, s.Spacing, s.Dark,
+ s.HintUpdateVersion, s.MobileScrollForm, s.FormActionsAlign, s.WarnUnsaved, s.Pattern,
+ s.FontFamily, s.TabRemember, s.ListColored, s.ListSpaced, s.ColorClassicMode,
+ s.ColorHeader, s.ColorHeaderSingle, s.ColorMenu, s.NumberSepDecimal,
+ s.NumberSepThousand, s.BoolAsIcon, s.ShadowsInputs); err != nil {
+
+ return err
+ }
+ } else {
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
+ UPDATE instance.login_setting
+ SET language_code = $1, date_format = $2, sunday_first_dow = $3, font_size = $4,
+ borders_squared = $5, header_captions = $6, header_modules = $7,
+ spacing = $8, dark = $9, hint_update_version = $10, mobile_scroll_form = $11,
+ form_actions_align = $12, warn_unsaved = $13, pattern = $14, font_family = $15,
+ tab_remember = $16, list_colored = $17, list_spaced = $18, color_classic_mode = $19,
+ color_header = $20, color_header_single = $21, color_menu = $22, number_sep_decimal = $23,
+ number_sep_thousand = $24, bool_as_icon = $25, shadows_inputs = $26
+ WHERE %s = $27
+ `, entryName), s.LanguageCode, s.DateFormat, s.SundayFirstDow, s.FontSize,
+ s.BordersSquared, s.HeaderCaptions, s.HeaderModules, s.Spacing, s.Dark,
+ s.HintUpdateVersion, s.MobileScrollForm, s.FormActionsAlign, s.WarnUnsaved, s.Pattern,
+ s.FontFamily, s.TabRemember, s.ListColored, s.ListSpaced, s.ColorClassicMode,
+ s.ColorHeader, s.ColorHeaderSingle, s.ColorMenu, s.NumberSepDecimal,
+ s.NumberSepThousand, s.BoolAsIcon, s.ShadowsInputs, entryId); err != nil {
+
+ return err
+ }
+ }
+
+ // update full text search dictionaries
+ if !isNew {
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
+ DELETE FROM instance.login_search_dict
+ WHERE %s = $1
+ `, entryName), entryId); err != nil {
+ return err
+ }
+ }
+
+ for i, dictName := range s.SearchDictionaries {
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
+ INSERT INTO instance.login_search_dict (%s, position, name)
+ VALUES ($1, $2, $3)
+ `, entryName), entryId, i, dictName); err != nil {
+ return err
+ }
+ }
+ return nil
+}
diff --git a/login/login_template/login_template.go b/login/login_template/login_template.go
index 1cffde9f..8edae00d 100644
--- a/login/login_template/login_template.go
+++ b/login/login_template/login_template.go
@@ -1,17 +1,17 @@
package login_template
import (
+ "context"
"fmt"
- "r3/db"
- "r3/setting"
+ "r3/login/login_setting"
"r3/types"
"github.com/jackc/pgx/v5"
"github.com/jackc/pgx/v5/pgtype"
)
-func Del_tx(tx pgx.Tx, id int64) error {
- _, err := tx.Exec(db.Ctx, `
+func Del_tx(ctx context.Context, tx pgx.Tx, id int64) error {
+ _, err := tx.Exec(ctx, `
DELETE FROM instance.login_template
WHERE id = $1
AND name <> 'GLOBAL' -- protect global default
@@ -19,7 +19,7 @@ func Del_tx(tx pgx.Tx, id int64) error {
return err
}
-func Get(byId int64) ([]types.LoginTemplateAdmin, error) {
+func Get_tx(ctx context.Context, tx pgx.Tx, byId int64) ([]types.LoginTemplateAdmin, error) {
templates := make([]types.LoginTemplateAdmin, 0)
sqlParams := make([]interface{}, 0)
@@ -29,7 +29,7 @@ func Get(byId int64) ([]types.LoginTemplateAdmin, error) {
sqlWhere = "WHERE id = $1"
}
- rows, err := db.Pool.Query(db.Ctx, fmt.Sprintf(`
+ rows, err := tx.Query(ctx, fmt.Sprintf(`
SELECT id, name, comment
FROM instance.login_template
%s
@@ -49,7 +49,8 @@ func Get(byId int64) ([]types.LoginTemplateAdmin, error) {
rows.Close()
for i, _ := range templates {
- templates[i].Settings, err = setting.Get(
+ templates[i].Settings, err = login_setting.Get_tx(
+ ctx, tx,
pgtype.Int8{},
pgtype.Int8{Int64: templates[i].Id, Valid: true})
@@ -60,11 +61,11 @@ func Get(byId int64) ([]types.LoginTemplateAdmin, error) {
return templates, nil
}
-func Set_tx(tx pgx.Tx, t types.LoginTemplateAdmin) (int64, error) {
+func Set_tx(ctx context.Context, tx pgx.Tx, t types.LoginTemplateAdmin) (int64, error) {
isNew := t.Id == 0
if isNew {
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
INSERT INTO instance.login_template (name, comment)
VALUES ($1,$2)
RETURNING id
@@ -72,7 +73,7 @@ func Set_tx(tx pgx.Tx, t types.LoginTemplateAdmin) (int64, error) {
return t.Id, err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE instance.login_template
SET name = $1, comment = $2
WHERE id = $3
@@ -82,7 +83,7 @@ func Set_tx(tx pgx.Tx, t types.LoginTemplateAdmin) (int64, error) {
}
}
- return t.Id, setting.Set_tx(tx,
+ return t.Id, login_setting.Set_tx(ctx, tx,
pgtype.Int8{},
pgtype.Int8{Int64: t.Id, Valid: true},
t.Settings, isNew)
diff --git a/login/login_widget/login_widget.go b/login/login_widget/login_widget.go
new file mode 100644
index 00000000..2911b4ac
--- /dev/null
+++ b/login/login_widget/login_widget.go
@@ -0,0 +1,95 @@
+package login_widget
+
+import (
+ "context"
+ "r3/types"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+ "github.com/jackc/pgx/v5/pgtype"
+)
+
+func Get_tx(ctx context.Context, tx pgx.Tx, loginId int64) ([]types.LoginWidgetGroup, error) {
+ groups := make([]types.LoginWidgetGroup, 0)
+
+ rows, err := tx.Query(ctx, `
+ SELECT g.id, g.title, w.widget_id, w.module_id, w.content
+ FROM instance.login_widget_group AS g
+ LEFT JOIN instance.login_widget_group_item AS w ON w.login_widget_group_id = g.id
+ WHERE g.login_id = $1
+ ORDER BY g.position ASC, w.position ASC
+ `, loginId)
+ if err != nil {
+ return groups, err
+ }
+ defer rows.Close()
+
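+ // rows are ordered by group and item position; consecutive rows with the same group ID are merged into one group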
+ var groupIdLast uuid.UUID
+
+ for rows.Next() {
+ var groupId uuid.UUID
+ var content pgtype.Text
+ var g types.LoginWidgetGroup
+ var w types.LoginWidgetGroupItem
+
+ if err := rows.Scan(&groupId, &g.Title, &w.WidgetId, &w.ModuleId, &content); err != nil {
+ return groups, err
+ }
+
+ // update if same group as in last loop iteration
+ var existingGroup = groupId.String() == groupIdLast.String() && len(groups) > 0
+
+ if existingGroup {
+ g = groups[len(groups)-1]
+ } else {
+ g.Items = make([]types.LoginWidgetGroupItem, 0)
+ }
+
+ // a group can exist without items
+ if content.Valid {
+ w.Content = content.String
+ g.Items = append(g.Items, w)
+ }
+
+ if existingGroup {
+ groups[len(groups)-1] = g
+ } else {
+ groups = append(groups, g)
+ groupIdLast = groupId
+ }
+ }
+ return groups, nil
+}
+
+func Set_tx(ctx context.Context, tx pgx.Tx, loginId int64, groups []types.LoginWidgetGroup) error {
+
+ if _, err := tx.Exec(ctx, `
+ DELETE FROM instance.login_widget_group
+ WHERE login_id = $1
+ `, loginId); err != nil {
+ return err
+ }
+
+ for posGroup, g := range groups {
+
+ var groupId uuid.UUID
+ if err := tx.QueryRow(ctx, `
+ INSERT INTO instance.login_widget_group (login_id, title, position)
+ VALUES ($1,$2,$3)
+ RETURNING id
+ `, loginId, g.Title, posGroup).Scan(&groupId); err != nil {
+ return err
+ }
+
+ for posItem, w := range g.Items {
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO instance.login_widget_group_item (
+ login_widget_group_id, position, widget_id, module_id, content)
+ VALUES ($1,$2,$3,$4,$5)
+ `, groupId, posItem, w.WidgetId, w.ModuleId, w.Content); err != nil {
+ return err
+ }
+ }
+ }
+ return nil
+}
diff --git a/module_option/module_option.go b/module_option/module_option.go
deleted file mode 100644
index c6c2195d..00000000
--- a/module_option/module_option.go
+++ /dev/null
@@ -1,83 +0,0 @@
-package module_option
-
-import (
- "r3/db"
- "r3/types"
-
- "github.com/gofrs/uuid"
- "github.com/jackc/pgx/v5"
-)
-
-func Get() ([]types.ModuleOption, error) {
- options := make([]types.ModuleOption, 0)
-
- rows, err := db.Pool.Query(db.Ctx, `
- SELECT module_id, hidden, owner, position
- FROM instance.module_option
- `)
- if err != nil {
- return options, err
- }
- defer rows.Close()
-
- for rows.Next() {
- var o types.ModuleOption
-
- if err := rows.Scan(&o.Id, &o.Hidden, &o.Owner, &o.Position); err != nil {
- return options, err
- }
- options = append(options, o)
- }
- return options, nil
-}
-
-func GetHashById(moduleId uuid.UUID) (string, error) {
- var hash string
- err := db.Pool.QueryRow(db.Ctx, `
- SELECT hash
- FROM instance.module_option
- WHERE module_id = $1
- `, moduleId).Scan(&hash)
- return hash, err
-}
-
-func Set_tx(tx pgx.Tx, moduleId uuid.UUID, hidden bool, owner bool, position int) error {
- exists := false
-
- if err := tx.QueryRow(db.Ctx, `
- SELECT EXISTS(
- SELECT *
- FROM instance.module_option
- WHERE module_id = $1
- )
- `, moduleId).Scan(&exists); err != nil {
- return err
- }
-
- if !exists {
- if _, err := tx.Exec(db.Ctx, `
- INSERT INTO instance.module_option (module_id, hidden, owner, position)
- VALUES ($1,$2,$3,$4)
- `, moduleId, hidden, owner, position); err != nil {
- return err
- }
- } else {
- if _, err := tx.Exec(db.Ctx, `
- UPDATE instance.module_option
- SET hidden = $1, owner = $2, position = $3
- WHERE module_id = $4
- `, hidden, owner, position, moduleId); err != nil {
- return err
- }
- }
- return nil
-}
-
-func SetHashById_tx(tx pgx.Tx, moduleId uuid.UUID, hash string) error {
- _, err := tx.Exec(db.Ctx, `
- UPDATE instance.module_option
- SET hash = $1
- WHERE module_id = $2
- `, hash, moduleId)
- return err
-}
diff --git a/r3.go b/r3.go
index 880d9339..5be01f90 100644
--- a/r3.go
+++ b/r3.go
@@ -13,9 +13,11 @@ import (
"os"
"os/signal"
"path/filepath"
+ "r3/bruteforce"
"r3/cache"
"r3/cluster"
"r3/config"
+ "r3/data/data_image"
"r3/db"
"r3/db/embedded"
"r3/db/initialize"
@@ -35,15 +37,18 @@ import (
"r3/handler/icon_upload"
"r3/handler/ics_download"
"r3/handler/license_upload"
+ "r3/handler/manifest_download"
"r3/handler/transfer_export"
"r3/handler/transfer_import"
"r3/handler/websocket"
- "r3/image"
+ "r3/ldap"
"r3/log"
"r3/login"
+ "r3/login/login_session"
"r3/scheduler"
"r3/tools"
"strings"
+ "sync/atomic"
"syscall"
"time"
@@ -54,9 +59,10 @@ import (
var (
// overwritten by build parameters
- appName string = "REI3"
- appNameShort string = "R3"
- appVersion string = "0.1.2.3"
+ appName string = "REI3"
+ appNameShort string = "R3"
+ appVersion string = "0.1.2.3"
+ appVersionClient string = "0.1.2.3"
// start parameters
cli struct {
@@ -86,16 +92,23 @@ var (
)
type program struct {
- embeddedDbOwned bool // whether this instance has started the embedded database
+ embeddedDbOwned atomic.Bool // whether this instance has started the embedded database
logger service.Logger // logs to the operating system if called as service, otherwise to stdOut
- stopping bool
+ stopping atomic.Bool
webServer *http.Server
}
func main() {
// set configuration parameters
- config.SetAppVersion(appVersion)
+ if err := config.SetAppVersion(appVersion, "service"); err != nil {
+ fmt.Printf("failed to set app version, %v\n", err)
+ return
+ }
+ if err := config.SetAppVersion(appVersionClient, "fatClient"); err != nil {
+ fmt.Printf("failed to set app client version, %v\n", err)
+ return
+ }
config.SetAppName(appName, appNameShort)
// process configuration overwrites from command line
@@ -133,9 +146,7 @@ func main() {
}
// initialize service
- var err error
prg := &program{}
- prg.stopping = false
svc, err := service.New(prg, svcConfig)
if err != nil {
@@ -150,13 +161,11 @@ func main() {
// listen to global shutdown channel
go func() {
- select {
- case <-scheduler.OsExit:
- prg.executeAborted(svc, nil)
- }
+ <-scheduler.OsExit
+ prg.executeAborted(svc, nil)
}()
- // add shut down in case of SIGTERM
+ // add shut down in case of SIGTERM (terminal closed)
if service.Interactive() {
signal.Notify(scheduler.OsExit, syscall.SIGTERM)
}
@@ -182,7 +191,10 @@ func main() {
// apply portable mode settings if enabled
if config.File.Portable {
- cli.dynamicPort = true
+ // compatibility fix: older portable configs (<3.10) had 443 as the default port
+ if config.File.Web.Port == 443 {
+ cli.dynamicPort = true
+ }
cli.http = true
cli.run = true
cli.open = true
@@ -194,8 +206,8 @@ func main() {
flag.PrintDefaults()
fmt.Printf("\n################################################################################\n")
- fmt.Printf("This is the executable of %s, the open application platform, v%s\n", appName, appVersion)
- fmt.Printf("Copyright (c) 2019-2022 Gabriel Victor Herbert\n\n")
+ fmt.Printf("This is the executable of %s, the open low-code platform, v%s\n", appName, appVersion)
+ fmt.Printf("Copyright (c) 2019-2025 Gabriel Victor Herbert\n\n")
fmt.Printf("%s can be installed as service (-install) or run from the console (-run).\n\n", appName)
fmt.Printf("When %s is running, use any modern browser to access it (port 443 by default).\n\n", appName)
fmt.Printf("For installation instructions, please refer to the included README file or visit\n")
@@ -261,8 +273,8 @@ func main() {
// main executable can be used to open the app in default browser even if its not started (-open without -run)
// used for shortcuts in start menu when installed on Windows systems with desktop experience
- // if dynamic port is used, we cannot open app without starting it (port is not known)
- if cli.open && !cli.dynamicPort {
+ // if dynamic port (0) is used, we cannot open app without starting it (port is not known)
+ if cli.open && config.File.Web.Port != 0 && !config.File.Portable {
protocol := "https"
if cli.http {
protocol = "http"
@@ -309,7 +321,7 @@ func (prg *program) execute(svc service.Service) {
// we own the embedded DB if we can successfully start it
// otherwise another instance might be running it
- prg.embeddedDbOwned = true
+ prg.embeddedDbOwned.Store(true)
}
// connect to database
@@ -321,19 +333,22 @@ func (prg *program) execute(svc service.Service) {
// check for first database start
if err := initialize.PrepareDbIfNew(); err != nil {
- prg.executeAborted(svc, fmt.Errorf("failed to initiate database on first start, %v", err))
+ prg.executeAborted(svc, fmt.Errorf("failed to initialize database on first start, %v", err))
return
}
// apply configuration from database
- if err := cluster.ConfigChanged(false, true, false); err != nil {
+ if err := config.LoadFromDb(); err != nil {
prg.executeAborted(svc, fmt.Errorf("failed to apply configuration from database, %v", err))
return
}
+ bruteforce.SetConfig()
+ config.ActivateLicense()
+ config.SetLogLevels()
- // store host details in cache (before cluster node startup)
- if err := cache.SetHostnameFromOs(); err != nil {
- prg.executeAborted(svc, fmt.Errorf("failed to load host details, %v", err))
+ // run automatic database upgrade if required
+ if err := upgrade.RunIfRequired(); err != nil {
+ prg.executeAborted(svc, fmt.Errorf("failed automatic upgrade of database, %v", err))
return
}
@@ -354,54 +369,38 @@ func (prg *program) execute(svc service.Service) {
return
}
- // run automatic database upgrade if required
- if err := upgrade.RunIfRequired(); err != nil {
- prg.executeAborted(svc, fmt.Errorf("failed automatic upgrade of database, %v", err))
- return
- }
-
- // setup cluster node with shared database
- if err := cluster.StartNode(); err != nil {
- prg.executeAborted(svc, fmt.Errorf("failed to setup cluster node, %v", err))
- return
- }
-
- // initialize module schema cache
- if err := cluster.SchemaChangedAll(false, false); err != nil {
- prg.executeAborted(svc, fmt.Errorf("failed to initialize schema cache, %v", err))
+ // store host details in cache (before cluster node startup)
+ if err := config.SetHostnameFromOs(); err != nil {
+ prg.executeAborted(svc, fmt.Errorf("failed to load host details, %v", err))
return
}
- // initialize LDAP cache
- if err := cache.LoadLdapMap(); err != nil {
- prg.executeAborted(svc, fmt.Errorf("failed to initialize LDAP cache, %v", err))
- return
- }
+ // prepare system & initialize caches once DB is ready
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutSysStart)
+ defer ctxCanc()
- // initialize mail account cache
- if err := cache.LoadMailAccountMap(); err != nil {
- prg.executeAborted(svc, fmt.Errorf("failed to initialize mail account cache, %v", err))
+ if err := initSystem(ctx); err != nil {
+ prg.executeAborted(svc, fmt.Errorf("failed to initialize system during startup, %v", err))
return
}
-
- // process token secret for future client authentication from database
- if err := config.ProcessTokenSecret(); err != nil {
- prg.executeAborted(svc, fmt.Errorf("failed to process token secret, %v", err))
+ if err := initCaches(ctx); err != nil {
+ prg.executeAborted(svc, fmt.Errorf("failed to initialize required caches during startup, %v", err))
return
}
-
- // set unique instance ID if empty
- if err := config.SetInstanceIdIfEmpty(); err != nil {
- prg.executeAborted(svc, fmt.Errorf("failed to set instance ID, %v", err))
+ if err := initCachesOptional(ctx); err != nil {
+ prg.executeAborted(svc, fmt.Errorf("failed to initialize optional caches during startup, %v", err))
return
}
// prepare image processing
- image.PrepareProcessing(cli.imageMagick)
+ data_image.PrepareProcessing(cli.imageMagick)
log.Info("server", fmt.Sprintf("is ready to start application (%s)", appVersion))
- // prepare web server
+ // start scheduler (must start after module cache)
+ go scheduler.Start()
+
+ // start web server
go websocket.StartBackgroundTasks()
mux := http.NewServeMux()
@@ -431,6 +430,7 @@ func (prg *program) execute(svc service.Service) {
mux.HandleFunc("/icon/upload", icon_upload.Handler)
mux.HandleFunc("/ics/download/", ics_download.Handler)
mux.HandleFunc("/license/upload", license_upload.Handler)
+ mux.HandleFunc("/manifests/", manifest_download.Handler)
mux.HandleFunc("/websocket", websocket.Handler)
mux.HandleFunc("/export/", transfer_export.Handler)
mux.HandleFunc("/import", transfer_import.Handler)
@@ -455,8 +455,8 @@ func (prg *program) execute(svc service.Service) {
}
log.Info("server", fmt.Sprintf("starting web handlers for '%s'", webServerString))
- // if dynamic port is used we can only now open the app in default browser (port is now known)
- if cli.open && cli.dynamicPort {
+ // if dynamic port (0) is used we can only now open the app in default browser (port is now known)
+ if cli.open && config.File.Web.Port != 0 {
protocol := "https"
if cli.http {
protocol = "http"
@@ -483,27 +483,130 @@ func (prg *program) execute(svc service.Service) {
prg.executeAborted(svc, err)
return
}
+
+ // PreferServerCipherSuites & CipherSuites are deprecated
+ // https://github.com/golang/go/issues/45430
prg.webServer.TLSConfig = &tls.Config{
GetCertificate: cache.GetCert,
}
+ switch config.File.Web.TlsMinVersion {
+ case "": // prior to 3.8.4, defaults to not apply min. TLS version
+ case "1.1":
+ prg.webServer.TLSConfig.MinVersion = tls.VersionTLS11
+ case "1.2":
+ prg.webServer.TLSConfig.MinVersion = tls.VersionTLS12
+ case "1.3":
+ prg.webServer.TLSConfig.MinVersion = tls.VersionTLS13
+ default:
+ log.Warning("server", "failed to apply min. TLS version",
+ fmt.Errorf("version '%s' is not supported (valid: 1.1, 1.2 or 1.3)", config.File.Web.TlsMinVersion))
+ }
if err := prg.webServer.ServeTLS(webListener, "", ""); err != nil && err != http.ErrServerClosed {
prg.executeAborted(svc, err)
}
}
}
+// init system with connected database
+func initSystem(ctx context.Context) error {
+ tx, err := db.Pool.Begin(ctx)
+ if err != nil {
+ return err
+ }
+ defer tx.Rollback(ctx)
+
+ // set unique instance ID if empty
+ if err := config.SetInstanceIdIfEmpty_tx(ctx, tx); err != nil {
+ return fmt.Errorf("failed to set instance ID, %v", err)
+ }
+
+ // process token secret for future client authentication from database
+ if err := config.ProcessTokenSecret_tx(ctx, tx); err != nil {
+ return fmt.Errorf("failed to process token secret, %v", err)
+ }
+
+ // setup cluster node with shared database
+ if err := cluster.StartNode_tx(ctx, tx); err != nil {
+ return err
+ }
+
+ // remove login session logs for this cluster node (in case they were not removed on shutdown)
+ if err := login_session.LogsRemoveForNode_tx(ctx, tx); err != nil {
+ return err
+ }
+
+ // temporary fix introduced in 3.10.1
+ // instances that were upgraded from 3.9 to 3.10 did not receive 'monospace' column style (because DB change was wrongly added to upgrade script '3.8->3.9' instead of '3.9->3.10')
+ // when 3.11 is released, we will fix this permanently in the '3.10->3.11' upgrade script; in the meantime it is applied on every boot
+ if _, err := tx.Exec(ctx, `
+ ALTER table app.column ALTER COLUMN styles TYPE TEXT[];
+ DROP TYPE app.column_style;
+ CREATE TYPE app.column_style AS ENUM ('bold', 'italic', 'alignEnd', 'alignMid', 'clipboard', 'hide', 'vertical', 'wrap', 'monospace', 'previewLarge', 'boolAtrIcon');
+ ALTER TABLE app.column ALTER COLUMN styles TYPE app.column_style[] USING styles::TEXT[]::app.column_style[];
+ `); err != nil {
+ return err
+ }
+ return tx.Commit(ctx)
+}
+
+// load required caches from database
+func initCaches(ctx context.Context) error {
+ tx, err := db.Pool.Begin(ctx)
+ if err != nil {
+ return err
+ }
+ defer tx.Rollback(ctx)
+
+ // module metadata must be loaded before the module schema (it informs which modules to load)
+ if err := cache.LoadModuleIdMapMeta_tx(ctx, tx); err != nil {
+ return fmt.Errorf("failed to initialize module meta cache, %v", err)
+ }
+ if err := cache.LoadCaptionMapCustom_tx(ctx, tx); err != nil {
+ return fmt.Errorf("failed to initialize custom caption map cache, %v", err)
+ }
+ if err := cache.LoadSchema_tx(ctx, tx); err != nil {
+ return fmt.Errorf("failed to initialize schema cache, %v", err)
+ }
+ if err := cache.LoadMailAccountMap_tx(ctx, tx); err != nil {
+ return fmt.Errorf("failed to initialize mail account cache, %v", err)
+ }
+ if err := cache.LoadOauthClientMap_tx(ctx, tx); err != nil {
+ return fmt.Errorf("failed to initialize oauth client cache, %v", err)
+ }
+ if err := cache.LoadPwaDomainMap_tx(ctx, tx); err != nil {
+ return fmt.Errorf("failed to initialize PWA domain cache, %v", err)
+ }
+ if err := ldap.UpdateCache_tx(ctx, tx); err != nil {
+ return fmt.Errorf("failed to initialize LDAP cache, %v", err)
+ }
+ return tx.Commit(ctx)
+}
+
+// load optional caches from the database; these might fail due to missing permissions
+func initCachesOptional(ctx context.Context) error {
+ tx, err := db.Pool.Begin(ctx)
+ if err != nil {
+ return err
+ }
+ defer tx.Rollback(ctx)
+
+ if err := cache.LoadSearchDictionaries_tx(ctx, tx); err != nil {
+ log.Error("server", "failed to read/update text search dictionaries", err)
+ return tx.Rollback(ctx)
+ }
+ return tx.Commit(ctx)
+}
+
// properly shuts down application, if execution is aborted prematurely
func (prg *program) executeAborted(svc service.Service, err error) {
-
if err != nil {
prg.logger.Error(err)
}
-
- // properly shut down
if service.Interactive() {
if err := prg.Stop(svc); err != nil {
prg.logger.Error(err)
}
+ // in cases like cluster node shutdown, there is no exit signal
os.Exit(0)
} else {
if err := svc.Stop(); err != nil {
@@ -518,25 +621,29 @@ func (prg *program) Stop(svc service.Service) error {
if !service.Interactive() {
prg.logger.Info("Stopping service...")
} else {
- // keep shut down message visible for 1 second
+ // keep shut down message visible
fmt.Println("Shutting down...")
- time.Sleep(1 * time.Second)
+ time.Sleep(500 * time.Millisecond)
}
- if prg.stopping {
+ if prg.stopping.Load() {
return nil
}
- prg.stopping = true
+ prg.stopping.Store(true)
+
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutShutdown)
+ defer ctxCanc()
+
+ // remove login session logs for this cluster node
+ if err := login_session.LogsRemoveForNode(ctx); err != nil {
+ prg.logger.Error(err)
+ }
// stop scheduler
scheduler.Stop()
// stop web server if running
if prg.webServer != nil {
-
- ctx, cancelWeb := context.WithTimeout(context.Background(), 5*time.Second)
- defer cancelWeb()
-
if err := prg.webServer.Shutdown(ctx); err != nil {
prg.logger.Error(err)
}
@@ -545,16 +652,15 @@ func (prg *program) Stop(svc service.Service) error {
// close database connection and deregister cluster node if DB is open
if db.Pool != nil {
- if err := cluster.StopNode(); err != nil {
+ if err := cluster.StopNode(ctx); err != nil {
prg.logger.Error(err)
}
-
db.Close()
log.Info("server", "stopped database handler")
}
// stop embedded database if owned
- if prg.embeddedDbOwned {
+ if prg.embeddedDbOwned.Load() {
if err := embedded.Stop(); err != nil {
prg.logger.Error(err)
}
diff --git a/repo/repo.go b/repo/repo.go
index 2caa2c4e..369b8373 100644
--- a/repo/repo.go
+++ b/repo/repo.go
@@ -2,65 +2,66 @@ package repo
import (
"bytes"
- "crypto/tls"
"encoding/json"
"fmt"
"io"
"net/http"
"r3/config"
- "time"
)
-func getToken(url string, skipVerify bool) (string, error) {
+func getToken(baseUrl string) (string, error) {
- var req struct {
+ var req = struct {
Username string `json:"username"`
Password string `json:"password"`
+ }{
+ Username: config.GetString("repoUser"),
+ Password: config.GetString("repoPass"),
}
- req.Username = config.GetString("repoUser")
- req.Password = config.GetString("repoPass")
var res struct {
Token string `json:"token"`
}
- if err := post(url, req, &res, skipVerify); err != nil {
+ if err := httpCallPost("", fmt.Sprintf("%s/api/auth", baseUrl), req, &res); err != nil {
return "", err
}
return res.Token, nil
}
-func getHttpClient(skipVerify bool) http.Client {
-
- tlsConfig := tls.Config{
- PreferServerCipherSuites: true,
- }
- if skipVerify {
- tlsConfig.InsecureSkipVerify = true
- }
- httpTransport := &http.Transport{
- TLSHandshakeTimeout: 5 * time.Second,
- TLSClientConfig: &tlsConfig,
- }
- return http.Client{
- Timeout: time.Second * 30,
- Transport: httpTransport,
- }
+func httpCallGet(token string, url string, reqIf interface{}, resIf interface{}) error {
+ return httpCall(http.MethodGet, token, url, reqIf, resIf)
+}
+func httpCallPost(token string, url string, reqIf interface{}, resIf interface{}) error {
+ return httpCall(http.MethodPost, token, url, reqIf, resIf)
}
+func httpCall(method string, token string, url string, reqIf interface{}, resIf interface{}) error {
-func post(url string, reqIf interface{}, resIf interface{}, skipVerify bool) error {
+ if method != http.MethodGet && method != http.MethodPost {
+ return fmt.Errorf("invalid HTTP method '%s'", method)
+ }
reqJson, err := json.Marshal(reqIf)
if err != nil {
return err
}
- httpReq, err := http.NewRequest(http.MethodPost, url, bytes.NewBuffer(reqJson))
+ httpReq, err := http.NewRequest(method, url, bytes.NewBuffer(reqJson))
if err != nil {
return err
}
+
httpReq.Header.Set("User-Agent", "r3-application")
- httpClient := getHttpClient(skipVerify)
+ if token != "" {
+ httpReq.Header.Set("Authorization", fmt.Sprintf("Bearer %s", token))
+ }
+
+ skipVerify := config.GetUint64("repoSkipVerify") == 1
+ httpClient, err := config.GetHttpClient(skipVerify, 30)
+ if err != nil {
+ return err
+ }
+
httpRes, err := httpClient.Do(httpReq)
if err != nil {
return err
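
The rewritten repo client above authenticates once against /api/auth and then sends the token as a Bearer header on every later call, instead of embedding it in each request body as the old data API did. Below is a standalone sketch of that pattern using only the standard library; the host, endpoints and field names are placeholders, not the repository's actual API.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

var client = &http.Client{Timeout: 30 * time.Second}

// getToken posts credentials and returns a bearer token (hypothetical endpoint).
func getToken(baseUrl, user, pass string) (string, error) {
	body, _ := json.Marshal(map[string]string{"username": user, "password": pass}) // cannot fail for a map[string]string
	res, err := client.Post(baseUrl+"/api/auth", "application/json", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	defer res.Body.Close()

	var out struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(res.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.Token, nil
}

// callGet performs an authenticated GET and decodes the JSON response into resIf.
func callGet(token, url string, resIf interface{}) error {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+token)

	res, err := client.Do(req)
	if err != nil {
		return err
	}
	defer res.Body.Close()

	if res.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status %d", res.StatusCode)
	}
	return json.NewDecoder(res.Body).Decode(resIf)
}

func main() {
	token, err := getToken("https://repo.example.com", "user", "pass")
	if err != nil {
		panic(err)
	}
	var modules []map[string]interface{}
	if err := callGet(token, "https://repo.example.com/api/lsw_repo/module/v1?limit=100&offset=0", &modules); err != nil {
		panic(err)
	}
	fmt.Println(len(modules), "modules")
}
```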
diff --git a/repo/repo_download.go b/repo/repo_download.go
index 98465d5a..0e3554f0 100644
--- a/repo/repo_download.go
+++ b/repo/repo_download.go
@@ -11,16 +11,16 @@ import (
"github.com/gofrs/uuid"
)
+// attribute ID of lsw_repo, module_release->file
var fileAttributeId = "b28e8f5c-ebeb-4565-941b-4d942eedc588"
func Download(fileId uuid.UUID) (string, error) {
baseUrl := config.GetString("repoUrl")
- dataAuthUrl := fmt.Sprintf("%s/data/auth", baseUrl)
skipVerify := config.GetUint64("repoSkipVerify") == 1
// get authentication token
- token, err := getToken(dataAuthUrl, skipVerify)
+ token, err := getToken(baseUrl)
if err != nil {
return "", err
}
@@ -29,7 +29,11 @@ func Download(fileId uuid.UUID) (string, error) {
fileUrl := fmt.Sprintf("%s/data/download/file.zip?attribute_id=%s&file_id=%s&token=%s",
baseUrl, fileAttributeId, fileId, token)
- httpClient := getHttpClient(skipVerify)
+ httpClient, err := config.GetHttpClient(skipVerify, 30)
+ if err != nil {
+ return "", err
+ }
+
httpRes, err := httpClient.Get(fileUrl)
if err != nil {
return "", err
diff --git a/repo/repo_feedback.go b/repo/repo_feedback.go
index ab079700..0e2e9675 100644
--- a/repo/repo_feedback.go
+++ b/repo/repo_feedback.go
@@ -1,13 +1,11 @@
package repo
import (
- "errors"
"fmt"
"r3/cache"
"r3/config"
- "r3/types"
+ "r3/handler"
- "github.com/gofrs/uuid"
"github.com/jackc/pgx/v5/pgtype"
)
@@ -17,100 +15,54 @@ func SendFeedback(isAdmin bool, moduleRelated bool, moduleId pgtype.UUID,
cache.Schema_mx.RLock()
defer cache.Schema_mx.RUnlock()
- baseUrl := config.GetString("repoUrl")
-
- dataAuthUrl := fmt.Sprintf("%s/data/auth", baseUrl)
- dataAccessUrl := fmt.Sprintf("%s/data/access", baseUrl)
-
- skipVerify := config.GetUint64("repoSkipVerify") == 1
-
releaseBuild := 0
- _, _, releaseBuildApp, _ := config.GetAppVersions()
-
if moduleId.Valid {
module, exists := cache.ModuleIdMap[moduleId.Bytes]
if !exists {
- return errors.New("unknown module")
+ return handler.ErrSchemaUnknownModule(moduleId.Bytes)
}
releaseBuild = module.ReleaseBuild
}
+ baseUrl := config.GetString("repoUrl")
+
// get authentication token
- token, err := getToken(dataAuthUrl, skipVerify)
+ token, err := getToken(baseUrl)
if err != nil {
return err
}
// send feedback
- var req struct {
- Token string `json:"token"`
- Action string `json:"action"`
- Request map[int]types.DataSet `json:"request"`
+ type feedbackRequest struct {
+ IsAdmin bool `json:"is_admin"`
+ ModuleRelated bool `json:"module_related"`
+ ModuleUuid pgtype.UUID `json:"module_uuid"`
+ FormUuid pgtype.UUID `json:"form_uuid"`
+ Mood int `json:"mood"`
+ Code int `json:"code"`
+ ReleaseBuild int `json:"release_build"`
+ ReleaseBuildApp int `json:"release_build_app"`
+ Text string `json:"text"`
+ InstanceUuid string `json:"instance_uuid"`
}
- req.Token = token
- req.Action = "set"
- req.Request = map[int]types.DataSet{
- 0: types.DataSet{
- IndexFrom: -1, // original relation
- RecordId: 0, // new record
- RelationId: uuid.FromStringOrNil("8664771d-cfee-44d7-bb8b-14ddf555a157"), // feedback
- AttributeId: uuid.Nil,
- Attributes: []types.DataSetAttribute{
- types.DataSetAttribute{
- AttributeId: uuid.FromStringOrNil("8a4a37e3-9952-4cbc-8c90-2aea780bb977"), // is_admin
- AttributeIdNm: pgtype.UUID{},
- Value: isAdmin,
- },
- types.DataSetAttribute{
- AttributeId: uuid.FromStringOrNil("256b6705-33c4-43b7-92cf-12f55190d2e2"), // module_related
- AttributeIdNm: pgtype.UUID{},
- Value: moduleRelated,
- },
- types.DataSetAttribute{
- AttributeId: uuid.FromStringOrNil("a668177b-81f1-4cad-bdc8-8ec97b8d5004"), // module_uuid
- AttributeIdNm: pgtype.UUID{},
- Value: moduleId,
- },
- types.DataSetAttribute{
- AttributeId: uuid.FromStringOrNil("88c6ceac-cdc7-4a7d-aed7-6e8ca7568b43"), // form_uuid
- AttributeIdNm: pgtype.UUID{},
- Value: formId,
- },
- types.DataSetAttribute{
- AttributeId: uuid.FromStringOrNil("7d8fa36e-c4d7-4b79-96d6-8271e17be586"), // mood
- AttributeIdNm: pgtype.UUID{},
- Value: mood,
- },
- types.DataSetAttribute{
- AttributeId: uuid.FromStringOrNil("e8a6badc-a423-433e-980f-991c2a4d9399"), // code
- AttributeIdNm: pgtype.UUID{},
- Value: code,
- },
- types.DataSetAttribute{
- AttributeId: uuid.FromStringOrNil("01490477-18c1-4aa2-85f1-ef90f173d22f"), // release_build
- AttributeIdNm: pgtype.UUID{},
- Value: releaseBuild,
- },
- types.DataSetAttribute{
- AttributeId: uuid.FromStringOrNil("e5e0fe54-38c7-4c00-8c48-2bdca0febc2b"), // release_build_app
- AttributeIdNm: pgtype.UUID{},
- Value: releaseBuildApp,
- },
- types.DataSetAttribute{
- AttributeId: uuid.FromStringOrNil("22e93eba-bbc1-4a63-9f36-deca6b74e78d"), // text
- AttributeIdNm: pgtype.UUID{},
- Value: text,
- },
- types.DataSetAttribute{
- AttributeId: uuid.FromStringOrNil("4639719a-52dc-4809-97dd-9b5c142f7203"), // instance_uuid
- AttributeIdNm: pgtype.UUID{},
- Value: config.GetString("instanceId"),
- },
- },
+ var req = struct {
+ Feedback feedbackRequest `json:"0(feedback)"`
+ }{
+ Feedback: feedbackRequest{
+ IsAdmin: isAdmin,
+ ModuleRelated: moduleRelated,
+ ModuleUuid: moduleId,
+ FormUuid: formId,
+ Mood: mood,
+ Code: code,
+ ReleaseBuild: releaseBuild,
+ ReleaseBuildApp: config.GetAppVersion().Build,
+ Text: text,
+ InstanceUuid: config.GetString("instanceId"),
},
}
- var res types.DataSetResult
- return post(dataAccessUrl, req, &res, skipVerify)
+ var res interface{}
+ return httpCallPost(token, fmt.Sprintf("%s/api/lsw_repo/feedback/v1", baseUrl), req, &res)
}
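
SendFeedback now posts a plain JSON object to the /api/lsw_repo/feedback/v1 endpoint instead of assembling a generic data-set request with hard-coded attribute UUIDs; the record fields sit under the relation-indexed key "0(feedback)". A small sketch of the resulting payload shape, with the field set trimmed for brevity:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// feedback mirrors a subset of the fields sent by SendFeedback.
type feedback struct {
	IsAdmin      bool   `json:"is_admin"`
	Mood         int    `json:"mood"`
	ReleaseBuild int    `json:"release_build"`
	Text         string `json:"text"`
}

func main() {
	req := struct {
		Feedback feedback `json:"0(feedback)"`
	}{
		Feedback: feedback{IsAdmin: true, Mood: 4, ReleaseBuild: 123, Text: "works well"},
	}
	out, _ := json.MarshalIndent(req, "", "  ")
	fmt.Println(string(out))
	// {
	//   "0(feedback)": {
	//     "is_admin": true,
	//     "mood": 4,
	//     "release_build": 123,
	//     "text": "works well"
	//   }
	// }
}
```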
diff --git a/repo/repo_get.go b/repo/repo_get.go
index fac1da11..a3b8703c 100644
--- a/repo/repo_get.go
+++ b/repo/repo_get.go
@@ -1,16 +1,17 @@
package repo
import (
+ "context"
"fmt"
- "r3/db"
"r3/tools"
"r3/types"
"github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
)
// returns modules from repository and total count
-func GetModule(byString string, languageCode string, limit int,
+func GetModule_tx(ctx context.Context, tx pgx.Tx, byString string, languageCode string, limit int,
offset int, getInstalled bool, getNew bool, getInStore bool) ([]types.RepoModule, int, error) {
repoModules := make([]types.RepoModule, 0)
@@ -21,7 +22,7 @@ func GetModule(byString string, languageCode string, limit int,
"rm.change_log", "rm.author", "rm.in_store", "rm.release_build",
"rm.release_build_app", "rm.release_date", "rm.file"})
- qb.Set("FROM", "instance.repo_module AS rm")
+ qb.SetFrom("instance.repo_module AS rm")
// simple filters
if !getInstalled {
@@ -75,10 +76,10 @@ func GetModule(byString string, languageCode string, limit int,
qb.AddPara("{NAME}", fmt.Sprintf("%%%s%%", byString))
}
qb.Add("ORDER", "rm.release_date DESC")
- qb.Set("OFFSET", offset)
+ qb.SetOffset(offset)
if limit != 0 {
- qb.Set("LIMIT", limit)
+ qb.SetLimit(limit)
}
query, err := qb.GetQuery()
@@ -86,7 +87,7 @@ func GetModule(byString string, languageCode string, limit int,
return repoModules, 0, err
}
- rows, err := db.Pool.Query(db.Ctx, query, qb.GetParaValues()...)
+ rows, err := tx.Query(ctx, query, qb.GetParaValues()...)
if err != nil {
return repoModules, 0, err
}
@@ -101,11 +102,15 @@ func GetModule(byString string, languageCode string, limit int,
return repoModules, 0, err
}
- rm.LanguageCodeMeta, err = getModuleMeta(rm.ModuleId)
+ repoModules = append(repoModules, rm)
+ }
+
+ for i, rm := range repoModules {
+ rm.LanguageCodeMeta, err = getModuleMeta_tx(ctx, tx, rm.ModuleId)
if err != nil {
return repoModules, 0, err
}
- repoModules = append(repoModules, rm)
+ repoModules[i] = rm
}
// get total
@@ -125,18 +130,18 @@ func GetModule(byString string, languageCode string, limit int,
return repoModules, 0, err
}
- if err := db.Pool.QueryRow(db.Ctx, query, qb.GetParaValues()...).Scan(&total); err != nil {
+ if err := tx.QueryRow(ctx, query, qb.GetParaValues()...).Scan(&total); err != nil {
return repoModules, 0, err
}
}
return repoModules, total, nil
}
-func getModuleMeta(moduleId uuid.UUID) (map[string]types.RepoModuleMeta, error) {
+func getModuleMeta_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID) (map[string]types.RepoModuleMeta, error) {
metaMap := make(map[string]types.RepoModuleMeta)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT language_code, title, description, support_page
FROM instance.repo_module_meta
WHERE module_id_wofk = $1
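
GetModule_tx now drains the module result set completely before fetching per-module meta data in a second pass. With pgx, a transaction holds a single connection, and that connection cannot run another query while a Rows result is still open, which is presumably what this reordering avoids. A minimal, self-contained sketch of the two-pass pattern; the DSN and table names are placeholders:

```go
package main

import (
	"context"
	"fmt"

	"github.com/jackc/pgx/v5"
)

type module struct {
	id   string
	meta map[string]string
}

func main() {
	ctx := context.Background()

	conn, err := pgx.Connect(ctx, "postgres://user:pass@localhost:5432/app") // placeholder DSN
	if err != nil {
		panic(err)
	}
	defer conn.Close(ctx)

	tx, err := conn.Begin(ctx)
	if err != nil {
		panic(err)
	}
	defer tx.Rollback(ctx)

	// pass 1: read and fully drain the first result set
	rows, err := tx.Query(ctx, "SELECT id FROM modules") // placeholder table
	if err != nil {
		panic(err)
	}
	mods := make([]module, 0)
	for rows.Next() {
		var m module
		if err := rows.Scan(&m.id); err != nil {
			panic(err)
		}
		mods = append(mods, m)
	}
	rows.Close()
	if err := rows.Err(); err != nil {
		panic(err)
	}

	// pass 2: the connection is free again, so per-row queries are safe now
	for i := range mods {
		mods[i].meta = make(map[string]string)
		metaRows, err := tx.Query(ctx, "SELECT language_code, title FROM module_meta WHERE module_id = $1", mods[i].id)
		if err != nil {
			panic(err)
		}
		for metaRows.Next() {
			var lang, title string
			if err := metaRows.Scan(&lang, &title); err != nil {
				panic(err)
			}
			mods[i].meta[lang] = title
		}
		metaRows.Close()
	}
	fmt.Println(len(mods), "modules loaded")
}
```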
diff --git a/repo/repo_update.go b/repo/repo_update.go
index 9a575658..d308a6a0 100644
--- a/repo/repo_update.go
+++ b/repo/repo_update.go
@@ -1,6 +1,8 @@
package repo
import (
+ "context"
+ "errors"
"fmt"
"r3/config"
"r3/db"
@@ -9,94 +11,60 @@ import (
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
+ "github.com/jackc/pgx/v5/pgtype"
)
-/* R3 repo entities
-author 49c10371-c3ee-4d42-8961-d6d8ccda7bc7
-author.name 295f5bd9-772a-41f0-aa81-530a0678e441
-
-language 820de67e-ee99-44f9-a37a-4a7d3ac7301c
-language.code 19bd7a3b-9b3d-45da-9c07-4d8f62874b35
-
-module 08dfb28b-dbb4-4b70-8231-142235516385
-module.name fbab278a-4898-4f46-a1d7-35d1a80ee3dc
-module.uuid 98bc635b-097e-4cf0-92c9-2bb97a7c2a5e
-module.in_store 0ba7005c-834b-4d2b-a967-d748f91c2bed
-module.author a72f2de6-e1ee-4432-804b-b57f44013f4c
-module.log_summary f36130a9-bfed-42dc-920f-036ffd0d35b0
-
-module_release a300afae-a8c5-4cfc-9375-d85f45c6347c
-module_release.file b28e8f5c-ebeb-4565-941b-4d942eedc588
-module_release.module 922dc949-873f-4a21-9699-8740c0491b3a
-module_release.release_build d0766fcc-7a68-490c-9c81-f542ad37109b
-module_release.release_build_app ce998cfd-a66f-423c-b82b-d2b48a21c288
-module_release.release_date 9f9b6cda-069d-405b-bbb8-c0d12bbce910
-
-module_transl_meta 12ae386b-d1d2-48b2-a60b-2d5a11c42826
-module_transl_meta.description 3cd8b8b1-3d3f-41b0-ba6c-d7ef567a686f
-module_transl_meta.language 8aa84747-8224-4f8d-baf1-2d87df374fe6
-module_transl_meta.module 1091d013-988c-442b-beff-c853e8df20a8
-module_transl_meta.support_page 4793cd87-0bc9-4797-9538-ca733007a1d1
-module_transl_meta.title 6f66272a-7713-45a8-9565-b0157939399b
-*/
-
-// update internal module repository from external data API
+// update internal module repository from external repository API
func Update() error {
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutSysTask)
+ defer ctxCanc()
- lastRun := config.GetUint64("repoChecked")
- thisRun := uint64(tools.GetTimeUnix())
+ tx, err := db.Pool.Begin(ctx)
+ if err != nil {
+ return err
+ }
+ defer tx.Rollback(ctx)
+ if err := Update_tx(ctx, tx); err != nil {
+ return err
+ }
+ return tx.Commit(ctx)
+}
+func Update_tx(ctx context.Context, tx pgx.Tx) error {
baseUrl := config.GetString("repoUrl")
-
- dataAuthUrl := fmt.Sprintf("%s/data/auth", baseUrl)
- dataAccessUrl := fmt.Sprintf("%s/data/access", baseUrl)
-
- skipVerify := config.GetUint64("repoSkipVerify") == 1
repoModuleMap := make(map[uuid.UUID]types.RepoModule)
// get authentication token
- token, err := getToken(dataAuthUrl, skipVerify)
+ token, err := getToken(baseUrl)
if err != nil {
return err
}
// get modules, their latest releases and translated module meta data
- if err := getModules(token, dataAccessUrl, skipVerify, repoModuleMap); err != nil {
+ if err := getModules(token, baseUrl, repoModuleMap); err != nil {
return fmt.Errorf("failed to get modules, %w", err)
}
- if err := getModuleReleases(token, dataAccessUrl, skipVerify, repoModuleMap, lastRun); err != nil {
- return fmt.Errorf("failed to get module releases, %w", err)
- }
- if err := getModuleMetas(token, dataAccessUrl, skipVerify, repoModuleMap); err != nil {
+ if err := getModuleMetas(token, baseUrl, repoModuleMap); err != nil {
return fmt.Errorf("failed to get meta info for modules, %w", err)
}
// apply changes to local module store
- tx, err := db.Pool.Begin(db.Ctx)
- if err != nil {
- return err
- }
- defer tx.Rollback(db.Ctx)
-
- if err := removeModules_tx(tx, repoModuleMap); err != nil {
+ if err := removeModules_tx(ctx, tx, repoModuleMap); err != nil {
return fmt.Errorf("failed to remove modules, %w", err)
}
- if err := addModules_tx(tx, repoModuleMap); err != nil {
+ if err := addModules_tx(ctx, tx, repoModuleMap); err != nil {
return fmt.Errorf("failed to add modules, %w", err)
}
- if err := config.SetUint64_tx(tx, "repoChecked", thisRun); err != nil {
- return err
- }
- return tx.Commit(db.Ctx)
+ return config.SetUint64_tx(ctx, tx, "repoChecked", uint64(tools.GetTimeUnix()))
}
-func addModules_tx(tx pgx.Tx, repoModuleMap map[uuid.UUID]types.RepoModule) error {
+func addModules_tx(ctx context.Context, tx pgx.Tx, repoModuleMap map[uuid.UUID]types.RepoModule) error {
for _, sm := range repoModuleMap {
// add module and release data
var exists bool
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT EXISTS (
SELECT module_id_wofk
FROM instance.repo_module
@@ -107,7 +75,7 @@ func addModules_tx(tx pgx.Tx, repoModuleMap map[uuid.UUID]types.RepoModule) erro
}
if !exists {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO instance.repo_module (
module_id_wofk, name, change_log, author, in_store,
release_build, release_build_app, release_date, file
@@ -122,7 +90,7 @@ func addModules_tx(tx pgx.Tx, repoModuleMap map[uuid.UUID]types.RepoModule) erro
} else {
// if no release is set, update module data only
if sm.ReleaseBuild == 0 {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE instance.repo_module
SET name = $1, change_log = $2, author = $3, in_store = $4
WHERE module_id_wofk = $5
@@ -132,7 +100,7 @@ func addModules_tx(tx pgx.Tx, repoModuleMap map[uuid.UUID]types.RepoModule) erro
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE instance.repo_module
SET name = $1, change_log = $2, author = $3, in_store = $4,
release_build = $5, release_build_app = $6,
@@ -148,7 +116,7 @@ func addModules_tx(tx pgx.Tx, repoModuleMap map[uuid.UUID]types.RepoModule) erro
}
// add translated module meta
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM instance.repo_module_meta
WHERE module_id_wofk = $1
`, sm.ModuleId); err != nil {
@@ -157,7 +125,7 @@ func addModules_tx(tx pgx.Tx, repoModuleMap map[uuid.UUID]types.RepoModule) erro
for languageCode, meta := range sm.LanguageCodeMeta {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO instance.repo_module_meta (
module_id_wofk, language_code, title,
description, support_page
@@ -173,20 +141,20 @@ func addModules_tx(tx pgx.Tx, repoModuleMap map[uuid.UUID]types.RepoModule) erro
return nil
}
-func removeModules_tx(tx pgx.Tx, repoModuleMap map[uuid.UUID]types.RepoModule) error {
+func removeModules_tx(ctx context.Context, tx pgx.Tx, repoModuleMap map[uuid.UUID]types.RepoModule) error {
moduleIds := make([]uuid.UUID, 0)
for id, _ := range repoModuleMap {
moduleIds = append(moduleIds, id)
}
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM instance.repo_module
WHERE module_id_wofk <> ALL($1)
`, moduleIds); err != nil {
return err
}
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM instance.repo_module_meta
WHERE module_id_wofk <> ALL($1)
`, moduleIds); err != nil {
@@ -194,3 +162,112 @@ func removeModules_tx(tx pgx.Tx, repoModuleMap map[uuid.UUID]types.RepoModule) e
}
return nil
}
+
+func getModules(token string, baseUrl string, repoModuleMap map[uuid.UUID]types.RepoModule) error {
+
+ type moduleResponse struct {
+ Module struct {
+ Uuid uuid.UUID `json:"uuid"`
+ Name string `json:"name"`
+ InStore bool `json:"in_store"`
+ LogSummary pgtype.Text `json:"log_summary"`
+ } `json:"0(module)"`
+ Release struct {
+ ReleaseBuild int `json:"release_build"`
+ ReleaseBuildApp int `json:"release_build_app"`
+ ReleaseDate int64 `json:"release_date"`
+ File []types.DataGetValueFile `json:"file"`
+ } `json:"1(module_release)"`
+ Author struct {
+ Name string `json:"name"`
+ } `json:"2(author)"`
+ }
+
+ limit := 100
+ offset := 0
+
+ for {
+ url := fmt.Sprintf("%s/api/lsw_repo/module/v1?limit=%d&offset=%d", baseUrl, limit, offset)
+
+ var res []moduleResponse
+ if err := httpCallGet(token, url, "", &res); err != nil {
+ return err
+ }
+
+ for _, mod := range res {
+
+ if len(mod.Release.File) != 1 {
+ return fmt.Errorf("module release does not have exactly 1 file, file count: %d",
+ len(mod.Release.File))
+ }
+
+ repoModuleMap[mod.Module.Uuid] = types.RepoModule{
+ ModuleId: mod.Module.Uuid,
+ Name: mod.Module.Name,
+ InStore: mod.Module.InStore,
+ ChangeLog: mod.Module.LogSummary,
+ ReleaseBuild: mod.Release.ReleaseBuild,
+ ReleaseBuildApp: mod.Release.ReleaseBuildApp,
+ ReleaseDate: mod.Release.ReleaseDate,
+ FileId: mod.Release.File[0].Id,
+ Author: mod.Author.Name,
+ LanguageCodeMeta: make(map[string]types.RepoModuleMeta),
+ }
+ }
+
+ if len(res) >= limit {
+ offset += limit
+ continue
+ }
+ break
+ }
+ return nil
+}
+
+func getModuleMetas(token string, baseUrl string, repoModuleMap map[uuid.UUID]types.RepoModule) error {
+
+ type moduleMetaResponse struct {
+ Meta struct {
+ Description string `json:"description"`
+ SupportPage string `json:"support_page"`
+ Title string `json:"title"`
+ } `json:"0(module_transl_meta)"`
+ Module struct {
+ Uuid uuid.UUID `json:"uuid"`
+ } `json:"1(module)"`
+ Language struct {
+ Code string `json:"code"`
+ } `json:"2(language)"`
+ }
+
+ limit := 100
+ offset := 0
+
+ for {
+ url := fmt.Sprintf("%s/api/lsw_repo/module_meta/v1?limit=%d&offset=%d", baseUrl, limit, offset)
+
+ var res []moduleMetaResponse
+ if err := httpCallGet(token, url, "", &res); err != nil {
+ return err
+ }
+
+ for _, mod := range res {
+ if _, exists := repoModuleMap[mod.Module.Uuid]; !exists {
+ return errors.New("meta for non-existing module")
+ }
+
+ repoModuleMap[mod.Module.Uuid].LanguageCodeMeta[mod.Language.Code] = types.RepoModuleMeta{
+ Description: mod.Meta.Description,
+ SupportPage: mod.Meta.SupportPage,
+ Title: mod.Meta.Title,
+ }
+ }
+
+ if len(res) >= limit {
+ offset += limit
+ continue
+ }
+ break
+ }
+ return nil
+}
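
Update() is reduced to a thin wrapper that owns context and transaction lifetime while Update_tx() does the work, the same begin/defer-rollback/commit split applied to most functions in this change set. A standalone sketch of that wrapper shape; the DSN and statement are placeholders, and the _tx suffix simply mirrors the codebase naming convention:

```go
package main

import (
	"context"
	"time"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgxpool"
)

var pool *pgxpool.Pool

// update owns context and transaction lifetime.
func update() error {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	tx, err := pool.Begin(ctx)
	if err != nil {
		return err
	}
	// a rollback after a successful commit is a no-op
	defer tx.Rollback(ctx)

	if err := update_tx(ctx, tx); err != nil {
		return err
	}
	return tx.Commit(ctx)
}

// update_tx does the actual work and can also be composed into a larger transaction.
func update_tx(ctx context.Context, tx pgx.Tx) error {
	_, err := tx.Exec(ctx, "UPDATE example_table SET checked_at = NOW()") // placeholder statement
	return err
}

func main() {
	var err error
	pool, err = pgxpool.New(context.Background(), "postgres://user:pass@localhost:5432/app") // placeholder DSN
	if err != nil {
		panic(err)
	}
	defer pool.Close()

	if err := update(); err != nil {
		panic(err)
	}
}
```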
diff --git a/repo/repo_update_metas.go b/repo/repo_update_metas.go
deleted file mode 100644
index 1b1071b6..00000000
--- a/repo/repo_update_metas.go
+++ /dev/null
@@ -1,112 +0,0 @@
-package repo
-
-import (
- "errors"
- "r3/tools"
- "r3/types"
-
- "github.com/gofrs/uuid"
- "github.com/jackc/pgx/v5/pgtype"
-)
-
-func getModuleMetas(token string, url string, skipVerify bool,
- repoModuleMap map[uuid.UUID]types.RepoModule) error {
-
- var req struct {
- Token string `json:"token"`
- Action string `json:"action"`
- Request types.DataGet `json:"request"`
- }
- req.Token = token
- req.Action = "get"
-
- req.Request = types.DataGet{
- RelationId: uuid.FromStringOrNil("08dfb28b-dbb4-4b70-8231-142235516385"), // module
- Expressions: []types.DataGetExpression{
- types.DataGetExpression{ // module UUID
- AttributeId: tools.UuidStringToNullUuid("98bc635b-097e-4cf0-92c9-2bb97a7c2a5e"),
- AttributeIdNm: pgtype.UUID{},
- Aggregator: pgtype.Text{},
- Index: 0,
- },
- types.DataGetExpression{ // module meta description
- AttributeId: tools.UuidStringToNullUuid("3cd8b8b1-3d3f-41b0-ba6c-d7ef567a686f"),
- AttributeIdNm: pgtype.UUID{},
- Aggregator: pgtype.Text{},
- Index: 1,
- },
- types.DataGetExpression{ // module meta support page
- AttributeId: tools.UuidStringToNullUuid("4793cd87-0bc9-4797-9538-ca733007a1d1"),
- AttributeIdNm: pgtype.UUID{},
- Aggregator: pgtype.Text{},
- Index: 1,
- },
- types.DataGetExpression{ // module meta title
- AttributeId: tools.UuidStringToNullUuid("6f66272a-7713-45a8-9565-b0157939399b"),
- AttributeIdNm: pgtype.UUID{},
- Aggregator: pgtype.Text{},
- Index: 1,
- },
- types.DataGetExpression{ // language code
- AttributeId: tools.UuidStringToNullUuid("19bd7a3b-9b3d-45da-9c07-4d8f62874b35"),
- AttributeIdNm: pgtype.UUID{},
- Aggregator: pgtype.Text{},
- Index: 2,
- },
- },
- IndexSource: 0,
- Joins: []types.DataGetJoin{
- types.DataGetJoin{ // module translation meta via module
- AttributeId: uuid.FromStringOrNil("1091d013-988c-442b-beff-c853e8df20a8"),
- Index: 1,
- IndexFrom: 0,
- Connector: "INNER",
- },
- types.DataGetJoin{ // language via module translation meta language
- AttributeId: uuid.FromStringOrNil("8aa84747-8224-4f8d-baf1-2d87df374fe6"),
- Index: 2,
- IndexFrom: 1,
- Connector: "INNER",
- },
- },
- }
-
- var res struct {
- Count int `json:"count"`
- Rows []types.DataGetResult `json:"rows"`
- }
- if err := post(url, req, &res, skipVerify); err != nil {
- return err
- }
-
- for _, row := range res.Rows {
- if len(row.Values) != 5 {
- return errors.New("invalid value count for store module release")
- }
-
- languageCode := ""
- moduleId := uuid.UUID{}
- meta := types.RepoModuleMeta{}
-
- for i, value := range row.Values {
- switch i {
- case 0:
- moduleId = uuid.FromStringOrNil(value.(string))
-
- if _, exists := repoModuleMap[moduleId]; !exists {
- return errors.New("meta for non-existing module")
- }
- case 1:
- meta.Description = value.(string)
- case 2:
- meta.SupportPage = value.(string)
- case 3:
- meta.Title = value.(string)
- case 4:
- languageCode = value.(string)
- }
- }
- repoModuleMap[moduleId].LanguageCodeMeta[languageCode] = meta
- }
- return nil
-}
diff --git a/repo/repo_update_modules.go b/repo/repo_update_modules.go
deleted file mode 100644
index 8d4b7979..00000000
--- a/repo/repo_update_modules.go
+++ /dev/null
@@ -1,107 +0,0 @@
-package repo
-
-import (
- "errors"
- "r3/tools"
- "r3/types"
-
- "github.com/gofrs/uuid"
- "github.com/jackc/pgx/v5/pgtype"
-)
-
-func getModules(token string, url string, skipVerify bool,
- repoModuleMap map[uuid.UUID]types.RepoModule) error {
-
- var req struct {
- Token string `json:"token"`
- Action string `json:"action"`
- Request types.DataGet `json:"request"`
- }
- req.Token = token
- req.Action = "get"
-
- req.Request = types.DataGet{
- RelationId: uuid.FromStringOrNil("08dfb28b-dbb4-4b70-8231-142235516385"), // module
- Expressions: []types.DataGetExpression{
- types.DataGetExpression{ // module UUID
- AttributeId: tools.UuidStringToNullUuid("98bc635b-097e-4cf0-92c9-2bb97a7c2a5e"),
- AttributeIdNm: pgtype.UUID{},
- Aggregator: pgtype.Text{},
- Index: 0,
- },
- types.DataGetExpression{ // module name
- AttributeId: tools.UuidStringToNullUuid("fbab278a-4898-4f46-a1d7-35d1a80ee3dc"),
- AttributeIdNm: pgtype.UUID{},
- Aggregator: pgtype.Text{},
- Index: 0,
- },
- types.DataGetExpression{ // module is visible in store?
- AttributeId: tools.UuidStringToNullUuid("0ba7005c-834b-4d2b-a967-d748f91c2bed"),
- AttributeIdNm: pgtype.UUID{},
- Aggregator: pgtype.Text{},
- Index: 0,
- },
- types.DataGetExpression{ // module change log
- AttributeId: tools.UuidStringToNullUuid("f36130a9-bfed-42dc-920f-036ffd0d35b0"),
- AttributeIdNm: pgtype.UUID{},
- Aggregator: pgtype.Text{},
- Index: 0,
- },
- types.DataGetExpression{ // author name
- AttributeId: tools.UuidStringToNullUuid("295f5bd9-772a-41f0-aa81-530a0678e441"),
- AttributeIdNm: pgtype.UUID{},
- Aggregator: pgtype.Text{},
- Index: 1,
- },
- },
- IndexSource: 0,
- Joins: []types.DataGetJoin{
- types.DataGetJoin{ // author via module author
- AttributeId: uuid.FromStringOrNil("a72f2de6-e1ee-4432-804b-b57f44013f4c"),
- Index: 1,
- IndexFrom: 0,
- Connector: "INNER",
- },
- },
- }
-
- var res struct {
- Count int `json:"count"`
- Rows []types.DataGetResult `json:"rows"`
- }
- if err := post(url, req, &res, skipVerify); err != nil {
- return err
- }
-
- for _, row := range res.Rows {
- if len(row.Values) != 5 {
- return errors.New("invalid value count for store module")
- }
-
- repo := types.RepoModule{}
-
- for i, value := range row.Values {
- switch i {
- case 0:
- repo.ModuleId = uuid.FromStringOrNil(value.(string))
- case 1:
- repo.Name = value.(string)
- case 2:
- repo.InStore = value.(bool)
- case 3:
- repo.ChangeLog = pgtype.Text{}
- if value != nil {
- repo.ChangeLog = pgtype.Text{
- String: value.(string),
- Valid: true,
- }
- }
- case 4:
- repo.Author = value.(string)
- }
- }
- repo.LanguageCodeMeta = make(map[string]types.RepoModuleMeta)
- repoModuleMap[repo.ModuleId] = repo
- }
- return nil
-}
diff --git a/repo/repo_update_releases.go b/repo/repo_update_releases.go
deleted file mode 100644
index 87337368..00000000
--- a/repo/repo_update_releases.go
+++ /dev/null
@@ -1,168 +0,0 @@
-package repo
-
-import (
- "encoding/json"
- "errors"
- "fmt"
- "r3/compatible"
- "r3/tools"
- "r3/types"
-
- "github.com/gofrs/uuid"
- "github.com/jackc/pgx/v5/pgtype"
-)
-
-func getModuleReleases(token string, url string, skipVerify bool,
- repoModuleMap map[uuid.UUID]types.RepoModule, lastRun uint64) error {
-
- var req struct {
- Token string `json:"token"`
- Action string `json:"action"`
- Request types.DataGet `json:"request"`
- }
- req.Token = token
- req.Action = "get"
-
- req.Request = types.DataGet{
- RelationId: uuid.FromStringOrNil("a300afae-a8c5-4cfc-9375-d85f45c6347c"), // module release
- Expressions: []types.DataGetExpression{
- types.DataGetExpression{ // module UUID
- AttributeId: tools.UuidStringToNullUuid("98bc635b-097e-4cf0-92c9-2bb97a7c2a5e"),
- AttributeIdNm: pgtype.UUID{},
- Aggregator: pgtype.Text{},
- Index: 1,
- },
- types.DataGetExpression{ // module release build
- AttributeId: tools.UuidStringToNullUuid("d0766fcc-7a68-490c-9c81-f542ad37109b"),
- AttributeIdNm: pgtype.UUID{},
- Aggregator: pgtype.Text{},
- Index: 0,
- },
- types.DataGetExpression{ // module release application build
- AttributeId: tools.UuidStringToNullUuid("ce998cfd-a66f-423c-b82b-d2b48a21c288"),
- AttributeIdNm: pgtype.UUID{},
- Aggregator: pgtype.Text{},
- Index: 0,
- },
- types.DataGetExpression{ // module release date
- AttributeId: tools.UuidStringToNullUuid("9f9b6cda-069d-405b-bbb8-c0d12bbce910"),
- AttributeIdNm: pgtype.UUID{},
- Aggregator: pgtype.Text{},
- Index: 0,
- },
- types.DataGetExpression{ // module release file
- AttributeId: tools.UuidStringToNullUuid("b28e8f5c-ebeb-4565-941b-4d942eedc588"),
- AttributeIdNm: pgtype.UUID{},
- Aggregator: pgtype.Text{},
- Index: 0,
- },
- },
- IndexSource: 0,
- Joins: []types.DataGetJoin{
- types.DataGetJoin{ // module
- AttributeId: uuid.FromStringOrNil("922dc949-873f-4a21-9699-8740c0491b3a"),
- Index: 1,
- IndexFrom: 0,
- Connector: "INNER",
- },
- },
- Filters: []types.DataGetFilter{
- types.DataGetFilter{
- Connector: "AND",
- Operator: ">",
- Side0: types.DataGetFilterSide{
- AttributeId: pgtype.UUID{ // module release date
- Bytes: uuid.FromStringOrNil("9f9b6cda-069d-405b-bbb8-c0d12bbce910"),
- Valid: true,
- },
- QueryAggregator: pgtype.Text{},
- },
- Side1: types.DataGetFilterSide{
- AttributeId: pgtype.UUID{},
- QueryAggregator: pgtype.Text{},
- Value: lastRun,
- },
- },
- },
- Orders: []types.DataGetOrder{
- types.DataGetOrder{
- AttributeId: pgtype.UUID{ // module release build
- Bytes: uuid.FromStringOrNil("d0766fcc-7a68-490c-9c81-f542ad37109b"),
- Valid: true,
- },
- Index: pgtype.Int4{
- Int32: 0,
- Valid: true,
- },
- ExpressionPos: pgtype.Int4{},
- Ascending: false,
- },
- },
- }
-
- var res struct {
- Count int `json:"count"`
- Rows []types.DataGetResult `json:"rows"`
- }
- if err := post(url, req, &res, skipVerify); err != nil {
- return err
- }
-
- moduleIdsAdded := make([]uuid.UUID, 0)
-
- for _, row := range res.Rows {
- if len(row.Values) != 5 {
- return errors.New("invalid value count for store module release")
- }
-
- repoModule := types.RepoModule{}
-
- for i, value := range row.Values {
-
- switch i {
- case 0:
- moduleId := uuid.FromStringOrNil(value.(string))
-
- // add only first release per module (are sorted descending by build)
- if tools.UuidInSlice(moduleId, moduleIdsAdded) {
- break
- }
-
- if _, exists := repoModuleMap[moduleId]; !exists {
- return errors.New("release for non-existing module")
- }
- repoModule = repoModuleMap[moduleId]
- case 1:
- repoModule.ReleaseBuild = int(value.(float64))
- case 2:
- repoModule.ReleaseBuildApp = int(value.(float64))
- case 3:
- repoModule.ReleaseDate = int(value.(float64))
- case 4:
- if value == nil {
- return fmt.Errorf("no files for module release")
- }
-
- filesJson, err := json.Marshal(value)
- if err != nil {
- return err
- }
-
- files := compatible.FixLegacyFileAttributeValue(filesJson)
-
- if len(files) != 1 {
- return fmt.Errorf("module release must have exactly 1 file, count: %d",
- len(files))
- }
- repoModule.FileId = files[0].Id
- moduleIdsAdded = append(moduleIdsAdded, repoModule.ModuleId)
- }
- }
-
- // only the latest release is used, module ID is not set for subsequent ones
- if repoModule.ModuleId != uuid.Nil {
- repoModuleMap[repoModule.ModuleId] = repoModule
- }
- }
- return nil
-}
diff --git a/request/request.go b/request/request.go
index df84941c..a8670940 100644
--- a/request/request.go
+++ b/request/request.go
@@ -10,41 +10,40 @@ import (
"r3/config"
"r3/db"
"r3/handler"
+ "r3/ldap"
"r3/log"
"r3/types"
- "strconv"
- "time"
"github.com/jackc/pgx/v5"
)
-func ExecTransaction(ctxClient context.Context, loginId int64, isAdmin bool,
- isNoAuth bool, reqTrans types.RequestTransaction,
- resTrans types.ResponseTransaction) types.ResponseTransaction {
+// executes a websocket transaction with multiple requests within a single DB transaction
+func ExecTransaction(ctx context.Context, address string, loginId int64, isAdmin bool, device types.WebsocketClientDevice,
+ isNoAuth bool, reqTrans types.RequestTransaction, clearDbCache bool) ([]types.Response, error) {
- // start transaction
- ctx, ctxCancel := context.WithTimeout(ctxClient,
- time.Duration(int64(config.GetUint64("dbTimeoutDataWs")))*time.Second)
-
- defer ctxCancel()
+ responses := make([]types.Response, 0)
tx, err := db.Pool.Begin(ctx)
if err != nil {
log.Error("websocket", "cannot begin transaction", err)
- resTrans.Error = handler.ErrGeneral
- return resTrans
+ return responses, errors.New(handler.ErrGeneral)
+ }
+ defer tx.Rollback(ctx)
+
+ if clearDbCache {
+ if err := tx.Conn().DeallocateAll(ctx); err != nil {
+ log.Error("websocket", "failed to deallocate DB connection", err)
+ return responses, err
+ }
}
- // set local transaction configuration parameters
- // these are used by system functions, such as instance.get_login_id()
- if _, err := tx.Exec(ctx, `
- SELECT SET_CONFIG('r3.login_id',$1,TRUE)
- `, strconv.FormatInt(loginId, 10)); err != nil {
+ // set session parameters, used by system functions such as instance.get_user_id()
+ if err := db.SetSessionConfig_tx(ctx, tx, loginId); err != nil {
log.Error("websocket", fmt.Sprintf("TRANSACTION %d, transaction config failure (login ID %d)",
reqTrans.TransactionNr, loginId), err)
- return resTrans
+ return responses, err
}
// work through requests
@@ -53,55 +52,40 @@ func ExecTransaction(ctxClient context.Context, loginId int64, isAdmin bool,
log.Info("websocket", fmt.Sprintf("TRANSACTION %d, %s %s, payload: %s",
reqTrans.TransactionNr, req.Action, req.Ressource, req.Payload))
- payload, err := Exec_tx(ctx, tx, loginId, isAdmin, isNoAuth,
- req.Ressource, req.Action, req.Payload)
-
- if err == nil {
- // all clear, prepare response payload
- var res types.Response
- res.Payload, err = json.Marshal(payload)
- if err == nil {
- resTrans.Responses = append(resTrans.Responses, res)
- continue
+ payload, err := Exec_tx(ctx, tx, address, loginId, isAdmin, device, isNoAuth, req.Ressource, req.Action, req.Payload)
+ if err != nil {
+ returnErr, isExpected := handler.ConvertToErrCode(err, !isAdmin)
+ if !isExpected {
+ log.Warning("websocket", fmt.Sprintf("TRANSACTION %d, request %s %s failure (login ID %d)",
+ reqTrans.TransactionNr, req.Ressource, req.Action, loginId), err)
}
+ return responses, returnErr
}
- // error case, convert to error code for requestor
- returnErr, isExpectedErr := handler.ConvertToErrCode(err, !isAdmin)
- if !isExpectedErr {
- log.Warning("websocket", fmt.Sprintf("TRANSACTION %d, request %s %s failure (login ID %d)",
- reqTrans.TransactionNr, req.Ressource, req.Action, loginId), err)
+ var res types.Response
+ res.Payload, err = json.Marshal(payload)
+ if err != nil {
+ return responses, err
}
-
- resTrans.Error = fmt.Sprintf("%v", returnErr)
- resTrans.Responses = make([]types.Response, 0) // clear all responses
- break
+ responses = append(responses, res)
}
- // check if error occured in any request
- if resTrans.Error == "" {
- if err := tx.Commit(ctx); err != nil {
-
- returnErr, isExpectedErr := handler.ConvertToErrCode(err, !isAdmin)
- if !isExpectedErr {
- log.Warning("websocket", fmt.Sprintf("TRANSACTION %d, commit failure (login ID %d)",
- reqTrans.TransactionNr, loginId), err)
- }
- resTrans.Error = fmt.Sprintf("%v", returnErr)
- resTrans.Responses = make([]types.Response, 0) // clear all responses
-
- tx.Rollback(ctx)
+ if err := tx.Commit(ctx); err != nil {
+ returnErr, isExpected := handler.ConvertToErrCode(err, !isAdmin)
+ if !isExpected {
+ log.Warning("websocket", fmt.Sprintf("TRANSACTION %d, commit failure (login ID %d)",
+ reqTrans.TransactionNr, loginId), err)
}
- } else {
- tx.Rollback(ctx)
+ return responses, returnErr
}
- return resTrans
+ return responses, nil
}
-func Exec_tx(ctx context.Context, tx pgx.Tx, loginId int64, isAdmin bool, isNoAuth bool,
- ressource string, action string, reqJson json.RawMessage) (interface{}, error) {
+func Exec_tx(ctx context.Context, tx pgx.Tx, address string, loginId int64, isAdmin bool,
+ device types.WebsocketClientDevice, isNoAuth bool, ressource string, action string,
+ reqJson json.RawMessage) (interface{}, error) {
- // public requests
+ // public requests: accessible to all
switch ressource {
case "public":
switch action {
@@ -110,11 +94,30 @@ func Exec_tx(ctx context.Context, tx pgx.Tx, loginId int64, isAdmin bool, isNoAu
}
}
- // authorized requests: non-admin
if loginId == 0 {
return nil, errors.New(handler.ErrUnauthorized)
}
+ // authorized requests: fat-client
+ if device == types.WebsocketClientDeviceFatClient {
+ switch ressource {
+ case "clientApp":
+ switch action {
+ case "getBuild": // current client app build
+ return config.GetAppVersionClient().Build, nil
+ }
+ case "clientEvent":
+ switch action {
+ case "exec":
+ return clientEventExecFatClient_tx(ctx, tx, reqJson, loginId, address)
+ case "get":
+ return clientEventGetFatClient_tx(ctx, tx, loginId)
+ }
+ }
+ return nil, errors.New(handler.ErrUnauthorized)
+ }
+
+ // authorized requests: non-admin
switch ressource {
case "data":
switch action {
@@ -131,66 +134,117 @@ func Exec_tx(ctx context.Context, tx pgx.Tx, loginId int64, isAdmin bool, isNoAu
case "setKeys":
return DataSetKeys_tx(ctx, tx, reqJson)
}
+ case "event":
+ switch action {
+ case "clientEventsChanged":
+ return eventClientEventsChanged_tx(ctx, tx, loginId, address)
+ case "filesCopied":
+ return eventFilesCopied_tx(ctx, tx, reqJson, loginId, address)
+ case "fileRequested":
+ return eventFileRequested_tx(ctx, tx, reqJson, loginId, address)
+ case "keystrokesRequested":
+ return eventKeystrokesRequested_tx(ctx, tx, reqJson, loginId, address)
+ }
case "feedback":
switch action {
case "send":
- return FeedbackSend_tx(tx, reqJson)
+ return FeedbackSend(reqJson)
}
case "file":
switch action {
- case "copy":
- return FilesCopy(reqJson, loginId)
case "paste":
- return FilesPaste(reqJson, loginId)
- case "request":
- return FileRequest(reqJson, loginId)
+ return filesPaste_tx(ctx, tx, reqJson, loginId)
}
case "login":
switch action {
case "getNames":
- return LoginGetNames(reqJson)
+ return LoginGetNames_tx(ctx, tx, reqJson)
case "delTokenFixed":
- return LoginDelTokenFixed(reqJson, loginId)
+ return LoginDelTokenFixed_tx(ctx, tx, reqJson, loginId)
case "getTokensFixed":
- return LoginGetTokensFixed(loginId)
+ return LoginGetTokensFixed_tx(ctx, tx, loginId)
case "setTokenFixed":
- return LoginSetTokenFixed_tx(tx, reqJson, loginId)
+ return LoginSetTokenFixed_tx(ctx, tx, reqJson, loginId)
+ }
+ case "loginClientEvent":
+ switch action {
+ case "del":
+ return loginClientEventDel_tx(ctx, tx, reqJson, loginId)
+ case "get":
+ return loginClientEventGet_tx(ctx, tx, loginId)
+ case "set":
+ return loginClientEventSet_tx(ctx, tx, reqJson, loginId)
+ }
+ case "loginFavorites":
+ switch action {
+ case "add":
+ if isNoAuth {
+ return nil, errors.New(handler.ErrUnauthorized)
+ }
+ return LoginAddFavorites_tx(ctx, tx, reqJson, loginId)
+ case "get":
+ return LoginGetFavorites_tx(ctx, tx, reqJson, loginId, isNoAuth)
+ case "set":
+ if isNoAuth {
+ return nil, errors.New(handler.ErrUnauthorized)
+ }
+ return LoginSetFavorites_tx(ctx, tx, reqJson, loginId)
}
case "loginKeys":
switch action {
case "getPublic":
- return LoginKeysGetPublic(ctx, reqJson)
+ return LoginKeysGetPublic_tx(ctx, tx, reqJson)
case "reset":
- return LoginKeysReset_tx(tx, loginId)
+ return LoginKeysReset_tx(ctx, tx, loginId)
case "store":
- return LoginKeysStore_tx(tx, reqJson, loginId)
+ return LoginKeysStore_tx(ctx, tx, reqJson, loginId)
case "storePrivate":
- return LoginKeysStorePrivate_tx(tx, reqJson, loginId)
+ return LoginKeysStorePrivate_tx(ctx, tx, reqJson, loginId)
}
- case "lookup":
+ case "loginOptions":
switch action {
case "get":
- return LookupGet(reqJson, loginId)
- }
- case "password":
- switch action {
+ return LoginOptionsGet_tx(ctx, tx, reqJson, loginId, isNoAuth)
case "set":
- return PasswortSet_tx(tx, reqJson, loginId)
+ if isNoAuth {
+ return nil, errors.New(handler.ErrUnauthorized)
+ }
+ return LoginOptionsSet_tx(ctx, tx, reqJson, loginId)
}
- case "pgFunction":
+ case "loginPassword":
switch action {
- case "exec": // user may exec non-trigger backend function, available to frontend
- return PgFunctionExec_tx(tx, reqJson, true)
+ case "set":
+ if isNoAuth {
+ return nil, errors.New(handler.ErrUnauthorized)
+ }
+ return loginPasswortSet_tx(ctx, tx, reqJson, loginId)
}
- case "setting":
+ case "loginSetting":
switch action {
case "get":
- return SettingsGet(loginId)
+ return LoginSettingsGet_tx(ctx, tx, loginId)
case "set":
if isNoAuth {
return nil, errors.New(handler.ErrUnauthorized)
}
- return SettingsSet_tx(tx, reqJson, loginId)
+ return LoginSettingsSet_tx(ctx, tx, reqJson, loginId)
+ }
+ case "loginWidgetGroups":
+ switch action {
+ case "get":
+ return LoginWidgetGroupsGet_tx(ctx, tx, loginId)
+ case "set":
+ return LoginWidgetGroupsSet_tx(ctx, tx, reqJson, loginId)
+ }
+ case "lookup":
+ switch action {
+ case "get":
+ return lookupGet_tx(ctx, tx, reqJson, loginId)
+ }
+ case "pgFunction":
+ switch action {
+ case "exec": // user may exec non-trigger backend function, available to frontend
+ return PgFunctionExec_tx(ctx, tx, reqJson, true)
}
}
@@ -203,29 +257,29 @@ func Exec_tx(ctx context.Context, tx pgx.Tx, loginId int64, isAdmin bool, isNoAu
case "api":
switch action {
case "copy":
- return ApiCopy_tx(tx, reqJson)
+ return ApiCopy_tx(ctx, tx, reqJson)
case "del":
- return ApiDel_tx(tx, reqJson)
+ return ApiDel_tx(ctx, tx, reqJson)
case "set":
- return ApiSet_tx(tx, reqJson)
+ return ApiSet_tx(ctx, tx, reqJson)
}
case "article":
switch action {
case "assign":
- return ArticleAssign_tx(tx, reqJson)
+ return ArticleAssign_tx(ctx, tx, reqJson)
case "del":
- return ArticleDel_tx(tx, reqJson)
+ return ArticleDel_tx(ctx, tx, reqJson)
case "set":
- return ArticleSet_tx(tx, reqJson)
+ return ArticleSet_tx(ctx, tx, reqJson)
}
case "attribute":
switch action {
case "del":
- return AttributeDel_tx(tx, reqJson)
- case "get":
- return AttributeGet(reqJson)
+ return AttributeDel_tx(ctx, tx, reqJson)
+ case "delCheck":
+ return AttributeDelCheck_tx(ctx, tx, reqJson)
case "set":
- return AttributeSet_tx(tx, reqJson)
+ return AttributeSet_tx(ctx, tx, reqJson)
}
case "backup":
switch action {
@@ -237,30 +291,44 @@ func Exec_tx(ctx context.Context, tx pgx.Tx, loginId int64, isAdmin bool, isNoAu
case "get":
return BruteforceGet(reqJson)
}
+ case "captionMap":
+ switch action {
+ case "get":
+ return CaptionMapGet_tx(ctx, tx, reqJson)
+ case "setOne":
+ return CaptionMapSetOne_tx(ctx, tx, reqJson)
+ }
+ case "clientEvent":
+ switch action {
+ case "del":
+ return clientEventDel_tx(ctx, tx, reqJson)
+ case "set":
+ return clientEventSet_tx(ctx, tx, reqJson)
+ }
case "collection":
switch action {
case "del":
- return CollectionDel_tx(tx, reqJson)
+ return CollectionDel_tx(ctx, tx, reqJson)
case "set":
- return CollectionSet_tx(tx, reqJson)
+ return CollectionSet_tx(ctx, tx, reqJson)
}
case "config":
switch action {
case "get":
return ConfigGet()
case "set":
- return ConfigSet_tx(tx, reqJson)
+ return ConfigSet_tx(ctx, tx, reqJson)
}
case "cluster":
switch action {
case "delNode":
- return ClusterNodeDel_tx(tx, reqJson)
+ return ClusterNodeDel_tx(ctx, tx, reqJson)
case "getNodes":
- return ClusterNodesGet()
+ return ClusterNodesGet_tx(ctx, tx)
case "setNode":
- return ClusterNodeSet_tx(tx, reqJson)
+ return ClusterNodeSet_tx(ctx, tx, reqJson)
case "shutdownNode":
- return ClusterNodeShutdown(reqJson)
+ return ClusterNodeShutdown_tx(ctx, tx, reqJson)
}
case "dataSql":
switch action {
@@ -270,41 +338,37 @@ func Exec_tx(ctx context.Context, tx pgx.Tx, loginId int64, isAdmin bool, isNoAu
case "field":
switch action {
case "del":
- return FieldDel_tx(tx, reqJson)
+ return FieldDel_tx(ctx, tx, reqJson)
}
case "file":
switch action {
case "get":
- return FileGet()
+ return FileGet_tx(ctx, tx)
case "restore":
- return FileRestore(reqJson)
+ return FileRestore_tx(ctx, tx, reqJson)
}
case "form":
switch action {
case "copy":
- return FormCopy_tx(tx, reqJson)
+ return FormCopy_tx(ctx, tx, reqJson)
case "del":
- return FormDel_tx(tx, reqJson)
- case "get":
- return FormGet(reqJson)
+ return FormDel_tx(ctx, tx, reqJson)
case "set":
- return FormSet_tx(tx, reqJson)
+ return FormSet_tx(ctx, tx, reqJson)
}
case "icon":
switch action {
case "del":
- return IconDel_tx(tx, reqJson)
+ return IconDel_tx(ctx, tx, reqJson)
case "setName":
- return IconSetName_tx(tx, reqJson)
+ return IconSetName_tx(ctx, tx, reqJson)
}
case "jsFunction":
switch action {
case "del":
- return JsFunctionDel_tx(tx, reqJson)
- case "get":
- return JsFunctionGet(reqJson)
+ return JsFunctionDel_tx(ctx, tx, reqJson)
case "set":
- return JsFunctionSet_tx(tx, reqJson)
+ return JsFunctionSet_tx(ctx, tx, reqJson)
}
case "key":
switch action {
@@ -316,219 +380,246 @@ func Exec_tx(ctx context.Context, tx pgx.Tx, loginId int64, isAdmin bool, isNoAu
case "check":
return LdapCheck(reqJson)
case "del":
- return LdapDel_tx(tx, reqJson)
+ return LdapDel_tx(ctx, tx, reqJson)
case "get":
- return LdapGet()
- case "import":
- return LdapImport(reqJson)
+ return LdapGet_tx(ctx, tx)
case "reload":
- return nil, cache.LoadLdapMap()
+ return nil, ldap.UpdateCache_tx(ctx, tx)
case "set":
- return LdapSet_tx(tx, reqJson)
+ return LdapSet_tx(ctx, tx, reqJson)
}
case "license":
switch action {
+ case "del":
+ return LicenseDel_tx(ctx, tx)
case "get":
- return config.License, nil
+ return config.GetLicense(), nil
}
case "log":
switch action {
case "get":
- return LogGet(reqJson)
+ return LogGet_tx(ctx, tx, reqJson)
}
case "login":
switch action {
case "del":
- return LoginDel_tx(tx, reqJson)
+ return LoginDel_tx(ctx, tx, reqJson)
case "get":
- return LoginGet(reqJson)
+ return LoginGet_tx(ctx, tx, reqJson)
+ case "getIsNotUnique":
+ return LoginGetIsNotUnique_tx(ctx, tx, reqJson)
case "getMembers":
- return LoginGetMembers(reqJson)
+ return LoginGetMembers_tx(ctx, tx, reqJson)
case "getRecords":
- return LoginGetRecords(reqJson)
+ return LoginGetRecords_tx(ctx, tx, reqJson)
case "kick":
- return LoginKick(reqJson)
+ return LoginKick(ctx, tx, reqJson)
case "reauth":
- return LoginReauth(reqJson)
+ return LoginReauth_tx(ctx, tx, reqJson)
case "reauthAll":
- return LoginReauthAll()
+ return LoginReauthAll_tx(ctx, tx)
case "resetTotp":
- return LoginResetTotp_tx(tx, reqJson)
+ return LoginResetTotp_tx(ctx, tx, reqJson)
case "set":
- return LoginSet_tx(tx, reqJson)
+ return LoginSet_tx(ctx, tx, reqJson)
case "setMembers":
- return LoginSetMembers_tx(tx, reqJson)
+ return LoginSetMembers_tx(ctx, tx, reqJson)
}
case "loginForm":
switch action {
case "del":
- return LoginFormDel_tx(tx, reqJson)
- case "get":
- return LoginFormGet(reqJson)
+ return LoginFormDel_tx(ctx, tx, reqJson)
case "set":
- return LoginFormSet_tx(tx, reqJson)
+ return LoginFormSet_tx(ctx, tx, reqJson)
}
- case "loginTemplate":
+ case "loginSession":
switch action {
- case "del":
- return LoginTemplateDel_tx(tx, reqJson)
case "get":
- return LoginTemplateGet(reqJson)
- case "set":
- return LoginTemplateSet_tx(tx, reqJson)
+ return LoginSessionsGet_tx(ctx, tx, reqJson)
+ case "getConcurrent":
+ return LoginSessionConcurrentGet_tx(ctx, tx)
}
- case "mail":
+ case "loginTemplate":
switch action {
case "del":
- return MailDel_tx(tx, reqJson)
+ return LoginTemplateDel_tx(ctx, tx, reqJson)
case "get":
- return MailGet(reqJson)
+ return LoginTemplateGet_tx(ctx, tx, reqJson)
+ case "set":
+ return LoginTemplateSet_tx(ctx, tx, reqJson)
}
case "mailAccount":
switch action {
case "del":
- return MailAccountDel_tx(tx, reqJson)
+ return MailAccountDel_tx(ctx, tx, reqJson)
case "get":
return MailAccountGet()
case "reload":
- return MailAccountReload()
+ return nil, cache.LoadMailAccountMap_tx(ctx, tx)
case "set":
- return MailAccountSet_tx(tx, reqJson)
+ return MailAccountSet_tx(ctx, tx, reqJson)
case "test":
- return MailAccountTest_tx(tx, reqJson)
+ return MailAccountTest_tx(ctx, tx, reqJson)
}
- case "menu":
+ case "mailSpooler":
switch action {
- case "copy":
- return MenuCopy_tx(tx, reqJson)
case "del":
- return MenuDel_tx(tx, reqJson)
+ return MailSpoolerDel_tx(ctx, tx, reqJson)
+ case "get":
+ return MailSpoolerGet_tx(ctx, tx, reqJson)
+ case "reset":
+ return MailSpoolerReset_tx(ctx, tx, reqJson)
+ }
+ case "mailTraffic":
+ switch action {
case "get":
- return MenuGet(reqJson)
+ return MailTrafficGet_tx(ctx, tx, reqJson)
+ }
+ case "menuTab":
+ switch action {
+ case "del":
+ return MenuTabDel_tx(ctx, tx, reqJson)
case "set":
- return MenuSet_tx(tx, reqJson)
+ return MenuTabSet_tx(ctx, tx, reqJson)
}
case "module":
switch action {
case "checkChange":
- return ModuleCheckChange_tx(tx, reqJson)
+ return ModuleCheckChange_tx(ctx, tx, reqJson)
case "del":
- return ModuleDel_tx(tx, reqJson)
- case "get":
- return ModuleGet()
+ return ModuleDel_tx(ctx, tx, reqJson)
case "set":
- return ModuleSet_tx(tx, reqJson)
+ return ModuleSet_tx(ctx, tx, reqJson)
}
- case "moduleOption":
+ case "moduleMeta":
switch action {
+ case "setLanguagesCustom":
+ return ModuleMetaSetLanguagesCustom_tx(ctx, tx, reqJson)
+ case "setOptions":
+ return ModuleMetaSetOptions_tx(ctx, tx, reqJson)
+ }
+ case "oauthClient":
+ switch action {
+ case "del":
+ return OauthClientDel_tx(ctx, tx, reqJson)
case "get":
- return ModuleOptionGet()
+ return OauthClientGet()
+ case "reload":
+ return OauthClientReload_tx(ctx, tx)
case "set":
- return ModuleOptionSet_tx(tx, reqJson)
+ return OauthClientSet_tx(ctx, tx, reqJson)
}
case "package":
switch action {
case "install":
- return PackageInstall()
+ return PackageInstall_tx(ctx, tx)
}
case "pgFunction":
switch action {
case "del":
- return PgFunctionDel_tx(tx, reqJson)
+ return PgFunctionDel_tx(ctx, tx, reqJson)
case "execAny": // admin may exec any non-trigger backend function
- return PgFunctionExec_tx(tx, reqJson, false)
- case "get":
- return PgFunctionGet(reqJson)
+ return PgFunctionExec_tx(ctx, tx, reqJson, false)
case "set":
- return PgFunctionSet_tx(tx, reqJson)
+ return PgFunctionSet_tx(ctx, tx, reqJson)
}
case "pgIndex":
switch action {
case "del":
- return PgIndexDel_tx(tx, reqJson)
- case "get":
- return PgIndexGet(reqJson)
+ return PgIndexDel_tx(ctx, tx, reqJson)
case "set":
- return PgIndexSet_tx(tx, reqJson)
+ return PgIndexSet_tx(ctx, tx, reqJson)
}
case "pgTrigger":
switch action {
case "del":
- return PgTriggerDel_tx(tx, reqJson)
+ return PgTriggerDel_tx(ctx, tx, reqJson)
case "set":
- return PgTriggerSet_tx(tx, reqJson)
+ return PgTriggerSet_tx(ctx, tx, reqJson)
}
case "preset":
switch action {
case "del":
- return PresetDel_tx(tx, reqJson)
+ return PresetDel_tx(ctx, tx, reqJson)
+ case "set":
+ return PresetSet_tx(ctx, tx, reqJson)
+ }
+ case "pwaDomain":
+ switch action {
+ case "reset":
+ return nil, cache.LoadPwaDomainMap_tx(ctx, tx)
case "set":
- return PresetSet_tx(tx, reqJson)
+ return PwaDomainSet_tx(ctx, tx, reqJson)
}
case "relation":
switch action {
case "del":
- return RelationDel_tx(tx, reqJson)
- case "get":
- return RelationGet(reqJson)
+ return RelationDel_tx(ctx, tx, reqJson)
case "preview":
- return RelationPreview(reqJson)
+ return RelationPreview_tx(ctx, tx, reqJson)
case "set":
- return RelationSet_tx(tx, reqJson)
+ return RelationSet_tx(ctx, tx, reqJson)
}
case "repoModule":
switch action {
case "get":
- return RepoModuleGet(reqJson)
+ return RepoModuleGet_tx(ctx, tx, reqJson)
case "install":
- return RepoModuleInstall(reqJson)
+ return RepoModuleInstall_tx(ctx, tx, reqJson)
case "installAll":
- return RepoModuleInstallAll()
+ return RepoModuleInstallAll_tx(ctx, tx)
case "update":
- return RepoModuleUpdate()
+ return RepoModuleUpdate_tx(ctx, tx)
}
case "role":
switch action {
case "del":
- return RoleDel_tx(tx, reqJson)
- case "get":
- return RoleGet(reqJson)
+ return RoleDel_tx(ctx, tx, reqJson)
case "set":
- return RoleSet_tx(tx, reqJson)
+ return RoleSet_tx(ctx, tx, reqJson)
}
case "scheduler":
switch action {
case "get":
- return Get()
+ return schedulersGet_tx(ctx, tx)
}
case "schema":
switch action {
case "check":
- return SchemaCheck_tx(tx, reqJson)
+ return SchemaCheck_tx(ctx, tx, reqJson)
case "reload":
- return SchemaReload(reqJson)
- }
- case "system":
- switch action {
- case "get":
- return SystemGet()
+ return SchemaReload_tx(ctx, tx, reqJson)
}
case "task":
switch action {
case "informChanged":
- return nil, cluster.TasksChanged(true)
+ return nil, cluster.TasksChanged_tx(ctx, tx, true)
case "run":
- return TaskRun(reqJson)
+ return TaskRun_tx(ctx, tx, reqJson)
case "set":
- return TaskSet_tx(tx, reqJson)
+ return TaskSet_tx(ctx, tx, reqJson)
}
case "transfer":
switch action {
case "addVersion":
- return TransferAddVersion_tx(tx, reqJson)
+ return TransferAddVersion_tx(ctx, tx, reqJson)
case "storeExportKey":
return TransferStoreExportKey(reqJson)
}
+ case "variable":
+ switch action {
+ case "del":
+ return VariableDel_tx(ctx, tx, reqJson)
+ case "set":
+ return VariableSet_tx(ctx, tx, reqJson)
+ }
+ case "widget":
+ switch action {
+ case "del":
+ return WidgetDel_tx(ctx, tx, reqJson)
+ case "set":
+ return WidgetSet_tx(ctx, tx, reqJson)
+ }
}
return nil, fmt.Errorf("unknown ressource or action")
}
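
Every handler reached from Exec_tx now receives the request context and the open transaction, so all requests of a websocket transaction commit or roll back together; the files below apply that signature change mechanically. A minimal sketch of what one such handler and its dispatch entry look like; the "example" ressource and relation are illustrative, not part of the codebase:

```go
package request

import (
	"context"
	"encoding/json"

	"github.com/gofrs/uuid"
	"github.com/jackc/pgx/v5"
)

// exampleDel_tx shows the handler shape used throughout this package:
// decode the raw JSON payload, then act on the caller's open transaction.
func exampleDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
	var req struct {
		Id uuid.UUID `json:"id"`
	}
	if err := json.Unmarshal(reqJson, &req); err != nil {
		return nil, err
	}
	_, err := tx.Exec(ctx, "DELETE FROM instance.example WHERE id = $1", req.Id) // placeholder relation
	return nil, err
}

// the matching dispatch entry inside Exec_tx would look like:
//
//	case "example":
//		switch action {
//		case "del":
//			return exampleDel_tx(ctx, tx, reqJson)
//		}
```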
diff --git a/request/request_api.go b/request/request_api.go
index 209a937b..d43ce989 100644
--- a/request/request_api.go
+++ b/request/request_api.go
@@ -1,6 +1,7 @@
package request
import (
+ "context"
"encoding/json"
"r3/schema/api"
"r3/types"
@@ -9,7 +10,7 @@ import (
"github.com/jackc/pgx/v5"
)
-func ApiCopy_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func ApiCopy_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id uuid.UUID `json:"id"`
@@ -17,10 +18,10 @@ func ApiCopy_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, api.Copy_tx(tx, req.Id)
+ return nil, api.Copy_tx(ctx, tx, req.Id)
}
-func ApiDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func ApiDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id uuid.UUID `json:"id"`
@@ -28,14 +29,14 @@ func ApiDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, api.Del_tx(tx, req.Id)
+ return nil, api.Del_tx(ctx, tx, req.Id)
}
-func ApiSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func ApiSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req types.Api
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, api.Set_tx(tx, req)
+ return nil, api.Set_tx(ctx, tx, req)
}
diff --git a/request/request_article.go b/request/request_article.go
index 27f9d2e1..8d2410d6 100644
--- a/request/request_article.go
+++ b/request/request_article.go
@@ -1,6 +1,7 @@
package request
import (
+ "context"
"encoding/json"
"r3/schema/article"
"r3/types"
@@ -9,7 +10,7 @@ import (
"github.com/jackc/pgx/v5"
)
-func ArticleAssign_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func ArticleAssign_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Target string `json:"target"`
TargetId uuid.UUID `json:"targetId"`
@@ -18,10 +19,10 @@ func ArticleAssign_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, article.Assign_tx(tx, req.Target, req.TargetId, req.ArticleIds)
+ return nil, article.Assign_tx(ctx, tx, req.Target, req.TargetId, req.ArticleIds)
}
-func ArticleDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func ArticleDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id uuid.UUID `json:"id"`
}
@@ -29,13 +30,13 @@ func ArticleDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, article.Del_tx(tx, req.Id)
+ return nil, article.Del_tx(ctx, tx, req.Id)
}
-func ArticleSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func ArticleSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req types.Article
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, article.Set_tx(tx, req.ModuleId, req.Id, req.Name, req.Captions)
+ return nil, article.Set_tx(ctx, tx, req.ModuleId, req.Id, req.Name, req.Captions)
}
diff --git a/request/request_attribute.go b/request/request_attribute.go
index ed1dc13e..09af36bd 100644
--- a/request/request_attribute.go
+++ b/request/request_attribute.go
@@ -1,6 +1,7 @@
package request
import (
+ "context"
"encoding/json"
"r3/schema/attribute"
"r3/types"
@@ -9,45 +10,30 @@ import (
"github.com/jackc/pgx/v5"
)
-func AttributeDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func AttributeDelCheck_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id uuid.UUID `json:"id"`
}
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, attribute.Del_tx(tx, req.Id)
+ return attribute.DelCheck_tx(ctx, tx, req.Id)
}
-func AttributeGet(reqJson json.RawMessage) (interface{}, error) {
- var (
- err error
- req struct {
- RelationId uuid.UUID `json:"relationId"`
- }
- res struct {
- Attributes []types.Attribute `json:"attributes"`
- }
- )
-
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
+func AttributeDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+ var req struct {
+ Id uuid.UUID `json:"id"`
}
-
- res.Attributes, err = attribute.Get(req.RelationId)
- if err != nil {
+ if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return res, nil
+ return nil, attribute.Del_tx(ctx, tx, req.Id)
}
-func AttributeSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func AttributeSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req types.Attribute
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, attribute.Set_tx(tx, req.RelationId, req.Id, req.RelationshipId,
- req.IconId, req.Name, req.Content, req.ContentUse, req.Length,
- req.Nullable, req.Encrypted, req.Def, req.OnUpdate, req.OnDelete,
- req.Captions)
+ return nil, attribute.Set_tx(ctx, tx, req)
}
diff --git a/request/request_captionMap.go b/request/request_captionMap.go
new file mode 100644
index 00000000..62aa5db8
--- /dev/null
+++ b/request/request_captionMap.go
@@ -0,0 +1,41 @@
+package request
+
+import (
+ "context"
+ "encoding/json"
+ "r3/config/captionMap"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+ "github.com/jackc/pgx/v5/pgtype"
+)
+
+func CaptionMapGet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+
+ var req struct {
+ ModuleId pgtype.UUID `json:"moduleId"`
+ Target string `json:"target"`
+ }
+
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+ return captionMap.Get_tx(ctx, tx, req.ModuleId, req.Target)
+}
+
+func CaptionMapSetOne_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+
+ var req struct {
+ Content string `json:"content"`
+ EntityId uuid.UUID `json:"entityId"`
+ LanguageCode string `json:"languageCode"`
+ Target string `json:"target"`
+ Value string `json:"value"`
+ }
+
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+ return nil, captionMap.SetOne_tx(ctx, tx, req.Target,
+ req.EntityId, req.Content, req.LanguageCode, req.Value)
+}
diff --git a/request/request_clientEvent.go b/request/request_clientEvent.go
new file mode 100644
index 00000000..8f4ef39d
--- /dev/null
+++ b/request/request_clientEvent.go
@@ -0,0 +1,129 @@
+package request
+
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+ "r3/cache"
+ "r3/cluster"
+ "r3/handler"
+ "r3/login/login_clientEvent"
+ "r3/schema/clientEvent"
+ "r3/types"
+ "strings"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+)
+
+func clientEventDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+
+ var req struct {
+ Id uuid.UUID `json:"id"`
+ }
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+ return nil, clientEvent.Del_tx(ctx, tx, req.Id)
+}
+
+func clientEventSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+
+ var req types.ClientEvent
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+ return nil, clientEvent.Set_tx(ctx, tx, req)
+}
+
+// fat client requests
+func clientEventGetFatClient_tx(ctx context.Context, tx pgx.Tx, loginId int64) (interface{}, error) {
+
+ var err error
+ var res struct {
+ ClientEvents []types.ClientEvent `json:"clientEvents"`
+ ClientEventIdMapLogin map[uuid.UUID]types.LoginClientEvent `json:"clientEventIdMapLogin"`
+ }
+ res.ClientEvents = make([]types.ClientEvent, 0)
+ res.ClientEventIdMapLogin = make(map[uuid.UUID]types.LoginClientEvent)
+
+ // collect login client events for login (currently only used to enable and overwrite hotkeys)
+ res.ClientEventIdMapLogin, err = login_clientEvent.Get_tx(ctx, tx, loginId)
+ if err != nil {
+ return nil, err
+ }
+
+ // collect client events the login has access to
+ access, err := cache.GetAccessById(loginId)
+ if err != nil {
+ return nil, err
+ }
+
+ cache.Schema_mx.RLock()
+ for id, ce := range cache.ClientEventIdMap {
+ if _, exists := access.ClientEvent[id]; !exists {
+ continue // login has no access, ignore
+ }
+ if ce.Event == "onHotkey" {
+ if _, exists := res.ClientEventIdMapLogin[id]; !exists {
+ continue // login has not enabled hotkey, ignore
+ }
+ }
+ res.ClientEvents = append(res.ClientEvents, ce)
+ }
+ cache.Schema_mx.RUnlock()
+ return res, nil
+}
+func clientEventExecFatClient_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage, loginId int64, address string) (interface{}, error) {
+
+ var req struct {
+ Id uuid.UUID `json:"id"`
+ Arguments []interface{} `json:"arguments"`
+ }
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+
+ cache.Schema_mx.RLock()
+ ce, exists := cache.ClientEventIdMap[req.Id]
+ cache.Schema_mx.RUnlock()
+
+ if !exists {
+ return nil, handler.ErrSchemaUnknownClientEvent(req.Id)
+ }
+
+ // execute valid actions
+ if ce.Action == "callJsFunction" && ce.JsFunctionId.Valid {
+ return nil, cluster.JsFunctionCalled_tx(ctx, tx, true, address, loginId, ce.ModuleId, ce.JsFunctionId.Bytes, req.Arguments)
+ }
+ if ce.Action == "callPgFunction" && ce.PgFunctionId.Valid {
+
+ cache.Schema_mx.RLock()
+ fnc, exists := cache.PgFunctionIdMap[ce.PgFunctionId.Bytes]
+ cache.Schema_mx.RUnlock()
+
+ if !exists {
+ return nil, handler.ErrSchemaUnknownPgFunction(ce.PgFunctionId.Bytes)
+ }
+ if fnc.IsTrigger {
+ return nil, handler.ErrSchemaTriggerPgFunctionCall(ce.PgFunctionId.Bytes)
+ }
+
+ cache.Schema_mx.RLock()
+ mod := cache.ModuleIdMap[fnc.ModuleId]
+ cache.Schema_mx.RUnlock()
+
+ placeholders := make([]string, 0)
+ for i := range req.Arguments {
+ placeholders = append(placeholders, fmt.Sprintf("$%d", i+1))
+ }
+
+ var returnIf interface{}
+ err := tx.QueryRow(ctx, fmt.Sprintf(`SELECT "%s"."%s"(%s)`, mod.Name, fnc.Name, strings.Join(placeholders, ",")),
+ req.Arguments...).Scan(&returnIf)
+
+ return nil, err
+ }
+
+ return nil, fmt.Errorf("invalid client event action")
+}
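clientEventExecFatClient_tx above invokes the target PostgreSQL function by splicing the schema-qualified name (taken from the trusted schema cache, not from the request) into the statement and binding every argument through a positional placeholder. A standalone sketch of that placeholder construction, with made-up module and function names:

```go
package main

import (
	"fmt"
	"strings"
)

// buildCall assembles a statement such as SELECT "my_module"."my_function"($1,$2,$3):
// identifiers are quoted into the SQL text, argument values stay behind placeholders.
func buildCall(modName, fncName string, argCount int) string {
	placeholders := make([]string, 0, argCount)
	for i := 0; i < argCount; i++ {
		placeholders = append(placeholders, fmt.Sprintf("$%d", i+1))
	}
	return fmt.Sprintf(`SELECT "%s"."%s"(%s)`, modName, fncName, strings.Join(placeholders, ","))
}

func main() {
	fmt.Println(buildCall("my_module", "my_function", 3))
	// SELECT "my_module"."my_function"($1,$2,$3)
}
```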
diff --git a/request/request_cluster.go b/request/request_cluster.go
index bb1828be..bcbea7d2 100644
--- a/request/request_cluster.go
+++ b/request/request_cluster.go
@@ -1,14 +1,16 @@
package request
import (
+ "context"
"encoding/json"
"r3/cluster"
+ "r3/types"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
)
-func ClusterNodeDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func ClusterNodeDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id uuid.UUID `json:"id"`
@@ -16,14 +18,14 @@ func ClusterNodeDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error)
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, cluster.DelNode_tx(tx, req.Id)
+ return nil, cluster.DelNode_tx(ctx, tx, req.Id)
}
-func ClusterNodesGet() (interface{}, error) {
- return cluster.GetNodes()
+func ClusterNodesGet_tx(ctx context.Context, tx pgx.Tx) (interface{}, error) {
+ return cluster.GetNodes_tx(ctx, tx)
}
-func ClusterNodeSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func ClusterNodeSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id uuid.UUID `json:"id"`
@@ -32,10 +34,10 @@ func ClusterNodeSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error)
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, cluster.SetNode_tx(tx, req.Id, req.Name)
+ return nil, cluster.SetNode_tx(ctx, tx, req.Id, req.Name)
}
-func ClusterNodeShutdown(reqJson json.RawMessage) (interface{}, error) {
+func ClusterNodeShutdown_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id uuid.UUID `json:"id"`
@@ -43,5 +45,6 @@ func ClusterNodeShutdown(reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, cluster.CreateEventForNode(req.Id, "shutdownTriggered", "{}")
+ return nil, cluster.CreateEventForNodes_tx(ctx, tx, []uuid.UUID{req.Id},
+ "shutdownTriggered", "{}", types.ClusterEventTarget{})
}
diff --git a/request/request_collection.go b/request/request_collection.go
index 61d14d96..99c2590e 100644
--- a/request/request_collection.go
+++ b/request/request_collection.go
@@ -1,6 +1,7 @@
package request
import (
+ "context"
"encoding/json"
"r3/schema/collection"
"r3/types"
@@ -9,7 +10,7 @@ import (
"github.com/jackc/pgx/v5"
)
-func CollectionDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func CollectionDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id uuid.UUID `json:"id"`
@@ -18,16 +19,16 @@ func CollectionDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, collection.Del_tx(tx, req.Id)
+ return nil, collection.Del_tx(ctx, tx, req.Id)
}
-func CollectionSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func CollectionSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req types.Collection
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, collection.Set_tx(tx, req.ModuleId, req.Id,
+ return nil, collection.Set_tx(ctx, tx, req.ModuleId, req.Id,
req.IconId, req.Name, req.Columns, req.Query, req.InHeader)
}
diff --git a/request/request_config.go b/request/request_config.go
index 1c86cce7..8d7b1b40 100644
--- a/request/request_config.go
+++ b/request/request_config.go
@@ -1,11 +1,12 @@
package request
import (
+ "context"
"encoding/json"
"fmt"
"r3/cluster"
"r3/config"
- "r3/tools"
+ "slices"
"strconv"
"github.com/jackc/pgx/v5"
@@ -20,7 +21,7 @@ func ConfigGet() (interface{}, error) {
for _, name := range config.NamesString {
- if tools.StringInSlice(name, ignore) {
+ if slices.Contains(ignore, name) {
continue
}
res[name] = config.GetString(name)
@@ -28,15 +29,27 @@ func ConfigGet() (interface{}, error) {
for _, name := range config.NamesUint64 {
- if tools.StringInSlice(name, ignore) {
+ if slices.Contains(ignore, name) {
continue
}
res[name] = fmt.Sprintf("%d", config.GetUint64(name))
}
+
+ for _, name := range config.NamesUint64Slice {
+
+ if slices.Contains(ignore, name) {
+ continue
+ }
+ json, err := json.Marshal(config.GetUint64Slice(name))
+ if err != nil {
+ return nil, err
+ }
+ res[name] = string(json)
+ }
return res, nil
}
-func ConfigSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func ConfigSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req map[string]string
if err := json.Unmarshal(reqJson, &req); err != nil {
@@ -44,31 +57,42 @@ func ConfigSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
}
// check for config changes that have specific consequences
- switchToMaintenance := false
+ productionModeChange := false
if value, exists := req["productionMode"]; exists &&
value != strconv.FormatInt(int64(config.GetUint64("productionMode")), 10) {
- switchToMaintenance = true
+ productionModeChange = true
}
// update config values in DB and local config store
for name, value := range req {
- if tools.StringInSlice(name, config.NamesString) {
- if err := config.SetString_tx(tx, name, value); err != nil {
+ if slices.Contains(config.NamesString, name) {
+ if err := config.SetString_tx(ctx, tx, name, value); err != nil {
return nil, err
}
- } else if tools.StringInSlice(name, config.NamesUint64) {
+ } else if slices.Contains(config.NamesUint64, name) {
val, err := strconv.ParseUint(value, 10, 64)
if err != nil {
return nil, err
}
- if err := config.SetUint64_tx(tx, name, val); err != nil {
+ if err := config.SetUint64_tx(ctx, tx, name, val); err != nil {
+ return nil, err
+ }
+
+ } else if slices.Contains(config.NamesUint64Slice, name) {
+
+ var val []uint64
+ if err := json.Unmarshal([]byte(value), &val); err != nil {
+ return nil, err
+ }
+
+ if err := config.SetUint64Slice_tx(ctx, tx, name, val); err != nil {
return nil, err
}
}
}
- return nil, cluster.ConfigChanged(true, false, switchToMaintenance)
+ return nil, cluster.ConfigChanged_tx(ctx, tx, true, false, productionModeChange)
}
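The new NamesUint64Slice entries reuse the existing map[string]string transport: the slice is JSON-encoded into the string value when read and decoded from it again when set. A small round-trip sketch of that encoding (the config name here is made up):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// reading: a []uint64 config value is serialized into the string map sent to the client
	val := []uint64{10, 20, 30}
	b, err := json.Marshal(val)
	if err != nil {
		panic(err)
	}
	res := map[string]string{"exampleUint64SliceOption": string(b)} // hypothetical config name

	// writing: the handler parses the string back into []uint64 before storing it
	var parsed []uint64
	if err := json.Unmarshal([]byte(res["exampleUint64SliceOption"]), &parsed); err != nil {
		panic(err)
	}
	fmt.Println(parsed) // [10 20 30]
}
```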
diff --git a/request/request_event.go b/request/request_event.go
new file mode 100644
index 00000000..3f8714ff
--- /dev/null
+++ b/request/request_event.go
@@ -0,0 +1,88 @@
+package request
+
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+ "r3/cluster"
+ "r3/schema"
+ "strings"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+ "github.com/jackc/pgx/v5/pgtype"
+)
+
+// requests for browser clients
+func eventFilesCopied_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage, loginId int64, address string) (interface{}, error) {
+ // request file(s) to be copied (synchronized across all browser clients)
+ var req struct {
+ AttributeId uuid.UUID `json:"attributeId"`
+ FileIds []uuid.UUID `json:"fileIds"`
+ RecordId int64 `json:"recordId"`
+ }
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+ return nil, cluster.FilesCopied_tx(ctx, tx, true, address, loginId, req.AttributeId, req.FileIds, req.RecordId)
+}
+
+// requests for fat clients
+func eventClientEventsChanged_tx(ctx context.Context, tx pgx.Tx, loginId int64, address string) (interface{}, error) {
+ return nil, cluster.ClientEventsChanged_tx(ctx, tx, true, address, loginId)
+}
+func eventFileRequested_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage, loginId int64, address string) (interface{}, error) {
+ var req struct {
+ AttributeId uuid.UUID `json:"attributeId"`
+ FileId uuid.UUID `json:"fileId"`
+ RecordId int64 `json:"recordId"`
+ ChooseApp bool `json:"chooseApp"`
+ }
+
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+
+ // get current file name and latest hash
+ // files created before 3.1 have no hash value; their empty hash is then compared against the new file version hash
+ var hash pgtype.Text
+ var name string
+ if err := tx.QueryRow(ctx, fmt.Sprintf(`
+ SELECT v.hash, r.name
+ FROM instance.file_version AS v
+ JOIN instance_file."%s" AS r
+ ON r.file_id = v.file_id
+ AND r.record_id = $1
+ WHERE v.file_id = $2
+ ORDER BY v.version DESC
+ LIMIT 1
+ `, schema.GetFilesTableName(req.AttributeId)),
+ req.RecordId, req.FileId).Scan(&hash, &name); err != nil {
+ return nil, err
+ }
+
+ // compatibility fix
+ // we currently allow many special characters in file names; some are invalid in general (? & @), others are valid but must be escaped in URLs (like #)
+ // file names are not escaped by the r3 client in the download URL, which would cause the download to fail
+ name = strings.NewReplacer(
+ "#", "",
+ "=", "",
+ "@", "",
+ "?", "",
+ ":", "",
+ ";", "",
+ "/", "",
+ "\\", "",
+ "&", "").Replace(name)
+
+ return nil, cluster.FileRequested_tx(ctx, tx, true, address, loginId,
+ req.AttributeId, req.FileId, hash.String, name, req.ChooseApp)
+}
+func eventKeystrokesRequested_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage, loginId int64, address string) (interface{}, error) {
+ var keystrokes string
+
+ if err := json.Unmarshal(reqJson, &keystrokes); err != nil {
+ return nil, err
+ }
+ return nil, cluster.KeystrokesRequested_tx(ctx, tx, true, address, loginId, keystrokes)
+}
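The replacer in eventFileRequested_tx strips exactly those characters that would break the unescaped download URL built by the fat client. A quick usage sketch with a made-up file name:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// same character set as the compatibility fix above
	sanitize := strings.NewReplacer(
		"#", "", "=", "", "@", "", "?", "",
		":", "", ";", "", "/", "", "\\", "", "&", "")

	fmt.Println(sanitize.Replace("report #3: draft?.pdf")) // report 3 draft.pdf
}
```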
diff --git a/request/request_feedback.go b/request/request_feedback.go
index d1649b43..f6cf0402 100644
--- a/request/request_feedback.go
+++ b/request/request_feedback.go
@@ -5,10 +5,9 @@ import (
"r3/repo"
"github.com/jackc/pgx/v5/pgtype"
- "github.com/jackc/pgx/v5"
)
-func FeedbackSend_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func FeedbackSend(reqJson json.RawMessage) (interface{}, error) {
var req struct {
Code int `json:"code"`
diff --git a/request/request_field.go b/request/request_field.go
index 225cc1ba..45971333 100644
--- a/request/request_field.go
+++ b/request/request_field.go
@@ -1,6 +1,7 @@
package request
import (
+ "context"
"encoding/json"
"r3/schema/field"
@@ -8,7 +9,7 @@ import (
"github.com/jackc/pgx/v5"
)
-func FieldDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func FieldDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id uuid.UUID `json:"id"`
@@ -17,8 +18,5 @@ func FieldDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- if err := field.Del_tx(tx, req.Id); err != nil {
- return nil, err
- }
- return nil, nil
+ return nil, field.Del_tx(ctx, tx, req.Id)
}
diff --git a/request/request_file.go b/request/request_file.go
index e883916e..a96412b9 100644
--- a/request/request_file.go
+++ b/request/request_file.go
@@ -1,33 +1,16 @@
package request
import (
+ "context"
"encoding/json"
- "fmt"
- "r3/cluster"
"r3/data"
- "r3/db"
- "r3/schema"
"github.com/gofrs/uuid"
- "github.com/jackc/pgx/v5/pgtype"
+ "github.com/jackc/pgx/v5"
)
-// request file(s) to be copied (synchronized across all clients for login)
-func FilesCopy(reqJson json.RawMessage, loginId int64) (interface{}, error) {
- var req struct {
- AttributeId uuid.UUID `json:"attributeId"`
- FileIds []uuid.UUID `json:"fileIds"`
- RecordId int64 `json:"recordId"`
- }
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
- return nil, cluster.FilesCopied(true, loginId,
- req.AttributeId, req.FileIds, req.RecordId)
-}
-
// request file(s) to be pasted
-func FilesPaste(reqJson json.RawMessage, loginId int64) (interface{}, error) {
+func filesPaste_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage, loginId int64) (interface{}, error) {
var req struct {
SrcAttributeId uuid.UUID `json:"srcAttributeId"`
SrcFileIds []uuid.UUID `json:"srcFileIds"`
@@ -37,41 +20,5 @@ func FilesPaste(reqJson json.RawMessage, loginId int64) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return data.CopyFiles(loginId, req.SrcAttributeId,
- req.SrcFileIds, req.SrcRecordId, req.DstAttributeId)
-}
-
-// request file to be opened by fat client
-func FileRequest(reqJson json.RawMessage, loginId int64) (interface{}, error) {
- var req struct {
- AttributeId uuid.UUID `json:"attributeId"`
- FileId uuid.UUID `json:"fileId"`
- RecordId int64 `json:"recordId"`
- ChooseApp bool `json:"chooseApp"`
- }
-
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
-
- // get current file name and latest hash
- // files before 3.1 do not have a hash value, empty hash is then compared against new file version hash
- var hash pgtype.Text
- var name string
- if err := db.Pool.QueryRow(db.Ctx, fmt.Sprintf(`
- SELECT v.hash, r.name
- FROM instance.file_version AS v
- JOIN instance_file."%s" AS r
- ON r.file_id = v.file_id
- AND r.record_id = $1
- WHERE v.file_id = $2
- ORDER BY v.version DESC
- LIMIT 1
- `, schema.GetFilesTableName(req.AttributeId)),
- req.RecordId, req.FileId).Scan(&hash, &name); err != nil {
- return nil, err
- }
-
- return nil, cluster.FileRequested(true, loginId,
- req.AttributeId, req.FileId, hash.String, name, req.ChooseApp)
+ return data.CopyFiles_tx(ctx, tx, loginId, req.SrcAttributeId, req.SrcFileIds, req.SrcRecordId, req.DstAttributeId)
}
diff --git a/request/request_file_admin.go b/request/request_file_admin.go
index c16399cc..6f6bddf4 100644
--- a/request/request_file_admin.go
+++ b/request/request_file_admin.go
@@ -1,17 +1,18 @@
package request
import (
+ "context"
"encoding/json"
"fmt"
- "r3/db"
"r3/schema"
"github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
"github.com/jackc/pgx/v5/pgtype"
)
// returns deleted or unassigned files
-func FileGet() (interface{}, error) {
+func FileGet_tx(ctx context.Context, tx pgx.Tx) (interface{}, error) {
type file struct {
Id uuid.UUID `json:"id"`
Name string `json:"name"`
@@ -26,7 +27,7 @@ func FileGet() (interface{}, error) {
res.AttributeIdMapDeleted = make(map[uuid.UUID][]file)
attributeIdsFile := make([]uuid.UUID, 0)
- if err := db.Pool.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT ARRAY_AGG(id)
FROM app.attribute
WHERE content = 'files'
@@ -38,7 +39,7 @@ func FileGet() (interface{}, error) {
// if file is assigned to multiple records, return all
// files without record assignment are just deleted in cleanup, not retrieved here
for _, atrId := range attributeIdsFile {
- rows, err := db.Pool.Query(db.Ctx, fmt.Sprintf(`
+ rows, err := tx.Query(ctx, fmt.Sprintf(`
SELECT file_id, name, date_delete, record_id, (
SELECT v.size_kb
FROM instance.file_version AS v
@@ -56,6 +57,7 @@ func FileGet() (interface{}, error) {
for rows.Next() {
var f file
if err := rows.Scan(&f.Id, &f.Name, &f.Deleted, &f.RecordId, &f.Size); err != nil {
+ rows.Close()
return nil, err
}
@@ -71,7 +73,7 @@ func FileGet() (interface{}, error) {
// removed deletion state from file
// file must still be assigned to a record to be restored to its file attribute
-func FileRestore(reqJson json.RawMessage) (interface{}, error) {
+func FileRestore_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
AttributeId uuid.UUID `json:"attributeId"`
FileId uuid.UUID `json:"fileId"`
@@ -81,7 +83,7 @@ func FileRestore(reqJson json.RawMessage) (interface{}, error) {
return nil, err
}
- _, err := db.Pool.Exec(db.Ctx, fmt.Sprintf(`
+ _, err := tx.Exec(ctx, fmt.Sprintf(`
UPDATE instance_file."%s"
SET date_delete = NULL
WHERE file_id = $1
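One detail worth noting in FileGet_tx: the added rows.Close() before returning a Scan error, so a failed row scan does not leave the result set open on the transaction's connection. A minimal sketch of the same loop shape using a deferred close instead (the query is illustrative only):

```go
package request

import (
	"context"

	"github.com/jackc/pgx/v5"
)

// collectAttributeNames shows the scan-loop shape: rows are always closed, even when
// Scan fails mid-iteration, and rows.Err() is checked after the loop.
func collectAttributeNames(ctx context.Context, tx pgx.Tx) ([]string, error) {
	rows, err := tx.Query(ctx, `SELECT name FROM app.attribute`)
	if err != nil {
		return nil, err
	}
	defer rows.Close() // idempotent; the explicit Close in the diff achieves the same on the error path

	names := make([]string, 0)
	for rows.Next() {
		var n string
		if err := rows.Scan(&n); err != nil {
			return nil, err
		}
		names = append(names, n)
	}
	return names, rows.Err()
}
```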
diff --git a/request/request_form.go b/request/request_form.go
index adaabd57..adeb84ec 100644
--- a/request/request_form.go
+++ b/request/request_form.go
@@ -1,6 +1,7 @@
package request
import (
+ "context"
"encoding/json"
"r3/schema/form"
"r3/types"
@@ -9,7 +10,7 @@ import (
"github.com/jackc/pgx/v5"
)
-func FormCopy_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func FormCopy_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id uuid.UUID `json:"id"`
@@ -20,10 +21,10 @@ func FormCopy_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, form.Copy_tx(tx, req.ModuleId, req.Id, req.NewName)
+ return nil, form.Copy_tx(ctx, tx, req.ModuleId, req.Id, req.NewName)
}
-func FormDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func FormDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id uuid.UUID `json:"id"`
@@ -32,38 +33,14 @@ func FormDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, form.Del_tx(tx, req.Id)
+ return nil, form.Del_tx(ctx, tx, req.Id)
}
-func FormGet(reqJson json.RawMessage) (interface{}, error) {
-
- var (
- err error
- req struct {
- ModuleId uuid.UUID `json:"moduleId"`
- }
- res struct {
- Forms []types.Form `json:"forms"`
- }
- )
-
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
- res.Forms, err = form.Get(req.ModuleId, []uuid.UUID{})
- if err != nil {
- return nil, err
- }
- return res, nil
-}
-
-func FormSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func FormSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req types.Form
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, form.Set_tx(tx, req.ModuleId, req.Id, req.PresetIdOpen,
- req.IconId, req.Name, req.NoDataActions, req.Query, req.Fields,
- req.Functions, req.States, req.ArticleIdsHelp, req.Captions)
+ return nil, form.Set_tx(ctx, tx, req)
}
diff --git a/request/request_icon.go b/request/request_icon.go
index ba9c24f4..41d34c37 100644
--- a/request/request_icon.go
+++ b/request/request_icon.go
@@ -1,6 +1,7 @@
package request
import (
+ "context"
"encoding/json"
"r3/schema/icon"
@@ -8,17 +9,17 @@ import (
"github.com/jackc/pgx/v5"
)
-func IconDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func IconDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id uuid.UUID `json:"id"`
}
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, icon.Del_tx(tx, req.Id)
+ return nil, icon.Del_tx(ctx, tx, req.Id)
}
-func IconSetName_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func IconSetName_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id uuid.UUID `json:"id"`
ModuleId uuid.UUID `json:"moduleId"`
@@ -27,5 +28,5 @@ func IconSetName_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, icon.SetName_tx(tx, req.ModuleId, req.Id, req.Name)
+ return nil, icon.SetName_tx(ctx, tx, req.ModuleId, req.Id, req.Name)
}
diff --git a/request/request_jsFunction.go b/request/request_jsFunction.go
index 50e70e9a..9b4e02bc 100644
--- a/request/request_jsFunction.go
+++ b/request/request_jsFunction.go
@@ -1,6 +1,7 @@
package request
import (
+ "context"
"encoding/json"
"r3/schema/jsFunction"
"r3/types"
@@ -9,7 +10,7 @@ import (
"github.com/jackc/pgx/v5"
)
-func JsFunctionDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func JsFunctionDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id uuid.UUID `json:"id"`
@@ -18,27 +19,14 @@ func JsFunctionDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, jsFunction.Del_tx(tx, req.Id)
+ return nil, jsFunction.Del_tx(ctx, tx, req.Id)
}
-func JsFunctionGet(reqJson json.RawMessage) (interface{}, error) {
-
- var req struct {
- ModuleId uuid.UUID `json:"moduleId"`
- }
-
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
- return jsFunction.Get(req.ModuleId)
-}
-
-func JsFunctionSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func JsFunctionSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req types.JsFunction
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, jsFunction.Set_tx(tx, req.ModuleId, req.Id, req.FormId,
- req.Name, req.CodeArgs, req.CodeFunction, req.CodeReturns, req.Captions)
+ return nil, jsFunction.Set_tx(ctx, tx, req)
}
diff --git a/request/request_ldap.go b/request/request_ldap.go
index 5c23f2a5..2dce4bc6 100644
--- a/request/request_ldap.go
+++ b/request/request_ldap.go
@@ -1,16 +1,16 @@
package request
import (
+ "context"
"encoding/json"
"r3/ldap"
"r3/ldap/ldap_check"
- "r3/ldap/ldap_import"
"r3/types"
"github.com/jackc/pgx/v5"
)
-func LdapDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func LdapDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id int32 `json:"id"`
}
@@ -18,32 +18,20 @@ func LdapDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, ldap.Del_tx(tx, req.Id)
+ return nil, ldap.Del_tx(ctx, tx, req.Id)
}
-func LdapGet() (interface{}, error) {
- return ldap.Get()
+func LdapGet_tx(ctx context.Context, tx pgx.Tx) (interface{}, error) {
+ return ldap.Get_tx(ctx, tx)
}
-func LdapSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func LdapSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req types.Ldap
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, ldap.Set_tx(tx, req)
-}
-
-func LdapImport(reqJson json.RawMessage) (interface{}, error) {
-
- var req struct {
- Id int32 `json:"id"`
- }
-
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
- return nil, ldap_import.Run(req.Id)
+ return nil, ldap.Set_tx(ctx, tx, req)
}
func LdapCheck(reqJson json.RawMessage) (interface{}, error) {
diff --git a/request/request_license.go b/request/request_license.go
new file mode 100644
index 00000000..ef623209
--- /dev/null
+++ b/request/request_license.go
@@ -0,0 +1,19 @@
+package request
+
+import (
+ "context"
+ "r3/cluster"
+ "r3/config"
+
+ "github.com/jackc/pgx/v5"
+)
+
+func LicenseDel_tx(ctx context.Context, tx pgx.Tx) (interface{}, error) {
+ if err := config.SetString_tx(ctx, tx, "licenseFile", ""); err != nil {
+ return nil, err
+ }
+ if err := cluster.ConfigChanged_tx(ctx, tx, true, false, false); err != nil {
+ return nil, err
+ }
+ return nil, nil
+}
diff --git a/request/request_log.go b/request/request_log.go
index 83273629..386699e9 100644
--- a/request/request_log.go
+++ b/request/request_log.go
@@ -1,14 +1,16 @@
package request
import (
+ "context"
"encoding/json"
"r3/log"
"r3/types"
+ "github.com/jackc/pgx/v5"
"github.com/jackc/pgx/v5/pgtype"
)
-func LogGet(reqJson json.RawMessage) (interface{}, error) {
+func LogGet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var (
err error
@@ -29,7 +31,7 @@ func LogGet(reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- res.Logs, res.Total, err = log.Get(req.DateFrom, req.DateTo,
+ res.Logs, res.Total, err = log.Get_tx(ctx, tx, req.DateFrom, req.DateTo,
req.Limit, req.Offset, req.Context, req.ByString)
return res, err
diff --git a/request/request_login.go b/request/request_login.go
index ef50cfb2..755782a9 100644
--- a/request/request_login.go
+++ b/request/request_login.go
@@ -1,10 +1,12 @@
package request
import (
+ "context"
"encoding/base32"
"encoding/json"
"r3/cluster"
"r3/login"
+ "r3/login/login_meta"
"r3/types"
"github.com/gofrs/uuid"
@@ -13,7 +15,7 @@ import (
)
// user requests
-func LoginGetNames(reqJson json.RawMessage) (interface{}, error) {
+func LoginGetNames_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
ByString string `json:"byString"`
@@ -25,21 +27,21 @@ func LoginGetNames(reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return login.GetNames(req.Id, req.IdsExclude, req.ByString, req.NoLdapAssign)
+ return login.GetNames_tx(ctx, tx, req.Id, req.IdsExclude, req.ByString, req.NoLdapAssign)
}
-func LoginDelTokenFixed(reqJson json.RawMessage, loginId int64) (interface{}, error) {
+func LoginDelTokenFixed_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage, loginId int64) (interface{}, error) {
var req struct {
Id int64 `json:"id"`
}
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, login.DelTokenFixed(loginId, req.Id)
+ return nil, login.DelTokenFixed_tx(ctx, tx, loginId, req.Id)
}
-func LoginGetTokensFixed(loginId int64) (interface{}, error) {
- return login.GetTokensFixed(loginId)
+func LoginGetTokensFixed_tx(ctx context.Context, tx pgx.Tx, loginId int64) (interface{}, error) {
+ return login.GetTokensFixed_tx(ctx, tx, loginId)
}
-func LoginSetTokenFixed_tx(tx pgx.Tx, reqJson json.RawMessage, loginId int64) (interface{}, error) {
+func LoginSetTokenFixed_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage, loginId int64) (interface{}, error) {
var (
err error
@@ -56,14 +58,14 @@ func LoginSetTokenFixed_tx(tx pgx.Tx, reqJson json.RawMessage, loginId int64) (i
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- res.TokenFixed, err = login.SetTokenFixed_tx(tx, loginId, req.Name, req.Context)
+ res.TokenFixed, err = login.SetTokenFixed_tx(ctx, tx, loginId, req.Name, req.Context)
res.TokenFixedB32 = base32.StdEncoding.WithPadding(base32.NoPadding).EncodeToString([]byte(res.TokenFixed))
return res, err
}
// admin requests
-func LoginDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func LoginDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id int64 `json:"id"`
@@ -71,9 +73,9 @@ func LoginDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, login.Del_tx(tx, req.Id)
+ return nil, login.Del_tx(ctx, tx, req.Id)
}
-func LoginGet(reqJson json.RawMessage) (interface{}, error) {
+func LoginGet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var (
req struct {
@@ -81,6 +83,10 @@ func LoginGet(reqJson json.RawMessage) (interface{}, error) {
ByString string `json:"byString"`
Limit int `json:"limit"`
Offset int `json:"offset"`
+ OrderAsc bool `json:"orderAsc"`
+ OrderBy string `json:"orderBy"`
+ Meta bool `json:"meta"`
+ Roles bool `json:"roles"`
RecordRequests []types.LoginAdminRecordGet `json:"recordRequests"`
}
res struct {
@@ -93,12 +99,23 @@ func LoginGet(reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- res.Logins, res.Total, err = login.Get(req.ById, req.ByString,
- req.Limit, req.Offset, req.RecordRequests)
+ res.Logins, res.Total, err = login.Get_tx(ctx, tx, req.ById, req.ByString, req.OrderBy,
+ req.OrderAsc, req.Limit, req.Offset, req.Meta, req.Roles, req.RecordRequests)
return res, err
}
-func LoginGetMembers(reqJson json.RawMessage) (interface{}, error) {
+func LoginGetIsNotUnique_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+ var req struct {
+ LoginId int64 `json:"loginId"`
+ Content string `json:"content"`
+ Value string `json:"value"`
+ }
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+ return login_meta.GetIsNotUnique_tx(ctx, tx, req.LoginId, req.Content, req.Value)
+}
+func LoginGetMembers_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var (
err error
@@ -114,13 +131,13 @@ func LoginGetMembers(reqJson json.RawMessage) (interface{}, error) {
return nil, err
}
- res.Logins, err = login.GetByRole(req.RoleId)
+ res.Logins, err = login.GetByRole_tx(ctx, tx, req.RoleId)
if err != nil {
return nil, err
}
return res, nil
}
-func LoginGetRecords(reqJson json.RawMessage) (interface{}, error) {
+func LoginGetRecords_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
AttributeIdLookup uuid.UUID `json:"attributeIdLookup"`
@@ -132,32 +149,34 @@ func LoginGetRecords(reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return login.GetRecords(req.AttributeIdLookup, req.IdsExclude, req.ById, req.ByString)
+ return login.GetRecords_tx(ctx, tx, req.AttributeIdLookup, req.IdsExclude, req.ById, req.ByString)
}
-func LoginSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func LoginSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
- Id int64 `json:"id"`
- LdapId pgtype.Int4 `json:"ldapId"`
- LdapKey pgtype.Text `json:"ldapKey"`
- Name string `json:"name"`
- Pass string `json:"pass"`
- Active bool `json:"active"`
- Admin bool `json:"admin"`
- NoAuth bool `json:"noAuth"`
- RoleIds []uuid.UUID `json:"roleIds"`
- Records []types.LoginAdminRecordSet `json:"records"`
- TemplateId pgtype.Int8 `json:"templateId"`
+ Id int64 `json:"id"`
+ LdapId pgtype.Int4 `json:"ldapId"`
+ LdapKey pgtype.Text `json:"ldapKey"`
+ Name string `json:"name"`
+ Pass string `json:"pass"`
+ Active bool `json:"active"`
+ Admin bool `json:"admin"`
+ NoAuth bool `json:"noAuth"`
+ TokenExpiryHours pgtype.Int4 `json:"tokenExpiryHours"`
+ Meta types.LoginMeta `json:"meta"`
+ RoleIds []uuid.UUID `json:"roleIds"`
+ Records []types.LoginAdminRecordSet `json:"records"`
+ TemplateId pgtype.Int8 `json:"templateId"`
}
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return login.Set_tx(tx, req.Id, req.TemplateId, req.LdapId, req.LdapKey,
- req.Name, req.Pass, req.Admin, req.NoAuth, req.Active, req.RoleIds,
- req.Records)
+ return login.Set_tx(ctx, tx, req.Id, req.TemplateId, req.LdapId, req.LdapKey,
+ req.Name, req.Pass, req.Admin, req.NoAuth, req.Active, req.TokenExpiryHours,
+ req.Meta, req.RoleIds, req.Records)
}
-func LoginSetMembers_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func LoginSetMembers_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
RoleId uuid.UUID `json:"roleId"`
@@ -167,9 +186,9 @@ func LoginSetMembers_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error)
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, login.SetRoleLoginIds_tx(tx, req.RoleId, req.LoginIds)
+ return nil, login.SetRoleLoginIds_tx(ctx, tx, req.RoleId, req.LoginIds)
}
-func LoginKick(reqJson json.RawMessage) (interface{}, error) {
+func LoginKick(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id int64 `json:"id"`
@@ -178,9 +197,9 @@ func LoginKick(reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, cluster.LoginDisabled(true, req.Id)
+ return nil, cluster.LoginDisabled_tx(ctx, tx, true, req.Id)
}
-func LoginReauth(reqJson json.RawMessage) (interface{}, error) {
+func LoginReauth_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id int64 `json:"id"`
@@ -189,17 +208,17 @@ func LoginReauth(reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, cluster.LoginReauthorized(true, req.Id)
+ return nil, cluster.LoginReauthorized_tx(ctx, tx, true, req.Id)
}
-func LoginReauthAll() (interface{}, error) {
- return nil, cluster.LoginReauthorizedAll(true)
+func LoginReauthAll_tx(ctx context.Context, tx pgx.Tx) (interface{}, error) {
+ return nil, cluster.LoginReauthorizedAll_tx(ctx, tx, true)
}
-func LoginResetTotp_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func LoginResetTotp_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id int64 `json:"id"`
}
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, login.ResetTotp_tx(tx, req.Id)
+ return nil, login.ResetTotp_tx(ctx, tx, req.Id)
}
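LoginGet_tx now forwards caller-supplied OrderBy and OrderAsc values down to login.Get_tx. Column names cannot be bound as statement parameters, so the usual way to handle this (and presumably what login.Get_tx does; its implementation is not part of this diff) is to map the requested sort key onto a fixed whitelist before splicing it into the ORDER BY clause. A minimal sketch of that idea, with illustrative column names:

```go
package request

import "fmt"

// orderClause maps a caller-supplied sort key onto a known column; anything else falls
// back to a default, so no untrusted identifier ever reaches the SQL text.
func orderClause(orderBy string, orderAsc bool) string {
	columns := map[string]string{
		"name":   "l.name",
		"active": "l.active",
	}
	col, ok := columns[orderBy]
	if !ok {
		col = "l.name"
	}
	dir := "DESC"
	if orderAsc {
		dir = "ASC"
	}
	return fmt.Sprintf("ORDER BY %s %s", col, dir)
}
```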
diff --git a/request/request_loginForm.go b/request/request_loginForm.go
deleted file mode 100644
index 9df47086..00000000
--- a/request/request_loginForm.go
+++ /dev/null
@@ -1,55 +0,0 @@
-package request
-
-import (
- "encoding/json"
- "r3/schema/loginForm"
- "r3/types"
-
- "github.com/gofrs/uuid"
- "github.com/jackc/pgx/v5"
-)
-
-func LoginFormDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
-
- var req struct {
- Id uuid.UUID `json:"id"`
- }
-
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
- if err := loginForm.Del_tx(tx, req.Id); err != nil {
- return nil, err
- }
- return nil, nil
-}
-
-func LoginFormGet(reqJson json.RawMessage) (interface{}, error) {
-
- var (
- err error
- req struct {
- ModuleId uuid.UUID `json:"moduleId"`
- }
- res []types.LoginForm
- )
-
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
- res, err = loginForm.Get(req.ModuleId)
- if err != nil {
- return nil, err
- }
- return res, nil
-}
-
-func LoginFormSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
- var req types.LoginForm
-
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
- return nil, loginForm.Set_tx(tx, req.ModuleId, req.Id, req.AttributeIdLogin,
- req.AttributeIdLookup, req.FormId, req.Name, req.Captions)
-}
diff --git a/request/request_loginAuth.go b/request/request_login_auth.go
similarity index 68%
rename from request/request_loginAuth.go
rename to request/request_login_auth.go
index 977a6cd7..d9537b87 100644
--- a/request/request_loginAuth.go
+++ b/request/request_login_auth.go
@@ -1,6 +1,7 @@
package request
import (
+ "context"
"encoding/json"
"r3/login/login_auth"
"r3/types"
@@ -11,7 +12,7 @@ import (
// attempt login via user credentials
// applies login ID, admin and no auth state to provided parameters if successful
// returns token and success state
-func LoginAuthUser(reqJson json.RawMessage, loginId *int64, admin *bool, noAuth *bool) (interface{}, error) {
+func LoginAuthUser(ctx context.Context, reqJson json.RawMessage, loginId *int64, admin *bool, noAuth *bool) (interface{}, error) {
var (
err error
@@ -20,7 +21,7 @@ func LoginAuthUser(reqJson json.RawMessage, loginId *int64, admin *bool, noAuth
Password string `json:"password"`
// MFA details, sent together with credentials (usually on second auth attempt)
- MfaTokenId pgtype.Int4 `json:"mfaTokenId"`
+ MfaTokenId pgtype.Int4 `json:"mfaTokenId"`
MfaTokenPin pgtype.Text `json:"mfaTokenPin"`
}
res struct {
@@ -38,20 +39,19 @@ func LoginAuthUser(reqJson json.RawMessage, loginId *int64, admin *bool, noAuth
return nil, err
}
- res.Token, res.SaltKdf, res.MfaTokens, err = login_auth.User(req.Username,
- req.Password, req.MfaTokenId, req.MfaTokenPin, loginId, admin, noAuth)
+ res.LoginName, res.Token, res.SaltKdf, res.MfaTokens, err = login_auth.User(
+ ctx, req.Username, req.Password, req.MfaTokenId, req.MfaTokenPin, loginId, admin, noAuth)
if err != nil {
return nil, err
}
res.LoginId = *loginId
- res.LoginName = req.Username
return res, nil
}
// attempt login via JWT
// applies login ID, admin and no auth state to provided parameters if successful
-func LoginAuthToken(reqJson json.RawMessage, loginId *int64, admin *bool, noAuth *bool) (interface{}, error) {
+func LoginAuthToken(ctx context.Context, reqJson json.RawMessage, loginId *int64, admin *bool, noAuth *bool) (interface{}, error) {
var (
err error
@@ -68,7 +68,7 @@ func LoginAuthToken(reqJson json.RawMessage, loginId *int64, admin *bool, noAuth
return nil, err
}
- res.LoginName, err = login_auth.Token(req.Token, loginId, admin, noAuth)
+ res.LoginName, _, err = login_auth.Token(ctx, req.Token, loginId, admin, noAuth)
if err != nil {
return nil, err
}
@@ -78,9 +78,10 @@ func LoginAuthToken(reqJson json.RawMessage, loginId *int64, admin *bool, noAuth
}
// attempt login via fixed token
-func LoginAuthTokenFixed(reqJson json.RawMessage, loginId *int64) (interface{}, error) {
+func LoginAuthTokenFixed(ctx context.Context, reqJson json.RawMessage, loginId *int64) (interface{}, error) {
var (
+ err error
req struct {
LoginId int64 `json:"loginId"`
TokenFixed string `json:"tokenFixed"`
@@ -94,9 +95,11 @@ func LoginAuthTokenFixed(reqJson json.RawMessage, loginId *int64) (interface{},
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- if err := login_auth.TokenFixed(req.LoginId, "client", req.TokenFixed, &res.LanguageCode, &res.Token); err != nil {
+ res.LanguageCode, err = login_auth.TokenFixed(ctx, req.LoginId, "client", req.TokenFixed, &res.Token)
+ if err != nil {
return nil, err
}
+
*loginId = req.LoginId
return res, nil
}
diff --git a/request/request_login_clientEvent.go b/request/request_login_clientEvent.go
new file mode 100644
index 00000000..7ca51808
--- /dev/null
+++ b/request/request_login_clientEvent.go
@@ -0,0 +1,36 @@
+package request
+
+import (
+ "context"
+ "encoding/json"
+ "r3/login/login_clientEvent"
+ "r3/types"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+)
+
+func loginClientEventDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage, loginId int64) (interface{}, error) {
+ var req struct {
+ ClientEventId uuid.UUID `json:"clientEventId"`
+ }
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+ return nil, login_clientEvent.Del_tx(ctx, tx, loginId, req.ClientEventId)
+}
+
+func loginClientEventGet_tx(ctx context.Context, tx pgx.Tx, loginId int64) (interface{}, error) {
+ return login_clientEvent.Get_tx(ctx, tx, loginId)
+}
+
+func loginClientEventSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage, loginId int64) (interface{}, error) {
+ var req struct {
+ ClientEventId uuid.UUID `json:"clientEventId"`
+ LoginClientEvent types.LoginClientEvent `json:"loginClientEvent"`
+ }
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+ return nil, login_clientEvent.Set_tx(ctx, tx, loginId, req.ClientEventId, req.LoginClientEvent)
+}
diff --git a/request/request_login_favorites.go b/request/request_login_favorites.go
new file mode 100644
index 00000000..059bb98c
--- /dev/null
+++ b/request/request_login_favorites.go
@@ -0,0 +1,72 @@
+package request
+
+import (
+ "context"
+ "encoding/json"
+ "r3/login/login_favorites"
+ "r3/login/login_options"
+ "r3/types"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+ "github.com/jackc/pgx/v5/pgtype"
+)
+
+func LoginAddFavorites_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage, loginId int64) (interface{}, error) {
+ var req struct {
+ SrcFormId uuid.UUID `json:"srcFormId"` // form that this favorite is created from
+ SrcFavoriteId pgtype.UUID `json:"srcFavoriteId"` // favorite that this favorite is created from (optional)
+ ModuleId uuid.UUID `json:"moduleId"`
+ RecordIdOpen pgtype.Int8 `json:"recordIdOpen"`
+ IsMobile bool `json:"isMobile"`
+ Title pgtype.Text `json:"title"`
+ }
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+
+ id, err := login_favorites.Add_tx(ctx, tx, loginId, req.ModuleId, req.SrcFormId, req.RecordIdOpen, req.Title)
+ if err != nil {
+ return nil, err
+ }
+
+ // copy the source form's login options to this new favorite
+ return id, login_options.CopyToFavorite_tx(ctx, tx, loginId, req.IsMobile, req.SrcFormId, req.SrcFavoriteId, id)
+}
+
+func LoginGetFavorites_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage, loginId int64, isNoAuth bool) (interface{}, error) {
+ var (
+ err error
+ req struct {
+ DateCache int64 `json:"dateCache"`
+ }
+ res struct {
+ DateCache int64 `json:"dateCache"`
+ ModuleIdMap map[uuid.UUID][]types.LoginFavorite `json:"moduleIdMap"`
+ }
+ )
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+
+ if isNoAuth {
+ // public users cannot store favorites
+ res.DateCache = 0
+ res.ModuleIdMap = make(map[uuid.UUID][]types.LoginFavorite)
+ return res, nil
+ }
+
+ res.ModuleIdMap, res.DateCache, err = login_favorites.Get_tx(ctx, tx, loginId, req.DateCache)
+ if err != nil {
+ return nil, err
+ }
+ return res, nil
+}
+
+func LoginSetFavorites_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage, loginId int64) (interface{}, error) {
+ var req map[uuid.UUID][]types.LoginFavorite
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+ return nil, login_favorites.Set_tx(ctx, tx, loginId, req)
+}
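The optional fields in LoginAddFavorites_tx (SrcFavoriteId, RecordIdOpen, Title) are pgtype values, so a JSON null in the request body simply leaves them with Valid set to false instead of requiring pointer fields. A small decoding sketch with made-up values:

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/jackc/pgx/v5/pgtype"
)

func main() {
	var req struct {
		RecordIdOpen pgtype.Int8 `json:"recordIdOpen"`
		Title        pgtype.Text `json:"title"`
	}

	// hypothetical request body: no record is opened, but a title is provided
	body := []byte(`{"recordIdOpen":null,"title":"My favorite"}`)
	if err := json.Unmarshal(body, &req); err != nil {
		panic(err)
	}
	fmt.Println(req.RecordIdOpen.Valid, req.Title.Valid, req.Title.String)
	// false true My favorite
}
```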
diff --git a/request/request_login_form.go b/request/request_login_form.go
new file mode 100644
index 00000000..29a5a298
--- /dev/null
+++ b/request/request_login_form.go
@@ -0,0 +1,33 @@
+package request
+
+import (
+ "context"
+ "encoding/json"
+ "r3/schema/loginForm"
+ "r3/types"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+)
+
+func LoginFormDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+
+ var req struct {
+ Id uuid.UUID `json:"id"`
+ }
+
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+ return nil, loginForm.Del_tx(ctx, tx, req.Id)
+}
+
+func LoginFormSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+ var req types.LoginForm
+
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+ return nil, loginForm.Set_tx(ctx, tx, req.ModuleId, req.Id, req.AttributeIdLogin,
+ req.AttributeIdLookup, req.FormId, req.Name, req.Captions)
+}
diff --git a/request/request_loginKeys.go b/request/request_login_keys.go
similarity index 54%
rename from request/request_loginKeys.go
rename to request/request_login_keys.go
index e16c8145..08a68507 100644
--- a/request/request_loginKeys.go
+++ b/request/request_login_keys.go
@@ -9,7 +9,7 @@ import (
"github.com/jackc/pgx/v5"
)
-func LoginKeysGetPublic(ctx context.Context, reqJson json.RawMessage) (interface{}, error) {
+func LoginKeysGetPublic_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
LoginIds []int64 `json:"loginIds"`
@@ -20,14 +20,14 @@ func LoginKeysGetPublic(ctx context.Context, reqJson json.RawMessage) (interface
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return login_keys.GetPublic(ctx, req.RelationId, req.RecordIds, req.LoginIds)
+ return login_keys.GetPublic_tx(ctx, tx, req.RelationId, req.RecordIds, req.LoginIds)
}
-func LoginKeysReset_tx(tx pgx.Tx, loginId int64) (interface{}, error) {
- return nil, login_keys.Reset_tx(tx, loginId)
+func LoginKeysReset_tx(ctx context.Context, tx pgx.Tx, loginId int64) (interface{}, error) {
+ return nil, login_keys.Reset_tx(ctx, tx, loginId)
}
-func LoginKeysStore_tx(tx pgx.Tx, reqJson json.RawMessage, loginId int64) (interface{}, error) {
+func LoginKeysStore_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage, loginId int64) (interface{}, error) {
var req struct {
PrivateKeyEnc string `json:"privateKeyEnc"`
@@ -38,11 +38,11 @@ func LoginKeysStore_tx(tx pgx.Tx, reqJson json.RawMessage, loginId int64) (inter
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, login_keys.Store_tx(tx, loginId,
+ return nil, login_keys.Store_tx(ctx, tx, loginId,
req.PrivateKeyEnc, req.PrivateKeyEncBackup, req.PublicKey)
}
-func LoginKeysStorePrivate_tx(tx pgx.Tx, reqJson json.RawMessage, loginId int64) (interface{}, error) {
+func LoginKeysStorePrivate_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage, loginId int64) (interface{}, error) {
var req struct {
PrivateKeyEnc string `json:"privateKeyEnc"`
@@ -51,5 +51,5 @@ func LoginKeysStorePrivate_tx(tx pgx.Tx, reqJson json.RawMessage, loginId int64)
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, login_keys.StorePrivate_tx(tx, loginId, req.PrivateKeyEnc)
+ return nil, login_keys.StorePrivate_tx(ctx, tx, loginId, req.PrivateKeyEnc)
}
diff --git a/request/request_login_options.go b/request/request_login_options.go
new file mode 100644
index 00000000..07e789d0
--- /dev/null
+++ b/request/request_login_options.go
@@ -0,0 +1,60 @@
+package request
+
+import (
+ "context"
+ "encoding/json"
+ "r3/login/login_options"
+ "r3/tools"
+ "r3/types"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+ "github.com/jackc/pgx/v5/pgtype"
+)
+
+func LoginOptionsGet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage, loginId int64, isNoAuth bool) (interface{}, error) {
+ var (
+ err error
+ req struct {
+ DateCache int64 `json:"dateCache"`
+ IsMobile bool `json:"isMobile"`
+ }
+ res struct {
+ DateCache int64 `json:"dateCache"`
+ IsMobile bool `json:"isMobile"`
+ Options []types.LoginOptions `json:"options"`
+ }
+ )
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+
+ if isNoAuth {
+ // public users cannot store options
+ res.DateCache = 0
+ res.IsMobile = req.IsMobile
+ res.Options = make([]types.LoginOptions, 0)
+ return res, nil
+ }
+
+ res.DateCache = tools.GetTimeUnix()
+ res.IsMobile = req.IsMobile
+ res.Options, err = login_options.Get_tx(ctx, tx, loginId, req.IsMobile, req.DateCache)
+ if err != nil {
+ return nil, err
+ }
+ return res, nil
+}
+
+func LoginOptionsSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage, loginId int64) (interface{}, error) {
+ var req struct {
+ FavoriteId pgtype.UUID `json:"favoriteId"` // NULL if option is for non-favorited form
+ FieldId uuid.UUID `json:"fieldId"`
+ IsMobile bool `json:"isMobile"`
+ Options string `json:"options"`
+ }
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+ return nil, login_options.Set_tx(ctx, tx, loginId, req.FavoriteId, req.FieldId, req.IsMobile, req.Options)
+}
diff --git a/request/request_login_password.go b/request/request_login_password.go
new file mode 100644
index 00000000..6afb25bd
--- /dev/null
+++ b/request/request_login_password.go
@@ -0,0 +1,37 @@
+package request
+
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+ "r3/login"
+ "r3/login/login_check"
+
+ "github.com/jackc/pgx/v5"
+)
+
+func loginPasswortSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage, loginId int64) (interface{}, error) {
+
+ var req struct {
+ PwNew0 string `json:"pwNew0"`
+ PwNew1 string `json:"pwNew1"`
+ PwOld string `json:"pwOld"`
+ }
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+
+ if req.PwOld == "" || req.PwNew0 == "" || req.PwNew0 != req.PwNew1 {
+ return nil, fmt.Errorf("invalid input")
+ }
+
+ if err := login_check.Password(ctx, tx, loginId, req.PwOld); err != nil {
+ return nil, err
+ }
+ if err := login_check.PasswordComplexity(req.PwNew0); err != nil {
+ return nil, err
+ }
+
+ salt, hash := login.GenerateSaltHash(req.PwNew0)
+ return nil, login.SetSaltHash_tx(ctx, tx, salt, hash, loginId)
+}
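loginPasswortSet_tx verifies the old password, checks the complexity of the new one, and only then stores a fresh salt and hash via login.GenerateSaltHash and login.SetSaltHash_tx, neither of which is shown in this diff. Purely as a hypothetical stand-in for what a salt-and-hash helper of that shape could look like (not r3's actual implementation):

```go
package main

import (
	"crypto/rand"
	"crypto/sha512"
	"encoding/hex"
	"fmt"
)

// generateSaltHash is a hypothetical example only: random 16-byte salt,
// salted SHA-512 digest, both returned hex-encoded.
func generateSaltHash(pw string) (salt string, hash string) {
	raw := make([]byte, 16)
	if _, err := rand.Read(raw); err != nil {
		panic(err)
	}
	salt = hex.EncodeToString(raw)
	sum := sha512.Sum512([]byte(salt + pw))
	return salt, hex.EncodeToString(sum[:])
}

func main() {
	salt, hash := generateSaltHash("correct horse battery staple")
	fmt.Println(len(salt), len(hash)) // 32 128
}
```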
diff --git a/request/request_login_session.go b/request/request_login_session.go
new file mode 100644
index 00000000..312c9041
--- /dev/null
+++ b/request/request_login_session.go
@@ -0,0 +1,36 @@
+package request
+
+import (
+ "context"
+ "encoding/json"
+ "r3/login/login_session"
+
+ "github.com/jackc/pgx/v5"
+ "github.com/jackc/pgx/v5/pgtype"
+)
+
+func LoginSessionConcurrentGet_tx(ctx context.Context, tx pgx.Tx) (interface{}, error) {
+ var err error
+ var res struct {
+ Full int64 `json:"full"`
+ Limited int64 `json:"limited"`
+ }
+
+ res.Full, res.Limited, err = login_session.LogsGetConcurrentCounts_tx(ctx, tx)
+ return res, err
+}
+
+func LoginSessionsGet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+ var req struct {
+ ByString pgtype.Text `json:"byString"`
+ Limit int `json:"limit"`
+ Offset int `json:"offset"`
+ OrderBy string `json:"orderBy"`
+ OrderAsc bool `json:"orderAsc"`
+ }
+
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+ return login_session.LogsGet_tx(ctx, tx, req.ByString, req.Limit, req.Offset, req.OrderBy, req.OrderAsc)
+}
diff --git a/request/request_setting.go b/request/request_login_setting.go
similarity index 52%
rename from request/request_setting.go
rename to request/request_login_setting.go
index aa89abd5..af47fa20 100644
--- a/request/request_setting.go
+++ b/request/request_login_setting.go
@@ -1,26 +1,27 @@
package request
import (
+ "context"
"encoding/json"
- "r3/setting"
+ "r3/login/login_setting"
"r3/types"
"github.com/jackc/pgx/v5"
"github.com/jackc/pgx/v5/pgtype"
)
-func SettingsGet(loginId int64) (interface{}, error) {
- return setting.Get(
+func LoginSettingsGet_tx(ctx context.Context, tx pgx.Tx, loginId int64) (interface{}, error) {
+ return login_setting.Get_tx(ctx, tx,
pgtype.Int8{Int64: loginId, Valid: true},
pgtype.Int8{})
}
-func SettingsSet_tx(tx pgx.Tx, reqJson json.RawMessage, loginId int64) (interface{}, error) {
+func LoginSettingsSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage, loginId int64) (interface{}, error) {
var req types.Settings
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, setting.Set_tx(tx,
+ return nil, login_setting.Set_tx(ctx, tx,
pgtype.Int8{Int64: loginId, Valid: true},
pgtype.Int8{},
req, false)
diff --git a/request/request_loginTemplate.go b/request/request_login_template.go
similarity index 50%
rename from request/request_loginTemplate.go
rename to request/request_login_template.go
index b3eeb38d..f9e20166 100644
--- a/request/request_loginTemplate.go
+++ b/request/request_login_template.go
@@ -1,6 +1,7 @@
package request
import (
+ "context"
"encoding/json"
"r3/login/login_template"
"r3/types"
@@ -8,7 +9,7 @@ import (
"github.com/jackc/pgx/v5"
)
-func LoginTemplateDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func LoginTemplateDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id int64 `json:"id"`
}
@@ -16,9 +17,9 @@ func LoginTemplateDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, login_template.Del_tx(tx, req.Id)
+ return nil, login_template.Del_tx(ctx, tx, req.Id)
}
-func LoginTemplateGet(reqJson json.RawMessage) (interface{}, error) {
+func LoginTemplateGet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
ById int64 `json:"byId"`
}
@@ -26,13 +27,13 @@ func LoginTemplateGet(reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return login_template.Get(req.ById)
+ return login_template.Get_tx(ctx, tx, req.ById)
}
-func LoginTemplateSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func LoginTemplateSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req types.LoginTemplateAdmin
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return login_template.Set_tx(tx, req)
+ return login_template.Set_tx(ctx, tx, req)
}
diff --git a/request/request_login_widgets.go b/request/request_login_widgets.go
new file mode 100644
index 00000000..3884fd2b
--- /dev/null
+++ b/request/request_login_widgets.go
@@ -0,0 +1,22 @@
+package request
+
+import (
+ "context"
+ "encoding/json"
+ "r3/login/login_widget"
+ "r3/types"
+
+ "github.com/jackc/pgx/v5"
+)
+
+func LoginWidgetGroupsGet_tx(ctx context.Context, tx pgx.Tx, loginId int64) (interface{}, error) {
+ return login_widget.Get_tx(ctx, tx, loginId)
+}
+func LoginWidgetGroupsSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage, loginId int64) (interface{}, error) {
+ var req []types.LoginWidgetGroup
+
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+ return nil, login_widget.Set_tx(ctx, tx, loginId, req)
+}
diff --git a/request/request_lookups.go b/request/request_lookups.go
index fc23b2c2..fa90f6bc 100644
--- a/request/request_lookups.go
+++ b/request/request_lookups.go
@@ -1,16 +1,17 @@
package request
import (
+ "context"
"encoding/json"
"fmt"
"r3/cache"
"r3/config"
- "r3/db"
+ "github.com/jackc/pgx/v5"
"github.com/jackc/pgx/v5/pgtype"
)
-func LookupGet(reqJson json.RawMessage, loginId int64) (interface{}, error) {
+func lookupGet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage, loginId int64) (interface{}, error) {
var req struct {
Name string `json:"name"`
@@ -23,21 +24,18 @@ func LookupGet(reqJson json.RawMessage, loginId int64) (interface{}, error) {
case "access":
return cache.GetAccessById(loginId)
- case "customizing":
+ case "feedback":
var res struct {
- CompanyName string `json:"companyName"`
- CompanyWelcome string `json:"companyWelcome"`
+ Feedback bool `json:"feedback"`
+ FeedbackUrl string `json:"feedbackUrl"`
}
- res.CompanyName = config.GetString("companyName")
- res.CompanyWelcome = config.GetString("companyWelcome")
+ res.Feedback = config.GetUint64("repoFeedback") == 1
+ res.FeedbackUrl = config.GetString("repoUrl")
return res, nil
- case "feedback":
- return config.GetUint64("repoFeedback"), nil
-
case "loginHasClient":
var hasClient bool
- err := db.Pool.QueryRow(db.Ctx, `
+ err := tx.QueryRow(ctx, `
SELECT EXISTS(
SELECT *
FROM instance.login_token_fixed
@@ -56,7 +54,7 @@ func LookupGet(reqJson json.RawMessage, loginId int64) (interface{}, error) {
Public pgtype.Text `json:"public"`
}
- err := db.Pool.QueryRow(db.Ctx, `
+ err := tx.QueryRow(ctx, `
SELECT key_private_enc, key_private_enc_backup, key_public
FROM instance.login
WHERE id = $1
diff --git a/request/request_mail.go b/request/request_mail.go
deleted file mode 100644
index c0cc767c..00000000
--- a/request/request_mail.go
+++ /dev/null
@@ -1,122 +0,0 @@
-package request
-
-import (
- "encoding/json"
- "r3/cache"
- "r3/db"
- "r3/mail"
- "r3/types"
-
- "github.com/jackc/pgx/v5"
-)
-
-// mails from spooler
-func MailDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
-
- var req struct {
- Ids []int64 `json:"ids"`
- }
-
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
- return nil, mail.Del_tx(tx, req.Ids)
-}
-
-func MailGet(reqJson json.RawMessage) (interface{}, error) {
-
- var (
- err error
- req struct {
- Limit int `json:"limit"`
- Offset int `json:"offset"`
- Search string `json:"search"`
- }
- res struct {
- Mails []types.Mail `json:"mails"`
- Total int64 `json:"total"`
- }
- )
-
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
-
- res.Mails, res.Total, err = mail.Get(req.Limit, req.Offset, req.Search)
- if err != nil {
- return nil, err
- }
- return res, nil
-}
-
-// mail accounts
-func MailAccountDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
-
- var req struct {
- Id int64 `json:"id"`
- }
-
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
- return nil, mail.DelAccount_tx(tx, req.Id)
-}
-
-func MailAccountGet() (interface{}, error) {
- var res struct {
- Accounts map[int32]types.MailAccount `json:"accounts"`
- }
- res.Accounts = cache.GetMailAccountMap()
- return res, nil
-}
-
-func MailAccountSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
-
- var req struct {
- Id int32 `json:"id"`
- Name string `json:"name"`
- Mode string `json:"mode"`
- SendAs string `json:"sendAs"`
- Username string `json:"username"`
- Password string `json:"password"`
- StartTls bool `json:"startTls"`
- HostName string `json:"hostName"`
- HostPort int64 `json:"hostPort"`
- }
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
-
- if err := mail.SetAccount_tx(tx, req.Id, req.Name, req.Mode, req.SendAs,
- req.Username, req.Password, req.StartTls, req.HostName, req.HostPort); err != nil {
-
- return nil, err
- }
- return nil, nil
-}
-
-func MailAccountReload() (interface{}, error) {
- return nil, cache.LoadMailAccountMap()
-}
-
-func MailAccountTest_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
-
- var req struct {
- AccountName string `json:"accountName"`
- Recipient string `json:"recipient"`
- Subject string `json:"subject"`
- }
-
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
-
- body := "If you can read this, your mail configuration appears to work."
-
- if _, err := tx.Exec(db.Ctx, `
- SELECT instance.mail_send($1,$2,$3,'','',$4)
- `, req.Subject, body, req.Recipient, req.AccountName); err != nil {
- return nil, err
- }
- return nil, nil
-}
diff --git a/request/request_mail_account.go b/request/request_mail_account.go
new file mode 100644
index 00000000..807d3936
--- /dev/null
+++ b/request/request_mail_account.go
@@ -0,0 +1,92 @@
+package request
+
+import (
+ "context"
+ "encoding/json"
+ "errors"
+ "r3/cache"
+ "r3/types"
+
+ "github.com/jackc/pgx/v5"
+)
+
+func MailAccountDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+ var req struct {
+ Id int64 `json:"id"`
+ }
+
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+
+ _, err := tx.Exec(ctx, `
+ DELETE FROM instance.mail_account
+ WHERE id = $1
+ `, req.Id)
+ return nil, err
+}
+
+func MailAccountGet() (interface{}, error) {
+ return cache.GetMailAccountMap(), nil
+}
+
+func MailAccountSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+ var req types.MailAccount
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+
+ var err error
+ newRecord := req.Id == 0
+
+ if req.AuthMethod == "xoauth2" {
+ if !req.OauthClientId.Valid {
+ return nil, errors.New("cannot set email account with OAuth authentication but no OAuth client")
+ }
+ } else {
+ req.OauthClientId.Valid = false
+ }
+
+ if newRecord {
+ _, err = tx.Exec(ctx, `
+ INSERT INTO instance.mail_account (oauth_client_id, name, mode,
+ auth_method, send_as, username, password, start_tls, host_name,
+ host_port, comment)
+ VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11)
+ `, req.OauthClientId, req.Name, req.Mode, req.AuthMethod, req.SendAs,
+ req.Username, req.Password, req.StartTls, req.HostName, req.HostPort,
+ req.Comment)
+ } else {
+ _, err = tx.Exec(ctx, `
+ UPDATE instance.mail_account
+ SET oauth_client_id = $1, name = $2, mode = $3, auth_method = $4,
+ send_as = $5, username = $6, password = $7, start_tls = $8,
+ host_name = $9, host_port = $10, comment = $11
+ WHERE id = $12
+ `, req.OauthClientId, req.Name, req.Mode, req.AuthMethod, req.SendAs,
+ req.Username, req.Password, req.StartTls, req.HostName, req.HostPort,
+ req.Comment, req.Id)
+ }
+ return nil, err
+}
+
+func MailAccountTest_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+
+ var req struct {
+ AccountName string `json:"accountName"`
+ Recipient string `json:"recipient"`
+ Subject string `json:"subject"`
+ }
+
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+
+ body := "If you can read this, your mail configuration appears to work."
+
+ _, err := tx.Exec(ctx, `
+ SELECT instance.mail_send($1,$2,$3,'','',$4)
+ `, req.Subject, body, req.Recipient, req.AccountName)
+
+ return nil, err
+}
diff --git a/mail/mail.go b/request/request_mail_spooler.go
similarity index 55%
rename from mail/mail.go
rename to request/request_mail_spooler.go
index 39df7c51..5f57b20f 100644
--- a/mail/mail.go
+++ b/request/request_mail_spooler.go
@@ -1,29 +1,59 @@
-package mail
+package request
import (
+ "context"
+ "encoding/json"
"fmt"
- "r3/db"
"r3/types"
"github.com/jackc/pgx/v5"
)
-var searchFields = []string{"from_list", "to_list", "cc_list", "bcc_list", "subject", "body"}
+func MailSpoolerDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+ var req struct {
+ Ids []int64 `json:"ids"`
+ }
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+
+ _, err := tx.Exec(ctx, `
+ DELETE FROM instance.mail_spool
+ WHERE id = ANY($1)
+ `, req.Ids)
+
+ return nil, err
+}
-// mail spooler
-func Del_tx(tx pgx.Tx, ids []int64) error {
- for _, id := range ids {
- if _, err := tx.Exec(db.Ctx, `
- DELETE FROM instance.mail_spool
- WHERE id = $1
- `, id); err != nil {
- return err
+func MailSpoolerGet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+
+ var (
+ err error
+ req struct {
+ Limit int `json:"limit"`
+ Offset int `json:"offset"`
+ Search string `json:"search"`
+ }
+ res struct {
+ Mails []types.Mail `json:"mails"`
+ Total int64 `json:"total"`
}
+ )
+
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
}
- return nil
+
+ res.Mails, res.Total, err = mailSpoolerRead(ctx, tx, req.Limit, req.Offset, req.Search)
+ if err != nil {
+ return nil, err
+ }
+ return res, nil
}
-func Get(limit int, offset int, search string) ([]types.Mail, int64, error) {
+func mailSpoolerRead(ctx context.Context, tx pgx.Tx, limit int, offset int, search string) ([]types.Mail, int64, error) {
+
+ var searchFields = []string{"from_list", "to_list", "cc_list", "bcc_list", "subject", "body"}
// prepare SQL request and arguments
sqlArgs := make([]interface{}, 0)
@@ -42,7 +72,7 @@ func Get(limit int, offset int, search string) ([]types.Mail, int64, error) {
}
mails := make([]types.Mail, 0)
- rows, err := db.Pool.Query(db.Ctx, fmt.Sprintf(`
+ rows, err := tx.Query(ctx, fmt.Sprintf(`
SELECT id, from_list, to_list, cc_list, bcc_list, subject,
body, attempt_count, attempt_date, outgoing, date,
mail_account_id, record_id_wofk, attribute_id,
@@ -94,7 +124,7 @@ func Get(limit int, offset int, search string) ([]types.Mail, int64, error) {
}
var total int64
- if err := db.Pool.QueryRow(db.Ctx, fmt.Sprintf(`
+ if err := tx.QueryRow(ctx, fmt.Sprintf(`
SELECT COUNT(*)
FROM instance.mail_spool
%s
@@ -104,37 +134,19 @@ func Get(limit int, offset int, search string) ([]types.Mail, int64, error) {
return mails, total, nil
}
-// mail accounts
-func DelAccount_tx(tx pgx.Tx, id int64) error {
- _, err := tx.Exec(db.Ctx, `
- DELETE FROM instance.mail_account
- WHERE id = $1
- `, id)
- return err
-}
-
-func SetAccount_tx(tx pgx.Tx, id int32, name string, mode string, sendAs string,
- username string, password string, startTls bool, hostName string, hostPort int64) error {
+func MailSpoolerReset_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+ var req struct {
+ Ids []int64 `json:"ids"`
+ }
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
- newRecord := id == 0
+ _, err := tx.Exec(ctx, `
+ UPDATE instance.mail_spool
+ SET attempt_count = 0, attempt_date = 0
+ WHERE id = ANY($1)
+ `, req.Ids)
- if newRecord {
- if _, err := tx.Exec(db.Ctx, `
- INSERT INTO instance.mail_account (name, mode, send_as, username,
- password, start_tls, host_name, host_port)
- VALUES ($1,$2,$3,$4,$5,$6,$7,$8)
- `, name, mode, sendAs, username, password, startTls, hostName, hostPort); err != nil {
- return err
- }
- } else {
- if _, err := tx.Exec(db.Ctx, `
- UPDATE instance.mail_account
- SET name = $1, mode = $2, send_as = $3, username = $4, password = $5,
- start_tls = $6, host_name = $7, host_port = $8
- WHERE id = $9
- `, name, mode, sendAs, username, password, startTls, hostName, hostPort, id); err != nil {
- return err
- }
- }
- return nil
+ return nil, err
}
diff --git a/request/request_mail_traffic.go b/request/request_mail_traffic.go
new file mode 100644
index 00000000..d4e4e365
--- /dev/null
+++ b/request/request_mail_traffic.go
@@ -0,0 +1,95 @@
+package request
+
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+ "r3/types"
+
+ "github.com/jackc/pgx/v5"
+)
+
+func MailTrafficGet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+
+ var (
+ req struct {
+ Limit int `json:"limit"`
+ Offset int `json:"offset"`
+ Search string `json:"search"`
+ }
+ res struct {
+ Mails []types.MailTraffic `json:"mails"`
+ Total int64 `json:"total"`
+ }
+ )
+
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+
+ var searchFields = []string{"from_list", "to_list", "cc_list", "bcc_list", "subject"}
+
+ // prepare SQL request and arguments
+ sqlArgs := make([]interface{}, 0)
+ sqlArgs = append(sqlArgs, req.Limit)
+ sqlArgs = append(sqlArgs, req.Offset)
+ sqlWhere := ""
+ if req.Search != "" {
+ for i, field := range searchFields {
+ connector := "WHERE"
+ if i != 0 {
+ connector = "OR"
+ }
+ sqlArgs = append(sqlArgs, fmt.Sprintf("%%%s%%", req.Search))
+ sqlWhere = fmt.Sprintf("%s%s %s ILIKE $%d\n", sqlWhere, connector, field, len(sqlArgs))
+ }
+ }
+
+ rows, err := tx.Query(ctx, fmt.Sprintf(`
+ SELECT from_list, to_list, cc_list, bcc_list,
+ subject, outgoing, date, files, mail_account_id
+ FROM instance.mail_traffic
+ %s
+ ORDER BY date DESC
+ LIMIT $1
+ OFFSET $2
+ `, sqlWhere), sqlArgs...)
+ if err != nil {
+ return nil, err
+ }
+ defer rows.Close()
+
+ res.Mails = make([]types.MailTraffic, 0)
+ for rows.Next() {
+ var m types.MailTraffic
+ if err := rows.Scan(&m.FromList, &m.ToList, &m.CcList, &m.BccList,
+ &m.Subject, &m.Outgoing, &m.Date, &m.Files, &m.AccountId); err != nil {
+
+ return nil, err
+ }
+ res.Mails = append(res.Mails, m)
+ }
+
+ // get total count
+ sqlArgs = make([]interface{}, 0)
+ sqlWhere = ""
+ if req.Search != "" {
+ for i, field := range searchFields {
+ connector := "WHERE"
+ if i != 0 {
+ connector = "OR"
+ }
+ sqlArgs = append(sqlArgs, fmt.Sprintf("%%%s%%", req.Search))
+ sqlWhere = fmt.Sprintf("%s%s %s ILIKE $%d\n", sqlWhere, connector, field, len(sqlArgs))
+ }
+ }
+
+ if err := tx.QueryRow(ctx, fmt.Sprintf(`
+ SELECT COUNT(*)
+ FROM instance.mail_traffic
+ %s
+ `, sqlWhere), sqlArgs...).Scan(&res.Total); err != nil {
+ return nil, err
+ }
+ return res, nil
+}
diff --git a/request/request_menu.go b/request/request_menu.go
deleted file mode 100644
index 1be9553e..00000000
--- a/request/request_menu.go
+++ /dev/null
@@ -1,68 +0,0 @@
-package request
-
-import (
- "encoding/json"
- "r3/schema/menu"
- "r3/types"
-
- "github.com/gofrs/uuid"
- "github.com/jackc/pgx/v5"
- "github.com/jackc/pgx/v5/pgtype"
-)
-
-func MenuCopy_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
-
- var req struct {
- ModuleId uuid.UUID `json:"moduleId"`
- ModuleIdNew uuid.UUID `json:"moduleIdNew"`
- }
-
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
- return nil, menu.Copy_tx(tx, req.ModuleId, req.ModuleIdNew)
-}
-
-func MenuDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
-
- var req struct {
- Id uuid.UUID `json:"id"`
- }
-
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
- return nil, menu.Del_tx(tx, req.Id)
-}
-
-func MenuGet(reqJson json.RawMessage) (interface{}, error) {
-
- var (
- err error
- req struct {
- ModuleId uuid.UUID `json:"moduleId"`
- }
- res struct {
- Menus []types.Menu `json:"menus"`
- }
- )
-
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
- res.Menus, err = menu.Get(req.ModuleId, pgtype.UUID{})
- if err != nil {
- return nil, err
- }
- return res, nil
-}
-
-func MenuSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
-
- var req []types.Menu
-
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
- return nil, menu.Set_tx(tx, pgtype.UUID{}, req)
-}
diff --git a/request/request_menuTab.go b/request/request_menuTab.go
new file mode 100644
index 00000000..c1e1a607
--- /dev/null
+++ b/request/request_menuTab.go
@@ -0,0 +1,30 @@
+package request
+
+import (
+ "context"
+ "encoding/json"
+ "r3/schema/menuTab"
+ "r3/types"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+)
+
+func MenuTabDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+ var req uuid.UUID
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+ return nil, menuTab.Del_tx(ctx, tx, req)
+}
+
+func MenuTabSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+ var req struct {
+ MenuTab types.MenuTab `json:"menuTab"`
+ Position int `json:"position"`
+ }
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+ return nil, menuTab.Set_tx(ctx, tx, req.Position, req.MenuTab)
+}
diff --git a/request/request_module.go b/request/request_module.go
index 03c7dac7..d6e2bd4e 100644
--- a/request/request_module.go
+++ b/request/request_module.go
@@ -1,6 +1,7 @@
package request
import (
+ "context"
"encoding/json"
"r3/schema/module"
"r3/transfer"
@@ -10,8 +11,7 @@ import (
"github.com/jackc/pgx/v5"
)
-func ModuleCheckChange_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
-
+func ModuleCheckChange_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var (
err error
req struct {
@@ -26,48 +26,27 @@ func ModuleCheckChange_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, erro
return nil, err
}
- res.ModuleIdMapChanged, err = transfer.GetModuleChangedWithDependencies(req.Id)
+ res.ModuleIdMapChanged, err = transfer.GetModuleChangedWithDependencies_tx(ctx, tx, req.Id)
if err != nil {
return nil, err
}
return res, nil
}
-func ModuleDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
-
+func ModuleDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id uuid.UUID `json:"id"`
}
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, module.Del_tx(tx, req.Id)
-}
-
-func ModuleGet() (interface{}, error) {
-
- var (
- err error
- res struct {
- Modules []types.Module `json:"modules"`
- }
- )
-
- res.Modules, err = module.Get([]uuid.UUID{})
- if err != nil {
- return nil, err
- }
- return res, nil
+ return nil, module.Del_tx(ctx, tx, req.Id)
}
-func ModuleSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
-
+func ModuleSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req types.Module
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, module.Set_tx(tx, req.Id, req.ParentId, req.FormId, req.IconId,
- req.Name, req.Color1, req.Position, req.LanguageMain, req.ReleaseBuild,
- req.ReleaseBuildApp, req.ReleaseDate, req.DependsOn, req.StartForms,
- req.Languages, req.ArticleIdsHelp, req.Captions)
+ return module.SetReturnId_tx(ctx, tx, req)
}
diff --git a/request/request_module_meta.go b/request/request_module_meta.go
new file mode 100644
index 00000000..305a08ea
--- /dev/null
+++ b/request/request_module_meta.go
@@ -0,0 +1,32 @@
+package request
+
+import (
+ "context"
+ "encoding/json"
+ "r3/config/module_meta"
+ "r3/types"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+)
+
+func ModuleMetaSetLanguagesCustom_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+ var req struct {
+ Id uuid.UUID `json:"id"`
+ Languages []string `json:"languages"`
+ }
+
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+ return nil, module_meta.SetLanguagesCustom_tx(ctx, tx, req.Id, req.Languages)
+}
+
+func ModuleMetaSetOptions_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+ var req types.ModuleMeta
+
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+ return nil, module_meta.SetOptions_tx(ctx, tx, req.Id, req.Hidden, req.Owner, req.Position)
+}
diff --git a/request/request_module_option.go b/request/request_module_option.go
deleted file mode 100644
index 241517ef..00000000
--- a/request/request_module_option.go
+++ /dev/null
@@ -1,36 +0,0 @@
-package request
-
-import (
- "encoding/json"
- "r3/module_option"
- "r3/types"
-
- "github.com/jackc/pgx/v5"
-)
-
-func ModuleOptionGet() (interface{}, error) {
-
- var (
- err error
- res []types.ModuleOption
- )
-
- res, err = module_option.Get()
- if err != nil {
- return nil, err
- }
- return res, nil
-}
-
-func ModuleOptionSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
-
- var req types.ModuleOption
-
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
- if err := module_option.Set_tx(tx, req.Id, req.Hidden, req.Owner, req.Position); err != nil {
- return nil, err
- }
- return nil, nil
-}
diff --git a/request/request_oauth_client.go b/request/request_oauth_client.go
new file mode 100644
index 00000000..40dd98e7
--- /dev/null
+++ b/request/request_oauth_client.go
@@ -0,0 +1,61 @@
+package request
+
+import (
+ "context"
+ "encoding/json"
+ "r3/cache"
+ "r3/types"
+
+ "github.com/jackc/pgx/v5"
+)
+
+func OauthClientDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+ var req struct {
+ Id int64 `json:"id"`
+ }
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+
+ _, err := tx.Exec(ctx, `
+ DELETE FROM instance.oauth_client
+ WHERE id = $1
+ `, req.Id)
+ return nil, err
+}
+
+func OauthClientGet() (interface{}, error) {
+ return cache.GetOauthClientMap(), nil
+}
+
+func OauthClientReload_tx(ctx context.Context, tx pgx.Tx) (interface{}, error) {
+ return nil, cache.LoadOauthClientMap_tx(ctx, tx)
+}
+
+func OauthClientSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+ var req types.OauthClient
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+
+ var err error
+ newRecord := req.Id == 0
+
+ if newRecord {
+ _, err = tx.Exec(ctx, `
+ INSERT INTO instance.oauth_client (name, client_id, client_secret,
+ date_expiry, scopes, tenant, token_url)
+ VALUES ($1,$2,$3,$4,$5,$6,$7)
+ `, req.Name, req.ClientId, req.ClientSecret, req.DateExpiry,
+ req.Scopes, req.Tenant, req.TokenUrl)
+ } else {
+ _, err = tx.Exec(ctx, `
+ UPDATE instance.oauth_client
+ SET name = $1, client_id = $2, client_secret = $3, date_expiry = $4,
+ scopes = $5, tenant = $6, token_url = $7
+ WHERE id = $8
+ `, req.Name, req.ClientId, req.ClientSecret, req.DateExpiry,
+ req.Scopes, req.Tenant, req.TokenUrl, req.Id)
+ }
+ return nil, err
+}
diff --git a/request/request_package.go b/request/request_package.go
index e4c764ae..12ea525d 100644
--- a/request/request_package.go
+++ b/request/request_package.go
@@ -1,14 +1,17 @@
package request
import (
- "io/ioutil"
+ "context"
+ "os"
"r3/cache"
"r3/config"
"r3/tools"
"r3/transfer"
+
+ "github.com/jackc/pgx/v5"
)
-func PackageInstall() (interface{}, error) {
+func PackageInstall_tx(ctx context.Context, tx pgx.Tx) (interface{}, error) {
// store package file from embedded binary data to temp folder
filePath, err := tools.GetUniqueFilePath(config.File.Paths.Temp, 8999999, 9999999)
@@ -16,9 +19,9 @@ func PackageInstall() (interface{}, error) {
return nil, err
}
- if err := ioutil.WriteFile(filePath, cache.Package_CoreCompany, 0644); err != nil {
+ if err := os.WriteFile(filePath, cache.Package_CoreCompany, 0644); err != nil {
return nil, err
}
- return nil, transfer.ImportFromFiles([]string{filePath})
+ return nil, transfer.ImportFromFiles_tx(ctx, tx, []string{filePath})
}
diff --git a/request/request_password.go b/request/request_password.go
deleted file mode 100644
index 2cc52fb7..00000000
--- a/request/request_password.go
+++ /dev/null
@@ -1,21 +0,0 @@
-package request
-
-import (
- "encoding/json"
- "r3/password"
-
- "github.com/jackc/pgx/v5"
-)
-
-func PasswortSet_tx(tx pgx.Tx, reqJson json.RawMessage, loginId int64) (interface{}, error) {
-
- var req struct {
- PwNew0 string `json:"pwNew0"`
- PwNew1 string `json:"pwNew1"`
- PwOld string `json:"pwOld"`
- }
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
- return nil, password.Set_tx(tx, loginId, req.PwOld, req.PwNew0, req.PwNew1)
-}
diff --git a/request/request_pgFunction.go b/request/request_pgFunction.go
index 9fac5ec8..b95295e9 100644
--- a/request/request_pgFunction.go
+++ b/request/request_pgFunction.go
@@ -1,10 +1,11 @@
package request
import (
+ "context"
"encoding/json"
"fmt"
"r3/cache"
- "r3/db"
+ "r3/handler"
"r3/schema/pgFunction"
"r3/types"
"strings"
@@ -13,7 +14,7 @@ import (
"github.com/jackc/pgx/v5"
)
-func PgFunctionDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func PgFunctionDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id uuid.UUID `json:"id"`
@@ -22,10 +23,10 @@ func PgFunctionDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, pgFunction.Del_tx(tx, req.Id)
+ return nil, pgFunction.Del_tx(ctx, tx, req.Id)
}
-func PgFunctionExec_tx(tx pgx.Tx, reqJson json.RawMessage, onlyFrontendFnc bool) (interface{}, error) {
+func PgFunctionExec_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage, onlyFrontendFnc bool) (interface{}, error) {
cache.Schema_mx.RLock()
defer cache.Schema_mx.RUnlock()
@@ -40,13 +41,13 @@ func PgFunctionExec_tx(tx pgx.Tx, reqJson json.RawMessage, onlyFrontendFnc bool)
fnc, exists := cache.PgFunctionIdMap[req.Id]
if !exists {
- return nil, fmt.Errorf("backend function (ID %s) does not exist", req.Id)
+ return nil, handler.ErrSchemaUnknownPgFunction(req.Id)
}
if fnc.IsTrigger {
- return nil, fmt.Errorf("backend function (ID %s) is a trigger function, it cannot be called directly", req.Id)
+ return nil, handler.ErrSchemaTriggerPgFunctionCall(req.Id)
}
if onlyFrontendFnc && !fnc.IsFrontendExec {
- return nil, fmt.Errorf("backend function (ID %s) may not be called from the frontend", req.Id)
+ return nil, handler.ErrSchemaBadFrontendExecPgFunctionCall(req.Id)
}
mod := cache.ModuleIdMap[fnc.ModuleId]
@@ -57,7 +58,7 @@ func PgFunctionExec_tx(tx pgx.Tx, reqJson json.RawMessage, onlyFrontendFnc bool)
}
var returnIf interface{}
- if err := tx.QueryRow(db.Ctx, fmt.Sprintf(`
+ if err := tx.QueryRow(ctx, fmt.Sprintf(`
SELECT "%s"."%s"(%s)
`, mod.Name, fnc.Name, strings.Join(placeholders, ",")),
req.Args...).Scan(&returnIf); err != nil {
@@ -67,26 +68,12 @@ func PgFunctionExec_tx(tx pgx.Tx, reqJson json.RawMessage, onlyFrontendFnc bool)
return returnIf, nil
}
-func PgFunctionGet(reqJson json.RawMessage) (interface{}, error) {
-
- var req struct {
- ModuleId uuid.UUID `json:"moduleId"`
- }
-
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
- return pgFunction.Get(req.ModuleId)
-}
-
-func PgFunctionSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func PgFunctionSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req types.PgFunction
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, pgFunction.Set_tx(tx, req.ModuleId, req.Id, req.Name, req.CodeArgs,
- req.CodeFunction, req.CodeReturns, req.IsFrontendExec, req.IsTrigger,
- req.Schedules, req.Captions)
+ return nil, pgFunction.Set_tx(ctx, tx, req)
}
diff --git a/request/request_pgIndex.go b/request/request_pgIndex.go
index 396a8258..15520ee8 100644
--- a/request/request_pgIndex.go
+++ b/request/request_pgIndex.go
@@ -1,6 +1,7 @@
package request
import (
+ "context"
"encoding/json"
"r3/schema/pgIndex"
"r3/types"
@@ -9,27 +10,17 @@ import (
"github.com/jackc/pgx/v5"
)
-func PgIndexDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func PgIndexDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id uuid.UUID `json:"id"`
}
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, pgIndex.Del_tx(tx, req.Id)
+ return nil, pgIndex.Del_tx(ctx, tx, req.Id)
}
-func PgIndexGet(reqJson json.RawMessage) (interface{}, error) {
- var req struct {
- RelationId uuid.UUID `json:"relationId"`
- }
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
- return pgIndex.Get(req.RelationId)
-}
-
-func PgIndexSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func PgIndexSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req types.PgIndex
if err := json.Unmarshal(reqJson, &req); err != nil {
@@ -39,5 +30,5 @@ func PgIndexSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
req.AutoFki = false
req.PrimaryKey = false
- return nil, pgIndex.Set_tx(tx, req)
+ return nil, pgIndex.Set_tx(ctx, tx, req)
}
diff --git a/request/request_pgTrigger.go b/request/request_pgTrigger.go
index 0ee1c172..b08096e4 100644
--- a/request/request_pgTrigger.go
+++ b/request/request_pgTrigger.go
@@ -1,6 +1,7 @@
package request
import (
+ "context"
"encoding/json"
"r3/schema/pgTrigger"
"r3/types"
@@ -9,7 +10,7 @@ import (
"github.com/jackc/pgx/v5"
)
-func PgTriggerDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func PgTriggerDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id uuid.UUID `json:"id"`
@@ -18,18 +19,14 @@ func PgTriggerDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, pgTrigger.Del_tx(tx, req.Id)
+ return nil, pgTrigger.Del_tx(ctx, tx, req.Id)
}
-func PgTriggerSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
-
+func PgTriggerSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req types.PgTrigger
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, pgTrigger.Set_tx(tx, req.PgFunctionId, req.Id, req.RelationId,
- req.OnInsert, req.OnUpdate, req.OnDelete, req.IsConstraint,
- req.IsDeferrable, req.IsDeferred, req.PerRow, req.Fires,
- req.CodeCondition)
+ return nil, pgTrigger.Set_tx(ctx, tx, req)
}
diff --git a/request/request_preset.go b/request/request_preset.go
index 6ec60314..219b0d2f 100644
--- a/request/request_preset.go
+++ b/request/request_preset.go
@@ -1,6 +1,7 @@
package request
import (
+ "context"
"encoding/json"
"r3/schema/preset"
"r3/types"
@@ -9,7 +10,7 @@ import (
"github.com/jackc/pgx/v5"
)
-func PresetDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func PresetDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id uuid.UUID `json:"id"`
@@ -18,16 +19,16 @@ func PresetDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, preset.Del_tx(tx, req.Id)
+ return nil, preset.Del_tx(ctx, tx, req.Id)
}
-func PresetSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func PresetSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req types.Preset
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, preset.Set_tx(tx, req.RelationId, req.Id, req.Name,
- req.Protected, req.Values)
+ return nil, preset.Set_tx(ctx, tx, req.RelationId, req.Id,
+ req.Name, req.Protected, req.Values)
}
diff --git a/request/request_public.go b/request/request_public.go
index 330ab4e2..39f24b38 100644
--- a/request/request_public.go
+++ b/request/request_public.go
@@ -1,40 +1,79 @@
package request
import (
+ "math/rand"
"r3/cache"
"r3/config"
+ "r3/types"
+
+ "github.com/gofrs/uuid"
)
func PublicGet() (interface{}, error) {
- var res struct {
- Activated bool `json:"activated"`
- AppName string `json:"appName"`
- AppNameShort string `json:"appNameShort"`
- AppVersion string `json:"appVersion"`
- ClusterNodeName string `json:"clusterNodeName"`
- CompanyColorHeader string `json:"companyColorHeader"`
- CompanyColorLogin string `json:"companyColorLogin"`
- CompanyLogo string `json:"companyLogo"`
- CompanyLogoUrl string `json:"companyLogoUrl"`
- CompanyName string `json:"companyName"`
- CompanyWelcome string `json:"companyWelcome"`
- LanguageCodes []string `json:"languageCodes"`
- ProductionMode uint64 `json:"productionMode"`
- SchemaTimestamp int64 `json:"schemaTimestamp"`
+
+ // random background from available list
+ var loginBackgrounds = config.GetUint64Slice("loginBackgrounds")
+ var loginBackground uint64
+ if len(loginBackgrounds) == 0 {
+ loginBackground = 0
+ } else {
+ loginBackground = loginBackgrounds[rand.Intn(len(loginBackgrounds))]
}
- res.Activated = config.GetLicenseActive()
- res.AppName = config.GetString("appName")
- res.AppNameShort = config.GetString("appNameShort")
- res.AppVersion, _, _, _ = config.GetAppVersions()
- res.ClusterNodeName = cache.GetNodeName()
- res.CompanyColorHeader = config.GetString("companyColorHeader")
- res.CompanyColorLogin = config.GetString("companyColorLogin")
- res.CompanyLogo = config.GetString("companyLogo")
- res.CompanyLogoUrl = config.GetString("companyLogoUrl")
- res.CompanyName = config.GetString("companyName")
- res.CompanyWelcome = config.GetString("companyWelcome")
- res.LanguageCodes = cache.GetCaptionLanguageCodes()
- res.ProductionMode = config.GetUint64("productionMode")
- res.SchemaTimestamp = cache.GetSchemaTimestamp()
- return res, nil
+
+ return struct {
+ Activated bool `json:"activated"`
+ AppName string `json:"appName"`
+ AppNameShort string `json:"appNameShort"`
+ AppVersion string `json:"appVersion"`
+ CaptionMapCustom types.CaptionMapsAll `json:"captionMapCustom"`
+ ClusterNodeName string `json:"clusterNodeName"`
+ CompanyColorHeader string `json:"companyColorHeader"`
+ CompanyColorLogin string `json:"companyColorLogin"`
+ CompanyLoginImage string `json:"companyLoginImage"`
+ CompanyLogo string `json:"companyLogo"`
+ CompanyLogoUrl string `json:"companyLogoUrl"`
+ CompanyName string `json:"companyName"`
+ CompanyWelcome string `json:"companyWelcome"`
+ Css string `json:"css"`
+ LanguageCodes []string `json:"languageCodes"`
+ LoginBackground uint64 `json:"loginBackground"`
+ Mirror bool `json:"mirror"`
+ ModuleIdMapMeta map[uuid.UUID]types.ModuleMeta `json:"moduleIdMapMeta"`
+ PresetIdMapRecordId map[uuid.UUID]int64 `json:"presetIdMapRecordId"`
+ ProductionMode uint64 `json:"productionMode"`
+ PwaDomainMap map[string]uuid.UUID `json:"pwaDomainMap"`
+ SearchDictionaries []string `json:"searchDictionaries"`
+ SystemMsg types.SystemMsg `json:"systemMsg"`
+ TokenKeepEnable bool `json:"tokenKeepEnable"`
+ }{
+ Activated: config.GetLicenseActive(),
+ AppName: config.GetString("appName"),
+ AppNameShort: config.GetString("appNameShort"),
+ AppVersion: config.GetAppVersion().Full,
+ CaptionMapCustom: cache.GetCaptionMapCustom(),
+ ClusterNodeName: cache.GetNodeName(),
+ CompanyColorHeader: config.GetString("companyColorHeader"),
+ CompanyColorLogin: config.GetString("companyColorLogin"),
+ CompanyLoginImage: config.GetString("companyLoginImage"),
+ CompanyLogo: config.GetString("companyLogo"),
+ CompanyLogoUrl: config.GetString("companyLogoUrl"),
+ CompanyName: config.GetString("companyName"),
+ CompanyWelcome: config.GetString("companyWelcome"),
+ Css: config.GetString("css"),
+ LanguageCodes: cache.GetCaptionLanguageCodes(),
+ LoginBackground: loginBackground,
+ Mirror: config.File.Mirror,
+ ModuleIdMapMeta: cache.GetModuleIdMapMeta(),
+ PresetIdMapRecordId: cache.GetPresetRecordIds(),
+ ProductionMode: config.GetUint64("productionMode"),
+ PwaDomainMap: cache.GetPwaDomainMap(),
+ SearchDictionaries: cache.GetSearchDictionaries(),
+ SystemMsg: types.SystemMsg{
+ Date0: config.GetUint64("systemMsgDate0"),
+ Date1: config.GetUint64("systemMsgDate1"),
+ Maintenance: config.GetUint64("systemMsgMaintenance") == 1,
+ Text: config.GetString("systemMsgText"),
+ },
+ TokenKeepEnable: config.GetUint64("tokenKeepEnable") == 1,
+ }, nil
}
diff --git a/request/request_pwaDomain.go b/request/request_pwaDomain.go
new file mode 100644
index 00000000..6da814f6
--- /dev/null
+++ b/request/request_pwaDomain.go
@@ -0,0 +1,32 @@
+package request
+
+import (
+ "context"
+ "encoding/json"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+)
+
+func PwaDomainSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+
+ var req map[string]uuid.UUID
+
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+
+ if _, err := tx.Exec(ctx, `DELETE FROM instance.pwa_domain`); err != nil {
+ return nil, err
+ }
+
+ for domain, moduleId := range req {
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO instance.pwa_domain (module_id, domain)
+ VALUES ($1,$2)
+ `, moduleId, domain); err != nil {
+ return nil, err
+ }
+ }
+ return nil, nil
+}
diff --git a/request/request_relation.go b/request/request_relation.go
index e5b40ba3..c0ddcda2 100644
--- a/request/request_relation.go
+++ b/request/request_relation.go
@@ -1,6 +1,7 @@
package request
import (
+ "context"
"encoding/json"
"r3/schema/relation"
"r3/types"
@@ -9,7 +10,7 @@ import (
"github.com/jackc/pgx/v5"
)
-func RelationDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func RelationDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id uuid.UUID `json:"id"`
}
@@ -17,40 +18,18 @@ func RelationDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, relation.Del_tx(tx, req.Id)
+ return nil, relation.Del_tx(ctx, tx, req.Id)
}
-func RelationGet(reqJson json.RawMessage) (interface{}, error) {
-
- var (
- err error
- req struct {
- ModuleId uuid.UUID `json:"moduleId"`
- }
- res struct {
- Relations []types.Relation `json:"relations"`
- }
- )
-
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
- res.Relations, err = relation.Get(req.ModuleId)
- if err != nil {
- return nil, err
- }
- return res, nil
-}
-
-func RelationSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func RelationSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req types.Relation
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, relation.Set_tx(tx, req)
+ return nil, relation.Set_tx(ctx, tx, req)
}
-func RelationPreview(reqJson json.RawMessage) (interface{}, error) {
+func RelationPreview_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id uuid.UUID `json:"id"`
@@ -61,5 +40,5 @@ func RelationPreview(reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return relation.GetPreview(req.Id, req.Limit, req.Offset)
+ return relation.GetPreview(ctx, tx, req.Id, req.Limit, req.Offset)
}
diff --git a/request/request_repo.go b/request/request_repo.go
index 7592b584..bb574a07 100644
--- a/request/request_repo.go
+++ b/request/request_repo.go
@@ -1,16 +1,17 @@
package request
import (
+ "context"
"encoding/json"
- "r3/db"
"r3/repo"
"r3/transfer"
"r3/types"
"github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
)
-func RepoModuleGet(reqJson json.RawMessage) (interface{}, error) {
+func RepoModuleGet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var (
err error
@@ -18,10 +19,10 @@ func RepoModuleGet(reqJson json.RawMessage) (interface{}, error) {
ByString string `json:"byString"`
LanguageCode string `json:"languageCode"`
Limit int `json:"limit"`
- Offset int `json:"offset"`
GetInstalled bool `json:"getInstalled"`
- GetNew bool `json:"getNew"`
GetInStore bool `json:"getInStore"`
+ GetNew bool `json:"getNew"`
+ Offset int `json:"offset"`
}
res struct {
Count int `json:"count"`
@@ -32,9 +33,9 @@ func RepoModuleGet(reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- res.RepoModules, res.Count, err = repo.GetModule(req.ByString,
- req.LanguageCode, req.Limit, req.Offset, req.GetInstalled,
- req.GetNew, req.GetInStore)
+ res.RepoModules, res.Count, err = repo.GetModule_tx(ctx, tx, req.ByString,
+ req.LanguageCode, req.Limit, req.Offset, req.GetInstalled, req.GetNew,
+ req.GetInStore)
if err != nil {
return nil, err
@@ -42,8 +43,7 @@ func RepoModuleGet(reqJson json.RawMessage) (interface{}, error) {
return res, nil
}
-func RepoModuleInstall(reqJson json.RawMessage) (interface{}, error) {
-
+func RepoModuleInstall_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
FileId uuid.UUID `json:"fileId"`
}
@@ -56,16 +56,16 @@ func RepoModuleInstall(reqJson json.RawMessage) (interface{}, error) {
if err != nil {
return nil, err
}
- return nil, transfer.ImportFromFiles([]string{filePath})
+ return nil, transfer.ImportFromFiles_tx(ctx, tx, []string{filePath})
}
-func RepoModuleInstallAll() (interface{}, error) {
+func RepoModuleInstallAll_tx(ctx context.Context, tx pgx.Tx) (interface{}, error) {
// get all files to be updated from repository
fileIds := make([]uuid.UUID, 0)
filePaths := make([]string, 0)
- if err := db.Pool.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT ARRAY_AGG(rm.file)
FROM app.module AS m
INNER JOIN instance.repo_module AS rm ON rm.module_id_wofk = m.id
@@ -81,9 +81,9 @@ func RepoModuleInstallAll() (interface{}, error) {
}
filePaths = append(filePaths, filePath)
}
- return nil, transfer.ImportFromFiles(filePaths)
+ return nil, transfer.ImportFromFiles_tx(ctx, tx, filePaths)
}
-func RepoModuleUpdate() (interface{}, error) {
- return nil, repo.Update()
+func RepoModuleUpdate_tx(ctx context.Context, tx pgx.Tx) (interface{}, error) {
+ return nil, repo.Update_tx(ctx, tx)
}
diff --git a/request/request_role.go b/request/request_role.go
index e375b008..88961e76 100644
--- a/request/request_role.go
+++ b/request/request_role.go
@@ -1,6 +1,7 @@
package request
import (
+ "context"
"encoding/json"
"r3/schema/role"
"r3/types"
@@ -9,7 +10,7 @@ import (
"github.com/jackc/pgx/v5"
)
-func RoleDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func RoleDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Id uuid.UUID `json:"id"`
@@ -17,37 +18,14 @@ func RoleDel_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, role.Del_tx(tx, req.Id)
+ return nil, role.Del_tx(ctx, tx, req.Id)
}
-func RoleGet(reqJson json.RawMessage) (interface{}, error) {
-
- var (
- err error
- req struct {
- ModuleId uuid.UUID `json:"moduleId"`
- }
- res struct {
- Roles []types.Role `json:"roles"`
- }
- )
-
- if err := json.Unmarshal(reqJson, &req); err != nil {
- return nil, err
- }
-
- res.Roles, err = role.Get(req.ModuleId)
- if err != nil {
- return nil, err
- }
- return res, nil
-}
-
-func RoleSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func RoleSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req types.Role
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, role.Set_tx(tx, req)
+ return nil, role.Set_tx(ctx, tx, req)
}
diff --git a/request/request_scheduler.go b/request/request_scheduler.go
index 8fa28e96..b47f9d42 100644
--- a/request/request_scheduler.go
+++ b/request/request_scheduler.go
@@ -1,12 +1,13 @@
package request
import (
- "r3/db"
+ "context"
"github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
)
-func Get() (interface{}, error) {
+func schedulersGet_tx(ctx context.Context, tx pgx.Tx) (interface{}, error) {
type nodeMeta struct {
Name string `json:"name"`
@@ -28,7 +29,7 @@ func Get() (interface{}, error) {
}
tasks := make([]task, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT fs.pg_function_id,
s.pg_function_schedule_id,
s.date_attempt,
@@ -54,11 +55,12 @@ func Get() (interface{}, error) {
) AS node_meta
FROM instance.schedule AS s
LEFT JOIN app.pg_function_schedule AS fs ON fs.id = s.pg_function_schedule_id
+ LEFT JOIN app.pg_function AS pg ON pg.id = fs.pg_function_id
LEFT JOIN instance.task AS t ON t.name = s.task_name
ORDER BY
- t.name ASC,
- fs.pg_function_id ASC,
- fs.id ASC
+ t.name ASC,
+ pg.module_id ASC,
+ pg.name ASC
`)
if err != nil {
return tasks, err
diff --git a/request/request_schema.go b/request/request_schema.go
index 1ae96f2d..d67e4b7f 100644
--- a/request/request_schema.go
+++ b/request/request_schema.go
@@ -1,6 +1,7 @@
package request
import (
+ "context"
"encoding/json"
"r3/cluster"
"r3/schema"
@@ -10,7 +11,7 @@ import (
"github.com/jackc/pgx/v5/pgtype"
)
-func SchemaCheck_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func SchemaCheck_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
ModuleId uuid.UUID `json:"moduleId"`
@@ -19,10 +20,10 @@ func SchemaCheck_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, schema.ValidateDependency_tx(tx, req.ModuleId)
+ return nil, schema.ValidateDependency_tx(ctx, tx, req.ModuleId)
}
-func SchemaReload(reqJson json.RawMessage) (interface{}, error) {
+func SchemaReload_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
ModuleId pgtype.UUID `json:"moduleId"`
@@ -36,5 +37,5 @@ func SchemaReload(reqJson json.RawMessage) (interface{}, error) {
if req.ModuleId.Valid {
modIds = append(modIds, req.ModuleId.Bytes)
}
- return nil, cluster.SchemaChanged(true, true, modIds)
+ return nil, cluster.SchemaChanged_tx(ctx, tx, true, modIds)
}
diff --git a/request/request_system.go b/request/request_system.go
deleted file mode 100644
index 41bbb81b..00000000
--- a/request/request_system.go
+++ /dev/null
@@ -1,17 +0,0 @@
-package request
-
-import (
- "r3/config"
-)
-
-func SystemGet() (interface{}, error) {
-
- var res struct {
- AppBuild string `json:"appBuild"`
- EmbeddedDb bool `json:"embeddedDb"`
- }
- _, _, res.AppBuild, _ = config.GetAppVersions()
- res.EmbeddedDb = config.File.Db.Embedded
-
- return res, nil
-}
diff --git a/request/request_task.go b/request/request_task.go
index 3abc7bdf..3032fa92 100644
--- a/request/request_task.go
+++ b/request/request_task.go
@@ -1,16 +1,15 @@
package request
import (
+ "context"
"encoding/json"
- "r3/db"
- "r3/task"
+ "fmt"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
)
-func TaskSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
-
+func TaskSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
Active bool `json:"active"`
Interval int64 `json:"interval"`
@@ -20,10 +19,29 @@ func TaskSet_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, task.Set_tx(tx, req.Name, req.Interval, req.Active)
+
+ var activeOnly bool
+ if err := tx.QueryRow(ctx, `
+ SELECT active_only
+ FROM instance.task
+ WHERE name = $1
+ `, req.Name).Scan(&activeOnly); err != nil {
+ return nil, err
+ }
+
+ if activeOnly && !req.Active {
+ return nil, fmt.Errorf("cannot disable active-only task")
+ }
+
+ _, err := tx.Exec(ctx, `
+ UPDATE instance.task
+ SET interval_seconds = $1, active = $2
+ WHERE name = $3
+ `, req.Interval, req.Active, req.Name)
+ return nil, err
}
-func TaskRun(reqJson json.RawMessage) (interface{}, error) {
+func TaskRun_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
// trigger PG function scheduler by ID
@@ -37,7 +55,7 @@ func TaskRun(reqJson json.RawMessage) (interface{}, error) {
return nil, err
}
- _, err := db.Pool.Exec(db.Ctx, `
+ _, err := tx.Exec(ctx, `
SELECT instance_cluster.run_task($1,$2,$3)
`, req.TaskName, req.PgFunctionId, req.PgFunctionScheduleId)
diff --git a/request/request_transfer.go b/request/request_transfer.go
index a1260169..5ee29739 100644
--- a/request/request_transfer.go
+++ b/request/request_transfer.go
@@ -1,6 +1,7 @@
package request
import (
+ "context"
"encoding/json"
"r3/transfer"
@@ -8,7 +9,7 @@ import (
"github.com/jackc/pgx/v5"
)
-func TransferAddVersion_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+func TransferAddVersion_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
var req struct {
ModuleId uuid.UUID `json:"moduleId"`
@@ -17,7 +18,7 @@ func TransferAddVersion_tx(tx pgx.Tx, reqJson json.RawMessage) (interface{}, err
if err := json.Unmarshal(reqJson, &req); err != nil {
return nil, err
}
- return nil, transfer.AddVersion_tx(tx, req.ModuleId)
+ return nil, transfer.AddVersion_tx(ctx, tx, req.ModuleId)
}
func TransferStoreExportKey(reqJson json.RawMessage) (interface{}, error) {
diff --git a/request/request_variable.go b/request/request_variable.go
new file mode 100644
index 00000000..bd4e6455
--- /dev/null
+++ b/request/request_variable.go
@@ -0,0 +1,27 @@
+package request
+
+import (
+ "context"
+ "encoding/json"
+ "r3/schema/variable"
+ "r3/types"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+)
+
+func VariableDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+ var req uuid.UUID
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+ return nil, variable.Del_tx(ctx, tx, req)
+}
+
+func VariableSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+ var req types.Variable
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+ return nil, variable.Set_tx(ctx, tx, req)
+}
diff --git a/request/request_widget.go b/request/request_widget.go
new file mode 100644
index 00000000..c4a71322
--- /dev/null
+++ b/request/request_widget.go
@@ -0,0 +1,31 @@
+package request
+
+import (
+ "context"
+ "encoding/json"
+ "r3/schema/widget"
+ "r3/types"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+)
+
+func WidgetDel_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+
+ var req struct {
+ Id uuid.UUID `json:"id"`
+ }
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+ return nil, widget.Del_tx(ctx, tx, req.Id)
+}
+
+func WidgetSet_tx(ctx context.Context, tx pgx.Tx, reqJson json.RawMessage) (interface{}, error) {
+
+ var req types.Widget
+ if err := json.Unmarshal(reqJson, &req); err != nil {
+ return nil, err
+ }
+ return nil, widget.Set_tx(ctx, tx, req)
+}
diff --git a/scheduler/scheduler.go b/scheduler/scheduler.go
index 48ca20f3..8e2b42af 100644
--- a/scheduler/scheduler.go
+++ b/scheduler/scheduler.go
@@ -1,6 +1,7 @@
package scheduler
import (
+ "context"
"errors"
"fmt"
"os"
@@ -13,14 +14,17 @@ import (
"r3/db"
"r3/ldap/ldap_import"
"r3/log"
- "r3/mail/attach"
- "r3/mail/receive"
- "r3/mail/send"
"r3/repo"
"r3/schema"
+ "r3/spooler/mail_attach"
+ "r3/spooler/mail_receive"
+ "r3/spooler/mail_send"
+ "r3/spooler/rest_send"
"r3/tools"
"r3/transfer"
+ "slices"
"sync"
+ "sync/atomic"
"time"
"github.com/gofrs/uuid"
@@ -61,28 +65,38 @@ type taskSchedule struct {
}
var (
- change_mx = &sync.Mutex{}
- loadTasks = false // if true, tasks are reloaded from the database on next run
- loadCounter int = 0 // number of times tasks were loaded - used to check whether tasks were reloaded during execution
- nextExecutionUnix int64 = 0 // unix time of next (earliest) task to run
- tasks []task // all tasks
- OsExit chan os.Signal = make(chan os.Signal)
+ change_mx = &sync.Mutex{}
+ loadTasks = true // if true, tasks are reloaded from the database on next run
+ loadCounter int = 0 // number of times tasks were loaded - used to check whether tasks were reloaded during execution
+ nextExecutionUnix int64 = 0 // unix time of next (earliest) task to run
+ oneDayInSeconds int64 = 60 * 60 * 24
+ tasks []task // all tasks
+ tasksDisabledMirrorMode []string = []string{"adminMails", "backupRun", "mailAttach", "mailRetrieve", "mailSend", "restExecute"}
+ OsExit chan os.Signal = make(chan os.Signal)
// main loop
- loopInterval = time.Second * time.Duration(1) // loop interval
- loopIntervalStartWait = time.Second * time.Duration(10) // loop waits at start
- loopStopping = false // loop is stopping
+ loopInterval = time.Second * time.Duration(1) // loop interval
+ loopIntervalStartWait = time.Second * time.Duration(10) // loop waits at start
+ loopStopping atomic.Bool // loop is stopping
)
func Start() {
- change_mx.Lock()
- loadTasks = true
- change_mx.Unlock()
+ time.Sleep(loopIntervalStartWait)
+ log.Info("scheduler", "started")
+
+ for {
+ time.Sleep(loopInterval)
+ if loopStopping.Load() {
+ log.Info("scheduler", "stopped")
+ return
+ }
+ if err := runTasksBySchedule(); err != nil {
+ log.Error("scheduler", "failed to start tasks", err)
+ }
+ }
}
func Stop() {
- change_mx.Lock()
- loopStopping = true
- change_mx.Unlock()
+ loopStopping.Store(true)
log.Info("scheduler", "stopping")
}
@@ -98,23 +112,6 @@ func init() {
}
}
}()
-
- // main loop
- go func() {
- time.Sleep(loopIntervalStartWait)
- log.Info("scheduler", "started")
-
- for {
- time.Sleep(loopInterval)
- if loopStopping {
- log.Info("scheduler", "stopped")
- return
- }
- if err := runTasksBySchedule(); err != nil {
- log.Error("scheduler", "failed to start tasks", err)
- }
- }
- }()
}
// start tasks which schedules are due
@@ -281,7 +278,7 @@ func load() error {
tasks = nil
// get system tasks and their states
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := db.Pool.Query(context.Background(), `
SELECT t.name, t.embedded_only, t.interval_seconds,
t.cluster_master_only, s.id, s.date_attempt, ns.date_attempt
FROM instance.task AS t
@@ -314,6 +311,9 @@ func load() error {
if embeddedOnly && !config.File.Db.Embedded {
continue
}
+ if config.File.Mirror && slices.Contains(tasksDisabledMirrorMode, t.name) {
+ continue
+ }
// for tasks that all nodes have to execute, get node specific schedules
if !s.clusterMasterOnly {
@@ -338,6 +338,9 @@ func load() error {
t.runNextUnix = getNextRunFromSchedule(s)
switch t.name {
+ case "adminMails":
+ t.nameLog = "Admin notification mails"
+ t.fn = adminMails
case "backupRun":
t.nameLog = "Integrated full backups"
t.fn = backup.Run
@@ -356,9 +359,15 @@ func load() error {
case "cleanupFiles":
t.nameLog = "Cleanup of not-referenced files"
t.fn = cleanUpFiles
+ case "cleanupMailTraffic":
+ t.nameLog = "Cleanup of mail traffic entries"
+ t.fn = cleanupMailTraffic
case "clusterCheckIn":
t.nameLog = "Cluster node check-in to database"
t.fn = cluster.CheckInNode
+ case "dbOptimize":
+ t.nameLog = "Database optimization"
+ t.fn = dbOptimize
case "clusterProcessEvents":
t.nameLog = "Cluster event processing"
t.fn = clusterProcessEvents
@@ -370,16 +379,22 @@ func load() error {
t.fn = ldap_import.RunAll
case "mailAttach":
t.nameLog = "Email attachment transfer"
- t.fn = attach.DoAll
+ t.fn = mail_attach.DoAll
case "mailRetrieve":
t.nameLog = "Email retrieval"
- t.fn = receive.DoAll
+ t.fn = mail_receive.DoAll
case "mailSend":
t.nameLog = "Email dispatch"
- t.fn = send.DoAll
+ t.fn = mail_send.DoAll
case "repoCheck":
t.nameLog = "Check for updates from repository"
t.fn = repo.Update
+ case "restExecute":
+ t.nameLog = "REST call execution"
+ t.fn = rest_send.DoAll
+ case "systemMsgMaintenance":
+ t.nameLog = "Set maintenance mode after system message"
+ t.fn = systemMsgMaintenance
case "updateCheck":
t.nameLog = "Check for platform updates from official website"
t.fn = updateCheck
@@ -394,7 +409,7 @@ func load() error {
if cache.GetIsClusterMaster() {
pgFunctionIdMapTasks := make(map[uuid.UUID]task)
- rows, err = db.Pool.Query(db.Ctx, `
+ rows, err = db.Pool.Query(context.Background(), `
SELECT f.name, fs.pg_function_id, fs.id, fs.at_hour, fs.at_minute,
fs.at_second, fs.at_day, fs.interval_type, fs.interval_value,
s.id, s.date_attempt
@@ -447,22 +462,24 @@ func load() error {
// helpers
func runPgFunction(pgFunctionId uuid.UUID) error {
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutPgFunc)
+ defer ctxCanc()
- tx, err := db.Pool.Begin(db.Ctx)
+ tx, err := db.Pool.Begin(ctx)
if err != nil {
return err
}
- defer tx.Rollback(db.Ctx)
+ defer tx.Rollback(ctx)
- modName, fncName, _, _, err := schema.GetPgFunctionDetailsById_tx(tx, pgFunctionId)
+ modName, fncName, _, _, err := schema.GetPgFunctionDetailsById_tx(ctx, tx, pgFunctionId)
if err != nil {
return err
}
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`SELECT "%s"."%s"()`, modName, fncName)); err != nil {
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`SELECT "%s"."%s"()`, modName, fncName)); err != nil {
return err
}
- return tx.Commit(db.Ctx)
+ return tx.Commit(ctx)
}
// get unix time and index of task schedule to run next
@@ -475,7 +492,7 @@ func getNextRunScheduleFromTask(t task) (int64, uuid.UUID) {
nextRunSchedule := getNextRunFromSchedule(s)
// apply schedule if
- // * next planned run is stoppped (-1)
+ // * next planned run is stopped (-1)
// * or this schedule is active and earlier than previous schedule
if nextRun == -1 || (nextRunSchedule != -1 && nextRunSchedule < nextRun) {
nextRun = nextRunSchedule
@@ -506,6 +523,7 @@ func getNextRunFromSchedule(s taskSchedule) int64 {
}
// more complex intervals, add dates and set to target day/time
+ // as no timezone is defined, tm will be in local time, which will affect all date operations
tm := time.Unix(s.runLastUnix, 0)
switch s.intervalType {
@@ -525,23 +543,23 @@ func getNextRunFromSchedule(s taskSchedule) int64 {
targetDay := tm.Day()
targetMonth := tm.Month()
+ // overwrite invalid inputs
+ s.atDay = schema.GetValidAtDay(s.intervalType, s.atDay)
+
switch s.intervalType {
case "weeks":
- // 6 is highest allowed value (0 = sunday, 6 = saturday)
- if s.atDay <= 6 {
- // add difference between target weekday and current weekday to target day
- targetDay += s.atDay - int(tm.Weekday())
- }
+			// add difference between target weekday and last-run weekday to target day
+ targetDay += s.atDay - int(tm.Weekday())
case "months":
// set specified day
targetDay = s.atDay
case "years":
// set to month january, adding days as specified (70 days will end up in March)
- targetMonth = 1
targetDay = s.atDay
+ targetMonth = 1
}
- // apply target month/day and time
+ // apply target month/day and time at local time
tm = time.Date(tm.Year(), targetMonth, targetDay, s.atHour, s.atMinute,
s.atSecond, 0, tm.Location())
@@ -560,7 +578,7 @@ func storeTaskDate(t task, dateContent string) error {
if t.taskSchedule.clusterMasterOnly {
// store cluster master schedule meta globally
- _, err := db.Pool.Exec(db.Ctx, fmt.Sprintf(`
+ _, err := db.Pool.Exec(context.Background(), fmt.Sprintf(`
UPDATE instance.schedule
SET date_%s = $1
WHERE id = $2
@@ -569,7 +587,7 @@ func storeTaskDate(t task, dateContent string) error {
} else {
// store node schedule meta independently
// insert is always 'attempt', while update can be either
- _, err := db.Pool.Exec(db.Ctx, fmt.Sprintf(`
+ _, err := db.Pool.Exec(context.Background(), fmt.Sprintf(`
INSERT INTO instance_cluster.node_schedule
(node_id, schedule_id, date_attempt, date_success)
VALUES ($1,$2,$3,0)
@@ -581,7 +599,7 @@ func storeTaskDate(t task, dateContent string) error {
}
// PG function schedule task, schedule meta always stored globally
- _, err := db.Pool.Exec(db.Ctx, fmt.Sprintf(`
+ _, err := db.Pool.Exec(context.Background(), fmt.Sprintf(`
UPDATE instance.schedule
SET date_%s = $1
WHERE id = $2
diff --git a/scheduler/scheduler_adminMails.go b/scheduler/scheduler_adminMails.go
new file mode 100644
index 00000000..7081e4d2
--- /dev/null
+++ b/scheduler/scheduler_adminMails.go
@@ -0,0 +1,163 @@
+package scheduler
+
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+ "r3/config"
+ "r3/db"
+ "r3/log"
+ "r3/tools"
+ "slices"
+ "strings"
+ "time"
+
+ "github.com/jackc/pgx/v5"
+)
+
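+// sends admin notification mails for upcoming expirations (license, OAuth clients) based on instance.admin_mail definitions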
+func adminMails() error {
+
+ var templates = struct {
+ intro string
+ licenseExpirationBody string
+ licenseExpirationSubject string
+ oauthClientExpirationBody string
+ oauthClientExpirationSubject string
+ }{
+ intro: `You are receiving this message because your email address has been added to the REI3 admin notification list.
+ To change this setting, please visit your REI3 instance: {URL}
+`,
+ licenseExpirationBody: `Your license expires on: {DATE}
+`,
+ licenseExpirationSubject: `Your REI3 Professional license is about to expire`,
+ oauthClientExpirationBody: `Your OAuth client expires on: {DATE}
+`,
+ oauthClientExpirationSubject: `Your REI3 OAuth client is about to expire`,
+ }
+
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutSysTask)
+ defer ctxCanc()
+
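+ // sends one notification mail to all configured admin receivers and records the send date for the given reason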
+ var sendMail = func(subject string, body string, dateExpiration int64, reason string) error {
+ // get mail receivers
+ if config.GetString("adminMails") == "" {
+ log.Warning("server", "cannot send admin notification mails", fmt.Errorf("no mail receivers defined"))
+ return nil
+ }
+
+ var toList []string
+ if err := json.Unmarshal([]byte(config.GetString("adminMails")), &toList); err != nil {
+ return fmt.Errorf("cannot read admin mail receivers, %s", err.Error())
+ }
+
+ if len(toList) == 0 {
+ log.Warning("server", "cannot send admin notification mails", fmt.Errorf("no mail receivers defined"))
+ return nil
+ }
+
+ // apply intro
+ body = fmt.Sprintf("%s%s", templates.intro, body)
+
+ // replace known placeholders
+ body = strings.Replace(body, "{URL}", config.GetString("publicHostName"), -1)
+ body = strings.Replace(body, "{DATE}", time.Unix(dateExpiration, 0).String(), -1)
+
+ tx, err := db.Pool.Begin(ctx)
+ if err != nil {
+ return err
+ }
+ defer tx.Rollback(ctx)
+
+ if _, err := tx.Exec(ctx, `
+ SELECT instance.mail_send($1,$2,$3)
+ `, subject, body, strings.Join(toList, ",")); err != nil {
+ return err
+ }
+
+ if _, err := tx.Exec(ctx, `
+ UPDATE instance.admin_mail
+ SET date_last_sent = DATE_PART('EPOCH',CURRENT_DATE)
+ WHERE reason = $1
+ `, reason); err != nil {
+ return err
+ }
+ return tx.Commit(ctx)
+ }
+
+ // collect admin mail definitions
+ type adminMail struct {
+ reason string
+ daysBeforeList []int64
+ dateLastSent int64
+ }
+ adminMails := make([]adminMail, 0)
+
+ rows, err := db.Pool.Query(ctx, `
+ SELECT reason, days_before, date_last_sent
+ FROM instance.admin_mail
+ `)
+ if err != nil {
+ return err
+ }
+
+ for rows.Next() {
+ var am adminMail
+ if err := rows.Scan(&am.reason, &am.daysBeforeList, &am.dateLastSent); err != nil {
+ return err
+ }
+ adminMails = append(adminMails, am)
+ }
+ rows.Close()
+
+ // collect earliest expiring OAuth client
+ var dateExpirationOauth int64 = -1
+ if err := db.Pool.QueryRow(ctx, `
+ SELECT date_expiry
+ FROM instance.oauth_client
+ WHERE date_expiry > DATE_PART('EPOCH',CURRENT_DATE)
+ ORDER BY date_expiry ASC
+ LIMIT 1
+ `).Scan(&dateExpirationOauth); err != nil && err != pgx.ErrNoRows {
+ return err
+ }
+
+ // send admin mails
+ now := tools.GetTimeUnix()
+ reasonsSent := make([]string, 0) // avoid multiple mails for the same notification reason
+
+ for _, am := range adminMails {
+ for _, daysBefore := range am.daysBeforeList {
+ if slices.Contains(reasonsSent, am.reason) {
+ continue
+ }
+
+ var body, subject string
+ var dateExpiration int64
+
+ switch am.reason {
+ case "licenseExpiration":
+ if !config.GetLicenseUsed() {
+ continue
+ }
+ dateExpiration = config.GetLicenseValidUntil()
+ subject = templates.licenseExpirationSubject
+ body = templates.licenseExpirationBody
+
+ case "oauthClientExpiration":
+ if dateExpirationOauth == -1 {
+ continue
+ }
+ dateExpiration = dateExpirationOauth
+ subject = templates.oauthClientExpirationSubject
+ body = templates.oauthClientExpirationBody
+ }
+
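+ // send only once the configured lead time before expiration is reached and no mail was sent for this window yet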
+ dateNotifySend := dateExpiration - (daysBefore * oneDayInSeconds)
+ if now < dateNotifySend || am.dateLastSent > dateNotifySend {
+ continue
+ }
+ if err := sendMail(subject, body, dateExpiration, am.reason); err != nil {
+ return err
+ }
+ reasonsSent = append(reasonsSent, am.reason)
+ }
+ }
+ return nil
+}
diff --git a/scheduler/scheduler_cleanup.go b/scheduler/scheduler_cleanup.go
index 07cd9c92..f3290588 100644
--- a/scheduler/scheduler_cleanup.go
+++ b/scheduler/scheduler_cleanup.go
@@ -1,6 +1,7 @@
package scheduler
import (
+ "context"
"fmt"
"os"
"path/filepath"
@@ -14,7 +15,14 @@ import (
"github.com/gofrs/uuid"
)
-var oneDayInSeconds int64 = 60 * 60 * 24
+// optimize DB
+func dbOptimize() error {
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutDbTask)
+ defer ctxCanc()
+
+ _, err := db.Pool.Exec(ctx, `VACUUM`)
+ return err
+}
// deletes files older than 1 day from temporary directory
func cleanupTemp() error {
@@ -44,21 +52,36 @@ func cleanupTemp() error {
// deletes expired logs
func cleanupLogs() error {
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutDbTask)
+ defer ctxCanc()
keepForDays := config.GetUint64("logsKeepDays")
if keepForDays == 0 {
return nil
}
- deleteOlderMilli := (tools.GetTimeUnix() - (oneDayInSeconds * int64(keepForDays))) * 1000
-
- if _, err := db.Pool.Exec(db.Ctx, `
+ _, err := db.Pool.Exec(ctx, `
DELETE FROM instance.log
WHERE date_milli < $1
- `, deleteOlderMilli); err != nil {
- return err
+ `, (tools.GetTimeUnix()-(oneDayInSeconds*int64(keepForDays)))*1000)
+ return err
+}
+
+// deletes expired mail traffic entries
+func cleanupMailTraffic() error {
+ keepForDays := config.GetUint64("mailTrafficKeepDays")
+ if keepForDays == 0 {
+ return nil
}
- return nil
+
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutDbTask)
+ defer ctxCanc()
+
+ _, err := db.Pool.Exec(ctx, `
+ DELETE FROM instance.mail_traffic
+ WHERE date < $1
+ `, tools.GetTimeUnix()-(oneDayInSeconds*int64(keepForDays)))
+ return err
}
// removes files that were deleted from their attribute or that are not assigned to a record
@@ -69,7 +92,7 @@ func cleanUpFiles() error {
// delete file record assignments, if file link was deleted and retention has been reached
attributeIdsFile := make([]uuid.UUID, 0)
- if err := db.Pool.QueryRow(db.Ctx, `
+ if err := db.Pool.QueryRow(context.Background(), `
SELECT ARRAY_AGG(id)
FROM app.attribute
WHERE content = 'files'
@@ -78,7 +101,7 @@ func cleanUpFiles() error {
}
for _, atrId := range attributeIdsFile {
- if _, err := db.Pool.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := db.Pool.Exec(context.Background(), fmt.Sprintf(`
DELETE FROM instance_file."%s"
WHERE date_delete IS NOT NULL
AND date_delete < $1
@@ -100,7 +123,7 @@ func cleanUpFiles() error {
removeCnt := 0
fileVersions := make([]fileVersion, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := db.Pool.Query(context.Background(), `
SELECT v.file_id, v.version
FROM instance.file_version AS v
@@ -152,7 +175,7 @@ func cleanUpFiles() error {
}
}
- if _, err := db.Pool.Exec(db.Ctx, `
+ if _, err := db.Pool.Exec(context.Background(), `
DELETE FROM instance.file_version
WHERE file_id = $1
AND version = $2
@@ -179,7 +202,7 @@ func cleanUpFiles() error {
// delete files that no record references
for {
fileIds := make([]uuid.UUID, 0)
- if err := db.Pool.QueryRow(db.Ctx, `
+ if err := db.Pool.QueryRow(context.Background(), `
SELECT ARRAY_AGG(id)
FROM instance.file
WHERE ref_counter = 0
@@ -194,7 +217,7 @@ func cleanUpFiles() error {
for _, fileId := range fileIds {
versions := make([]int64, 0)
- if err := db.Pool.QueryRow(db.Ctx, `
+ if err := db.Pool.QueryRow(context.Background(), `
SELECT ARRAY_AGG(version)
FROM instance.file_version
WHERE file_id = $1
@@ -223,7 +246,7 @@ func cleanUpFiles() error {
// either the file version existed on disk and could be deleted, or it didn't exist
// in either case we delete the file reference
- if _, err := db.Pool.Exec(db.Ctx, `
+ if _, err := db.Pool.Exec(context.Background(), `
DELETE FROM instance.file_version
WHERE file_id = $1
AND version = $2
@@ -243,7 +266,7 @@ func cleanUpFiles() error {
}
// delete references of files that have no versions left
- tag, err := db.Pool.Exec(db.Ctx, `
+ tag, err := db.Pool.Exec(context.Background(), `
DELETE FROM instance.file AS f
WHERE 0 = (
SELECT COUNT(*)
diff --git a/scheduler/scheduler_cluster.go b/scheduler/scheduler_cluster.go
index 6757336c..23847895 100644
--- a/scheduler/scheduler_cluster.go
+++ b/scheduler/scheduler_cluster.go
@@ -1,6 +1,7 @@
package scheduler
import (
+ "context"
"encoding/json"
"fmt"
"r3/cache"
@@ -9,13 +10,27 @@ import (
"r3/log"
"r3/types"
"syscall"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
)
// collect cluster events from shared database for node to react to
func clusterProcessEvents() error {
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutSysTask)
+ defer ctxCanc()
+
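+ // handle event collection, deletion and reactions inside one transaction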
+ tx, err := db.Pool.Begin(ctx)
+ if err != nil {
+ return err
+ }
+ defer tx.Rollback(ctx)
- rows, err := db.Pool.Query(db.Ctx, `
- SELECT content, payload
+ rows, err := tx.Query(ctx, `
+ SELECT content, payload,
+ COALESCE(target_address, ''),
+ COALESCE(target_device, 0),
+ COALESCE(target_login_id, 0)
FROM instance_cluster.node_event
WHERE node_id = $1
`, cache.GetNodeId())
@@ -26,7 +41,9 @@ func clusterProcessEvents() error {
events := make([]types.ClusterEvent, 0)
for rows.Next() {
var e types.ClusterEvent
- if err := rows.Scan(&e.Content, &e.Payload); err != nil {
+ if err := rows.Scan(&e.Content, &e.Payload, &e.Target.Address,
+ &e.Target.Device, &e.Target.LoginId); err != nil {
+
return err
}
events = append(events, e)
@@ -35,11 +52,11 @@ func clusterProcessEvents() error {
// no events, nothing to do
if len(events) == 0 {
- return nil
+ return tx.Commit(ctx)
}
// delete collected events
- if _, err := db.Pool.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM instance_cluster.node_event
WHERE node_id = $1
`, cache.GetNodeId()); err != nil {
@@ -47,76 +64,102 @@ func clusterProcessEvents() error {
}
// react to collected events
+ collectionUpdates := make([]types.ClusterEventCollectionUpdated, 0)
+
for _, e := range events {
- log.Info("cluster", fmt.Sprintf("node is reacting to event '%s'", e.Content))
-
- switch e.Content {
- case "collectionUpdated":
- var p types.ClusterEventCollectionUpdated
- if err := json.Unmarshal(e.Payload, &p); err != nil {
- return err
- }
- err = cluster.CollectionUpdated(p.CollectionId, p.LoginIds)
- case "configChanged":
- var p types.ClusterEventConfigChanged
- if err := json.Unmarshal(e.Payload, &p); err != nil {
- return err
- }
- err = cluster.ConfigChanged(false, true, p.SwitchToMaintenance)
- case "filesCopied":
- var p types.ClusterEventFilesCopied
- if err := json.Unmarshal(e.Payload, &p); err != nil {
- return err
- }
- err = cluster.FilesCopied(false, p.LoginId,
- p.AttributeId, p.FileIds, p.RecordId)
- case "fileRequested":
- var p types.ClusterEventFileRequested
- if err := json.Unmarshal(e.Payload, &p); err != nil {
- return err
- }
- err = cluster.FileRequested(false, p.LoginId, p.AttributeId,
- p.FileId, p.FileHash, p.FileName, p.ChooseApp)
- case "loginDisabled":
- var p types.ClusterEventLogin
- if err := json.Unmarshal(e.Payload, &p); err != nil {
- return err
- }
- err = cluster.LoginDisabled(false, p.LoginId)
- case "loginReauthorized":
- var p types.ClusterEventLogin
- if err := json.Unmarshal(e.Payload, &p); err != nil {
- return err
- }
- err = cluster.LoginReauthorized(false, p.LoginId)
- case "loginReauthorizedAll":
- err = cluster.LoginReauthorizedAll(false)
- case "masterAssigned":
- var p types.ClusterEventMasterAssigned
- if err := json.Unmarshal(e.Payload, &p); err != nil {
- return err
- }
- err = cluster.MasterAssigned(p.State)
- case "schemaChanged":
- var p types.ClusterEventSchemaChanged
- if err := json.Unmarshal(e.Payload, &p); err != nil {
- return err
- }
- err = cluster.SchemaChanged(false, p.NewVersion, p.ModuleIdsUpdateOnly)
- case "tasksChanged":
- err = cluster.TasksChanged(false)
- case "taskTriggered":
- var p types.ClusterEventTaskTriggered
- if err := json.Unmarshal(e.Payload, &p); err != nil {
- return err
- }
- runTaskDirectly(p.TaskName, p.PgFunctionId, p.PgFunctionScheduleId)
- case "shutdownTriggered":
- OsExit <- syscall.SIGTERM
+ if err := clusterProcessEvent(ctx, tx, e, &collectionUpdates); err != nil {
+ return err
+ }
+ }
+
+ // apply collection updates
+ cluster.CollectionsUpdated(collectionUpdates)
+
+ return tx.Commit(ctx)
+}
+
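+// processes a single cluster event within the given transaction; collection updates are only gathered here and applied in bulk by the caller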
+func clusterProcessEvent(ctx context.Context, tx pgx.Tx, e types.ClusterEvent, collectionUpdates *[]types.ClusterEventCollectionUpdated) error {
+
+ log.Info("cluster", fmt.Sprintf("node is reacting to event '%s'", e.Content))
+ var err error
+ var jsonPayload []byte
+
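+ // payloads delivered as strings are converted to raw bytes for JSON unmarshalling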
+ switch v := e.Payload.(type) {
+ case string:
+ jsonPayload = []byte(v)
+ }
+
+ switch e.Content {
+ case "clientEventsChanged":
+ err = cluster.ClientEventsChanged_tx(ctx, tx, false, e.Target.Address, e.Target.LoginId)
+ case "collectionUpdated":
+ var p types.ClusterEventCollectionUpdated
+ if err := json.Unmarshal(jsonPayload, &p); err != nil {
+ return err
+ }
+ *collectionUpdates = append(*collectionUpdates, p)
+ err = nil
+ case "configChanged":
+ var switchToMaintenance bool
+ if err := json.Unmarshal(jsonPayload, &switchToMaintenance); err != nil {
+ return err
+ }
+ err = cluster.ConfigChanged_tx(ctx, tx, false, true, switchToMaintenance)
+ case "filesCopied":
+ var p types.ClusterEventFilesCopied
+ if err := json.Unmarshal(jsonPayload, &p); err != nil {
+ return err
+ }
+ err = cluster.FilesCopied_tx(ctx, tx, false, e.Target.Address,
+ e.Target.LoginId, p.AttributeId, p.FileIds, p.RecordId)
+ case "fileRequested":
+ var p types.ClusterEventFileRequested
+ if err := json.Unmarshal(jsonPayload, &p); err != nil {
+ return err
+ }
+ err = cluster.FileRequested_tx(ctx, tx, false, e.Target.Address, e.Target.LoginId,
+ p.AttributeId, p.FileId, p.FileHash, p.FileName, p.ChooseApp)
+ case "jsFunctionCalled":
+ var p types.ClusterEventJsFunctionCalled
+ if err := json.Unmarshal(jsonPayload, &p); err != nil {
+ return err
+ }
+ err = cluster.JsFunctionCalled_tx(ctx, tx, false, e.Target.Address,
+ e.Target.LoginId, p.ModuleId, p.JsFunctionId, p.Arguments)
+ case "keystrokesRequested":
+ var keystrokes string
+ if err := json.Unmarshal(jsonPayload, &keystrokes); err != nil {
+ return err
+ }
+ err = cluster.KeystrokesRequested_tx(ctx, tx, false, e.Target.Address, e.Target.LoginId, keystrokes)
+ case "loginDisabled":
+ err = cluster.LoginDisabled_tx(ctx, tx, false, e.Target.LoginId)
+ case "loginReauthorized":
+ err = cluster.LoginReauthorized_tx(ctx, tx, false, e.Target.LoginId)
+ case "loginReauthorizedAll":
+ err = cluster.LoginReauthorizedAll_tx(ctx, tx, false)
+ case "masterAssigned":
+ var p types.ClusterEventMasterAssigned
+ if err := json.Unmarshal(jsonPayload, &p); err != nil {
+ return err
+ }
+ err = cluster.MasterAssigned(p.State)
+ case "schemaChanged":
+ var moduleIds []uuid.UUID
+ if err := json.Unmarshal(jsonPayload, &moduleIds); err != nil {
+ return err
}
- if err != nil {
+ err = cluster.SchemaChanged_tx(ctx, tx, false, moduleIds)
+ case "tasksChanged":
+ err = cluster.TasksChanged_tx(ctx, tx, false)
+ case "taskTriggered":
+ var p types.ClusterEventTaskTriggered
+ if err := json.Unmarshal(jsonPayload, &p); err != nil {
return err
}
+ runTaskDirectly(p.TaskName, p.PgFunctionId, p.PgFunctionScheduleId)
+ case "shutdownTriggered":
+ OsExit <- syscall.SIGTERM
}
- return nil
+ return err
}
diff --git a/scheduler/scheduler_systemMsg.go b/scheduler/scheduler_systemMsg.go
new file mode 100644
index 00000000..d0742e13
--- /dev/null
+++ b/scheduler/scheduler_systemMsg.go
@@ -0,0 +1,43 @@
+package scheduler
+
+import (
+ "context"
+ "r3/cluster"
+ "r3/config"
+ "r3/db"
+ "r3/tools"
+)
+
+// switch to maintenance mode after system message expired
+// if feature is enabled and system is not already in maintenance mode
+func systemMsgMaintenance() error {
+ date1 := config.GetUint64("systemMsgDate1")
+ now := uint64(tools.GetTimeUnix())
+ switchToMaintenance := config.GetUint64("systemMsgMaintenance") == 1
+ systemInMaintenance := config.GetUint64("productionMode") == 0
+
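+ // only act if date1 is set and in the past, maintenance switch-over is enabled and the system is not already in maintenance mode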
+ if date1 != 0 && date1 < now && switchToMaintenance && !systemInMaintenance {
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutSysTask)
+ defer ctxCanc()
+
+ tx, err := db.Pool.Begin(ctx)
+ if err != nil {
+ return err
+ }
+ defer tx.Rollback(ctx)
+
+ if err := config.SetUint64_tx(ctx, tx, "systemMsgMaintenance", 0); err != nil {
+ return err
+ }
+ if err := config.SetUint64_tx(ctx, tx, "productionMode", 0); err != nil {
+ return err
+ }
+ if err := cluster.ConfigChanged_tx(ctx, tx, true, false, true); err != nil {
+ return err
+ }
+ if err := tx.Commit(ctx); err != nil {
+ return err
+ }
+ }
+ return nil
+}
diff --git a/scheduler/scheduler_update.go b/scheduler/scheduler_update.go
index 6e5f6380..cc1ea6da 100644
--- a/scheduler/scheduler_update.go
+++ b/scheduler/scheduler_update.go
@@ -1,6 +1,7 @@
package scheduler
import (
+ "context"
"encoding/json"
"fmt"
"io"
@@ -8,7 +9,6 @@ import (
"r3/config"
"r3/db"
"r3/log"
- "time"
)
func updateCheck() error {
@@ -16,13 +16,13 @@ func updateCheck() error {
var check struct {
Version string `json:"version"`
}
- appVersion, _, _, _ := config.GetAppVersions()
- url := fmt.Sprintf("%s?old=%s", config.GetString("updateCheckUrl"), appVersion)
+ url := fmt.Sprintf("%s?old=%s", config.GetString("updateCheckUrl"), config.GetAppVersion().Full)
log.Info("server", fmt.Sprintf("starting update check at '%s'", url))
- httpClient := http.Client{
- Timeout: time.Second * 10,
+ httpClient, err := config.GetHttpClient(false, 10)
+ if err != nil {
+ return err
}
httpReq, err := http.NewRequest(http.MethodGet, url, nil)
@@ -45,16 +45,19 @@ func updateCheck() error {
return err
}
- tx, err := db.Pool.Begin(db.Ctx)
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutSysTask)
+ defer ctxCanc()
+
+ tx, err := db.Pool.Begin(ctx)
if err != nil {
return err
}
- defer tx.Rollback(db.Ctx)
+ defer tx.Rollback(ctx)
- if err := config.SetString_tx(tx, "updateCheckVersion", check.Version); err != nil {
+ if err := config.SetString_tx(ctx, tx, "updateCheckVersion", check.Version); err != nil {
return err
}
- if err := tx.Commit(db.Ctx); err != nil {
+ if err := tx.Commit(ctx); err != nil {
return err
}
diff --git a/schema/api/api.go b/schema/api/api.go
index 0769c33e..cd286577 100644
--- a/schema/api/api.go
+++ b/schema/api/api.go
@@ -1,9 +1,9 @@
package api
import (
+ "context"
"errors"
"fmt"
- "r3/db"
"r3/db/check"
"r3/schema"
"r3/schema/column"
@@ -15,9 +15,9 @@ import (
"github.com/jackc/pgx/v5"
)
-func Copy_tx(tx pgx.Tx, id uuid.UUID) error {
+func Copy_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
- apis, err := Get(uuid.Nil, id)
+ apis, err := Get_tx(ctx, tx, uuid.Nil, id)
if err != nil {
return err
}
@@ -28,7 +28,7 @@ func Copy_tx(tx pgx.Tx, id uuid.UUID) error {
api := apis[0]
// get new version number (latest + 1)
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT MAX(version) + 1
FROM app.api
WHERE module_id = $1
@@ -53,15 +53,15 @@ func Copy_tx(tx pgx.Tx, id uuid.UUID) error {
if err != nil {
return err
}
- return Set_tx(tx, api)
+ return Set_tx(ctx, tx, api)
}
-func Del_tx(tx pgx.Tx, id uuid.UUID) error {
- _, err := tx.Exec(db.Ctx, `DELETE FROM app.api WHERE id = $1`, id)
+func Del_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
+ _, err := tx.Exec(ctx, `DELETE FROM app.api WHERE id = $1`, id)
return err
}
-func Get(moduleId uuid.UUID, id uuid.UUID) ([]types.Api, error) {
+func Get_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID) ([]types.Api, error) {
apis := make([]types.Api, 0)
sqlWheres := []string{}
@@ -75,7 +75,7 @@ func Get(moduleId uuid.UUID, id uuid.UUID) ([]types.Api, error) {
sqlValues = append(sqlValues, id)
}
- rows, err := db.Pool.Query(db.Ctx, fmt.Sprintf(`
+ rows, err := tx.Query(ctx, fmt.Sprintf(`
SELECT id, module_id, name, comment, has_delete, has_get,
has_post, limit_def, limit_max, verbose_def, version
FROM app.api
@@ -101,11 +101,11 @@ func Get(moduleId uuid.UUID, id uuid.UUID) ([]types.Api, error) {
// collect query and columns
for i, a := range apis {
- a.Query, err = query.Get("api", a.Id, 0, 0)
+ a.Query, err = query.Get_tx(ctx, tx, "api", a.Id, 0, 0, 0)
if err != nil {
return apis, err
}
- a.Columns, err = column.Get("api", a.Id)
+ a.Columns, err = column.Get_tx(ctx, tx, "api", a.Id)
if err != nil {
return apis, err
}
@@ -114,19 +114,19 @@ func Get(moduleId uuid.UUID, id uuid.UUID) ([]types.Api, error) {
return apis, nil
}
-func Set_tx(tx pgx.Tx, api types.Api) error {
+func Set_tx(ctx context.Context, tx pgx.Tx, api types.Api) error {
if err := check.DbIdentifier(api.Name); err != nil {
return err
}
- known, err := schema.CheckCreateId_tx(tx, &api.Id, "api", "id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &api.Id, "api", "id")
if err != nil {
return err
}
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.api
SET name = $1, comment = $2, has_delete = $3, has_get = $4,
has_post = $5, limit_def = $6, limit_max = $7, verbose_def = $8,
@@ -138,7 +138,7 @@ func Set_tx(tx pgx.Tx, api types.Api) error {
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.api (id, module_id, name, comment, has_delete,
has_get, has_post, limit_def, limit_max, verbose_def, version)
VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11)
@@ -148,8 +148,8 @@ func Set_tx(tx pgx.Tx, api types.Api) error {
return err
}
}
- if err := query.Set_tx(tx, "api", api.Id, 0, 0, api.Query); err != nil {
+ if err := query.Set_tx(ctx, tx, "api", api.Id, 0, 0, 0, api.Query); err != nil {
return err
}
- return column.Set_tx(tx, "api", api.Id, api.Columns)
+ return column.Set_tx(ctx, tx, "api", api.Id, api.Columns)
}
diff --git a/schema/article/article.go b/schema/article/article.go
index df3a0e32..dd0bee9c 100644
--- a/schema/article/article.go
+++ b/schema/article/article.go
@@ -1,8 +1,8 @@
package article
import (
+ "context"
"errors"
- "r3/db"
"r3/schema"
"r3/schema/caption"
"r3/types"
@@ -11,17 +11,17 @@ import (
"github.com/jackc/pgx/v5"
)
-func Assign_tx(tx pgx.Tx, target string, targetId uuid.UUID, articleIds []uuid.UUID) error {
+func Assign_tx(ctx context.Context, tx pgx.Tx, target string, targetId uuid.UUID, articleIds []uuid.UUID) error {
switch target {
case "form":
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.article_form
WHERE form_id = $1
`, targetId); err != nil {
return err
}
for i, articleId := range articleIds {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.article_form (article_id, form_id, position)
VALUES ($1, $2, $3)
`, articleId, targetId, i); err != nil {
@@ -29,14 +29,14 @@ func Assign_tx(tx pgx.Tx, target string, targetId uuid.UUID, articleIds []uuid.U
}
}
case "module":
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.article_help
WHERE module_id = $1
`, targetId); err != nil {
return err
}
for i, articleId := range articleIds {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.article_help (article_id, module_id, position)
VALUES ($1, $2, $3)
`, articleId, targetId, i); err != nil {
@@ -49,19 +49,19 @@ func Assign_tx(tx pgx.Tx, target string, targetId uuid.UUID, articleIds []uuid.U
return nil
}
-func Del_tx(tx pgx.Tx, id uuid.UUID) error {
- _, err := tx.Exec(db.Ctx, `
+func Del_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
+ _, err := tx.Exec(ctx, `
DELETE FROM app.article
WHERE id = $1
`, id)
return err
}
-func Get(moduleId uuid.UUID) ([]types.Article, error) {
+func Get_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID) ([]types.Article, error) {
articles := make([]types.Article, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT id, name
FROM app.article
WHERE module_id = $1
@@ -70,42 +70,39 @@ func Get(moduleId uuid.UUID) ([]types.Article, error) {
if err != nil {
return articles, err
}
+ defer rows.Close()
for rows.Next() {
var a types.Article
if err := rows.Scan(&a.Id, &a.Name); err != nil {
- rows.Close()
return articles, err
}
a.ModuleId = moduleId
articles = append(articles, a)
}
- rows.Close()
- // get title/body captions
for i, a := range articles {
- a.Captions, err = caption.Get("article", a.Id, []string{"articleBody", "articleTitle"})
+ articles[i].Captions, err = caption.Get_tx(ctx, tx, "article", a.Id, []string{"articleBody", "articleTitle"})
if err != nil {
return articles, err
}
- articles[i] = a
}
return articles, nil
}
-func Set_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, name string, captions types.CaptionMap) error {
+func Set_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, name string, captions types.CaptionMap) error {
if name == "" {
return errors.New("missing name")
}
- known, err := schema.CheckCreateId_tx(tx, &id, "article", "id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &id, "article", "id")
if err != nil {
return err
}
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.article
SET name = $1
WHERE id = $2
@@ -113,7 +110,7 @@ func Set_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, name string, captions t
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.article (id, module_id, name)
VALUES ($1,$2,$3)
`, id, moduleId, name); err != nil {
@@ -122,5 +119,5 @@ func Set_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, name string, captions t
}
// set captions
- return caption.Set_tx(tx, id, captions)
+ return caption.Set_tx(ctx, tx, id, captions)
}
diff --git a/schema/attribute/attribute.go b/schema/attribute/attribute.go
index 4b0cd6ec..da9e2dba 100644
--- a/schema/attribute/attribute.go
+++ b/schema/attribute/attribute.go
@@ -1,17 +1,17 @@
package attribute
import (
+ "context"
"errors"
"fmt"
- "r3/compatible"
- "r3/db"
"r3/db/check"
"r3/schema"
"r3/schema/caption"
+ "r3/schema/compatible"
"r3/schema/pgFunction"
"r3/schema/pgIndex"
- "r3/tools"
"r3/types"
+ "slices"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
@@ -19,37 +19,37 @@ import (
)
var contentTypes = []string{"integer", "bigint", "numeric", "real",
- "double precision", "varchar", "text", "boolean", "uuid", "1:1",
- "n:1", "files"}
+ "double precision", "varchar", "text", "boolean", "regconfig", "uuid",
+ "1:1", "n:1", "files"}
-var contentUseTypes = []string{"default", "textarea",
- "richtext", "date", "datetime", "time", "color"}
+var contentUseTypes = []string{"default", "textarea", "richtext",
+ "date", "datetime", "time", "color", "iframe", "drawing", "barcode"}
var fkBreakActions = []string{"NO ACTION", "RESTRICT", "CASCADE", "SET NULL",
"SET DEFAULT"}
-func Del_tx(tx pgx.Tx, id uuid.UUID) error {
+func Del_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
- moduleName, relationName, name, content, err := schema.GetAttributeDetailsById_tx(tx, id)
+ moduleName, relationName, name, content, err := schema.GetAttributeDetailsById_tx(ctx, tx, id)
if err != nil {
return err
}
// delete FK index if relationship attribute
if schema.IsContentRelationship(content) {
- if err := pgIndex.DelAutoFkiForAttribute_tx(tx, id); err != nil {
+ if err := pgIndex.DelAutoFkiForAttribute_tx(ctx, tx, id); err != nil {
return err
}
}
// delete attribute database entities
if schema.IsContentFiles(content) {
- if err := FileRelationsDelete_tx(tx, id); err != nil {
+ if err := FileRelationsDelete_tx(ctx, tx, id); err != nil {
return err
}
} else {
// DROP COLUMN removes constraints if there
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
ALTER TABLE "%s"."%s"
DROP COLUMN "%s"
`, moduleName, relationName, name)); err != nil {
@@ -58,19 +58,19 @@ func Del_tx(tx pgx.Tx, id uuid.UUID) error {
}
// delete attribute reference
- _, err = tx.Exec(db.Ctx, `DELETE FROM app.attribute WHERE id = $1`, id)
+ _, err = tx.Exec(ctx, `DELETE FROM app.attribute WHERE id = $1`, id)
return err
}
-func Get(relationId uuid.UUID) ([]types.Attribute, error) {
+func Get_tx(ctx context.Context, tx pgx.Tx, relationId uuid.UUID) ([]types.Attribute, error) {
var onUpdateNull pgtype.Text
var onDeleteNull pgtype.Text
attributes := make([]types.Attribute, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT id, relationship_id, icon_id, name, content, content_use,
- length, nullable, encrypted, def, on_update, on_delete
+ length, length_fract, nullable, encrypted, def, on_update, on_delete
FROM app.attribute
WHERE relation_id = $1
ORDER BY CASE WHEN name = 'id' THEN 0 END, name ASC
@@ -78,11 +78,12 @@ func Get(relationId uuid.UUID) ([]types.Attribute, error) {
if err != nil {
return attributes, err
}
+ defer rows.Close()
for rows.Next() {
var atr types.Attribute
- if err := rows.Scan(&atr.Id, &atr.RelationshipId, &atr.IconId,
- &atr.Name, &atr.Content, &atr.ContentUse, &atr.Length, &atr.Nullable,
+ if err := rows.Scan(&atr.Id, &atr.RelationshipId, &atr.IconId, &atr.Name,
+ &atr.Content, &atr.ContentUse, &atr.Length, &atr.LengthFract, &atr.Nullable,
&atr.Encrypted, &atr.Def, &onUpdateNull, &onDeleteNull); err != nil {
return attributes, err
@@ -92,55 +93,48 @@ func Get(relationId uuid.UUID) ([]types.Attribute, error) {
atr.RelationId = relationId
attributes = append(attributes, atr)
}
- rows.Close()
- // get captions
for i, atr := range attributes {
- atr.Captions, err = caption.Get("attribute", atr.Id, []string{"attributeTitle"})
+ attributes[i].Captions, err = caption.Get_tx(ctx, tx, "attribute", atr.Id, []string{"attributeTitle"})
if err != nil {
return attributes, err
}
- attributes[i] = atr
}
return attributes, nil
}
-func Set_tx(tx pgx.Tx, relationId uuid.UUID, id uuid.UUID,
- relationshipId pgtype.UUID, iconId pgtype.UUID, name string,
- content string, contentUse string, length int, nullable bool,
- encrypted bool, def string, onUpdate string, onDelete string,
- captions types.CaptionMap) error {
+func Set_tx(ctx context.Context, tx pgx.Tx, atr types.Attribute) error {
- if err := checkName(name); err != nil {
+ if err := check.DbIdentifier(atr.Name); err != nil {
return err
}
// fix imports < 3.3: Empty content use
- contentUse = compatible.FixAttributeContentUse(contentUse)
+ atr.ContentUse = compatible.FixAttributeContentUse(atr.ContentUse)
- if encrypted && content != "text" {
+ if atr.Encrypted && atr.Content != "text" {
return fmt.Errorf("only text attributes can be encrypted")
}
- if !tools.StringInSlice(content, contentTypes) {
- return fmt.Errorf("invalid attribute content type '%s'", content)
+ if !slices.Contains(contentTypes, atr.Content) {
+ return fmt.Errorf("invalid attribute content type '%s'", atr.Content)
}
- if !tools.StringInSlice(contentUse, contentUseTypes) {
- return fmt.Errorf("invalid attribute content use type '%s'", contentUse)
+ if !slices.Contains(contentUseTypes, atr.ContentUse) {
+ return fmt.Errorf("invalid attribute content use type '%s'", atr.ContentUse)
}
- _, moduleName, err := schema.GetModuleDetailsByRelationId_tx(tx, relationId)
+ _, moduleName, err := schema.GetModuleDetailsByRelationId_tx(ctx, tx, atr.RelationId)
if err != nil {
return err
}
- relationName, relEncryption, err := schema.GetRelationDetailsById_tx(tx, relationId)
+ relationName, relEncryption, err := schema.GetRelationDetailsById_tx(ctx, tx, atr.RelationId)
if err != nil {
return err
}
- isNew := id == uuid.Nil
- isRel := schema.IsContentRelationship(content)
- isFiles := schema.IsContentFiles(content)
- known, err := schema.CheckCreateId_tx(tx, &id, "attribute", "id")
+ isNew := atr.Id == uuid.Nil
+ isRel := schema.IsContentRelationship(atr.Content)
+ isFiles := schema.IsContentFiles(atr.Content)
+ known, err := schema.CheckCreateId_tx(ctx, tx, &atr.Id, "attribute", "id")
if err != nil {
return err
}
@@ -150,13 +144,13 @@ func Set_tx(tx pgx.Tx, relationId uuid.UUID, id uuid.UUID,
var onDeleteNull = pgtype.Text{}
if isRel {
- onUpdateNull.String = onUpdate
+ onUpdateNull.String = atr.OnUpdate
onUpdateNull.Valid = true
- onDeleteNull.String = onDelete
+ onDeleteNull.String = atr.OnDelete
onDeleteNull.Valid = true
} else {
- onUpdate = ""
- onDelete = ""
+ atr.OnUpdate = ""
+ atr.OnDelete = ""
}
if known {
@@ -164,25 +158,26 @@ func Set_tx(tx pgx.Tx, relationId uuid.UUID, id uuid.UUID,
var nameEx string
var contentEx string
var lengthEx int
+ var lengthFractEx int
var nullableEx bool
var defEx string
var onUpdateEx pgtype.Text
var onDeleteEx pgtype.Text
var relationshipIdEx pgtype.UUID
- if err := tx.QueryRow(db.Ctx, `
- SELECT name, content, length, nullable, def,
- on_update, on_delete, relationship_id
+ if err := tx.QueryRow(ctx, `
+ SELECT name, content, length, length_fract, nullable,
+ def, on_update, on_delete, relationship_id
FROM app.attribute
WHERE id = $1
- `, id).Scan(&nameEx, &contentEx, &lengthEx, &nullableEx, &defEx,
- &onUpdateEx, &onDeleteEx, &relationshipIdEx); err != nil {
+ `, atr.Id).Scan(&nameEx, &contentEx, &lengthEx, &lengthFractEx, &nullableEx,
+ &defEx, &onUpdateEx, &onDeleteEx, &relationshipIdEx); err != nil {
return err
}
// check for primary key attribute
- if nameEx == schema.PkName && (name != nameEx || length != lengthEx ||
- nullable != nullableEx || def != defEx) {
+ if nameEx == schema.PkName && (atr.Name != nameEx || atr.Length != lengthEx ||
+ atr.LengthFract != lengthFractEx || atr.Nullable != nullableEx || atr.Def != defEx) {
return errors.New("primary key attribute may only update: content, title")
}
@@ -195,44 +190,47 @@ func Set_tx(tx pgx.Tx, relationId uuid.UUID, id uuid.UUID,
case "integer": // keep integer or upgrade to bigint
fallthrough
case "bigint": // keep bigint or downgrade to integer
- contentUpdateOk = tools.StringInSlice(content, []string{"integer", "bigint"})
+ contentUpdateOk = slices.Contains([]string{"integer", "bigint"}, atr.Content)
case "numeric": // keep numeric
- contentUpdateOk = content == "numeric"
+ contentUpdateOk = atr.Content == "numeric"
case "real": // keep real or upgrade to double
fallthrough
case "double precision": // keep double or downgrade to real
- contentUpdateOk = tools.StringInSlice(content, []string{"real", "double precision"})
+ contentUpdateOk = slices.Contains([]string{"real", "double precision"}, atr.Content)
case "varchar": // keep varchar or upgrade to text
fallthrough
case "text": // keep text or downgrade to varchar
- contentUpdateOk = tools.StringInSlice(content, []string{"varchar", "text"})
+ contentUpdateOk = slices.Contains([]string{"varchar", "text"}, atr.Content)
case "boolean": // keep boolean
- contentUpdateOk = content == "boolean"
+ contentUpdateOk = atr.Content == "boolean"
+
+ case "regconfig": // keep regconfig
+ contentUpdateOk = atr.Content == "regconfig"
case "uuid": // keep UUID
- contentUpdateOk = content == "uuid"
+ contentUpdateOk = atr.Content == "uuid"
case "1:1": // keep 1:1 or switch to n:1
fallthrough
case "n:1": // keep n:1 or switch to 1:1
- contentUpdateOk = tools.StringInSlice(content, []string{"1:1", "n:1"})
+ contentUpdateOk = slices.Contains([]string{"1:1", "n:1"}, atr.Content)
case "files": // keep files
- contentUpdateOk = content == "files"
+ contentUpdateOk = atr.Content == "files"
}
if !contentUpdateOk {
- return fmt.Errorf("'%s' and '%s' are not compatible types", contentEx, content)
+ return fmt.Errorf("'%s' and '%s' are not compatible types", contentEx, atr.Content)
}
// do not allow relationship target change
// if data exists, IDs will not match new target relation
// if data does not exist, attribute can be recreated with new target relation instead
- if relationshipIdEx.Valid && relationshipIdEx.Bytes != relationshipId.Bytes {
+ if relationshipIdEx.Valid && relationshipIdEx.Bytes != atr.RelationshipId.Bytes {
return fmt.Errorf("cannot change relationship target for existing attribute")
}
@@ -241,15 +239,16 @@ func Set_tx(tx pgx.Tx, relationId uuid.UUID, id uuid.UUID,
// update attribute name
// must happen first, as other statements refer to new attribute name
- if nameEx != name {
- if err := setName_tx(tx, id, name, false, isFiles); err != nil {
+ if nameEx != atr.Name {
+ if err := setName_tx(ctx, tx, atr.Id, atr.Name, false, isFiles); err != nil {
return err
}
}
// update attribute column definition (not for files attributes: no column)
- if !isFiles && (contentEx != content || nullableEx != nullable || defEx != def ||
- (content == "varchar" && lengthEx != length)) {
+ if !isFiles && (contentEx != atr.Content || nullableEx != atr.Nullable || defEx != atr.Def ||
+ (atr.Content == "varchar" && lengthEx != atr.Length) ||
+ (atr.Content == "numeric" && (lengthEx != atr.Length || lengthFractEx != atr.LengthFract))) {
// handle relationship attribute
var contentRel string
@@ -257,65 +256,65 @@ func Set_tx(tx pgx.Tx, relationId uuid.UUID, id uuid.UUID,
if isRel {
// rebuild foreign key index if content changed (as in 1:1 -> n:1)
// this also adds/removes unique constraint, if required
- if content != contentEx {
- if err := pgIndex.DelAutoFkiForAttribute_tx(tx, id); err != nil {
+ if atr.Content != contentEx {
+ if err := pgIndex.DelAutoFkiForAttribute_tx(ctx, tx, atr.Id); err != nil {
return err
}
- if err := pgIndex.SetAutoFkiForAttribute_tx(tx, relationId, id, (content == "1:1")); err != nil {
+ if err := pgIndex.SetAutoFkiForAttribute_tx(ctx, tx, atr.RelationId, atr.Id, (atr.Content == "1:1")); err != nil {
return err
}
}
- contentRel, err = schema.GetAttributeContentByRelationPk_tx(tx, relationshipId.Bytes)
+ contentRel, err = schema.GetAttributeContentByRelationPk_tx(ctx, tx, atr.RelationshipId.Bytes)
if err != nil {
return err
}
}
// column definition
- columnDef, err := getContentColumnDefinition(content, length, contentRel)
+ columnDef, err := getContentColumnDefinition(atr.Content, atr.Length, atr.LengthFract, contentRel)
if err != nil {
return err
}
// nullable definition
nullableDef := "DROP NOT NULL"
- if !nullable {
+ if !atr.Nullable {
nullableDef = "SET NOT NULL"
}
// default definition
defaultDef := "DROP DEFAULT"
- if def != "" {
- if schema.IsContentText(content) {
+ if atr.Def != "" {
+ if schema.IsContentText(atr.Content) {
// add quotes around default value for text
- defaultDef = fmt.Sprintf("SET DEFAULT '%s'", def)
+ defaultDef = fmt.Sprintf("SET DEFAULT '%s'", atr.Def)
} else {
- defaultDef = fmt.Sprintf("SET DEFAULT %s", def)
+ defaultDef = fmt.Sprintf("SET DEFAULT %s", atr.Def)
}
}
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
ALTER TABLE "%s"."%s"
ALTER COLUMN "%s" TYPE %s,
ALTER COLUMN "%s" %s,
ALTER COLUMN "%s" %s
`, moduleName, relationName,
- name, columnDef,
- name, nullableDef,
- name, defaultDef)); err != nil {
+ atr.Name, columnDef,
+ atr.Name, nullableDef,
+ atr.Name, defaultDef)); err != nil {
return err
}
}
// update onUpdate / onDelete for relationship attributes
- if (onUpdateEx.String != onUpdate || onDeleteEx.String != onDelete) && isRel {
+ if (onUpdateEx.String != atr.OnUpdate || onDeleteEx.String != atr.OnDelete) && isRel {
- if err := deleteFK_tx(tx, moduleName, relationName, id); err != nil {
+ if err := deleteFK_tx(ctx, tx, moduleName, relationName, atr.Id); err != nil {
return err
}
- if err := createFK_tx(tx, moduleName, relationName, id, name,
- relationshipId.Bytes, onUpdate, onDelete); err != nil {
+ if err := createFK_tx(ctx, tx, moduleName, relationName, atr.Id, atr.Name,
+ atr.RelationshipId.Bytes, atr.OnUpdate, atr.OnDelete); err != nil {
return err
}
@@ -323,109 +322,112 @@ func Set_tx(tx pgx.Tx, relationId uuid.UUID, id uuid.UUID,
// update attribute reference
// encrypted option cannot be updated
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.attribute
- SET icon_id = $1, content = $2, content_use = $3, length = $4,
- nullable = $5, def = $6, on_update = $7, on_delete = $8
- WHERE id = $9
- `, iconId, content, contentUse, length, nullable,
- def, onUpdateNull, onDeleteNull, id); err != nil {
+ SET icon_id = $1, content = $2, content_use = $3, length = $4, length_fract = $5,
+ nullable = $6, def = $7, on_update = $8, on_delete = $9
+ WHERE id = $10
+ `, atr.IconId, atr.Content, atr.ContentUse, atr.Length, atr.LengthFract, atr.Nullable,
+ atr.Def, onUpdateNull, onDeleteNull, atr.Id); err != nil {
return err
}
// update PK characteristics, if PK attribute
- if name == schema.PkName && content != contentEx {
- if err := updatePK_tx(tx, moduleName, relationName, relationId, content); err != nil {
+ if atr.Name == schema.PkName && atr.Content != contentEx {
+ if err := updatePK_tx(ctx, tx, moduleName, relationName, atr.RelationId, atr.Content); err != nil {
return err
}
- if err := updateReferingFKs_tx(tx, relationId, content); err != nil {
+ if err := updateReferingFKs_tx(ctx, tx, atr.RelationId, atr.Content); err != nil {
return err
}
}
} else {
// create attribute column (files attribute have no column)
if isFiles {
- if err := fileRelationsCreate_tx(tx, id, moduleName, relationName); err != nil {
+ if err := fileRelationsCreate_tx(ctx, tx, atr.Id, moduleName, relationName); err != nil {
return err
}
} else {
// check relationship target if relationship attribute
var contentRel string
if isRel {
- if !relationshipId.Valid {
+ if !atr.RelationshipId.Valid {
return fmt.Errorf("relationship requires valid target")
}
- contentRel, err = schema.GetAttributeContentByRelationPk_tx(tx, relationshipId.Bytes)
+ contentRel, err = schema.GetAttributeContentByRelationPk_tx(ctx, tx, atr.RelationshipId.Bytes)
if err != nil {
return err
}
- } else if relationshipId.Valid {
+ } else if atr.RelationshipId.Valid {
return errors.New("cannot define non-relationship with relationship target")
}
// column definition
- columnDef, err := getContentColumnDefinition(content, length, contentRel)
+ columnDef, err := getContentColumnDefinition(atr.Content, atr.Length, atr.LengthFract, contentRel)
if err != nil {
return err
}
// nullable definition
nullableDef := ""
- if !nullable {
+ if !atr.Nullable {
nullableDef = "NOT NULL"
}
// default definition
defaultDef := ""
- if def != "" {
- if schema.IsContentText(content) {
+ if atr.Def != "" {
+ if schema.IsContentText(atr.Content) {
// add quotes around default value for text
- defaultDef = fmt.Sprintf("DEFAULT '%s'", def)
+ defaultDef = fmt.Sprintf("DEFAULT '%s'", atr.Def)
} else {
- defaultDef = fmt.Sprintf("DEFAULT %s", def)
+ defaultDef = fmt.Sprintf("DEFAULT %s", atr.Def)
}
}
// add attribute to relation
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
ALTER TABLE "%s"."%s"
ADD COLUMN "%s" %s %s %s
- `, moduleName, relationName, name, columnDef, nullableDef, defaultDef)); err != nil {
+ `, moduleName, relationName, atr.Name, columnDef, nullableDef, defaultDef)); err != nil {
return err
}
}
// insert attribute reference
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.attribute (id, relation_id, relationship_id,
- icon_id, name, content, content_use, length, nullable,
- encrypted, def, on_update, on_delete)
- VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13)
- `, id, relationId, relationshipId, iconId, name, content, contentUse,
- length, nullable, encrypted, def, onUpdateNull, onDeleteNull); err != nil {
+ icon_id, name, content, content_use, length, length_fract,
+ nullable, encrypted, def, on_update, on_delete)
+ VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14)
+ `, atr.Id, atr.RelationId, atr.RelationshipId, atr.IconId, atr.Name,
+ atr.Content, atr.ContentUse, atr.Length, atr.LengthFract, atr.Nullable,
+ atr.Encrypted, atr.Def, onUpdateNull, onDeleteNull); err != nil {
return err
}
// apply PK characteristics, if PK attribute
- if name == schema.PkName {
- if err := createPK_tx(tx, moduleName, relationName, id, relationId); err != nil {
+ if atr.Name == schema.PkName {
+ if err := createPK_tx(ctx, tx, moduleName, relationName, atr.RelationId); err != nil {
return err
}
- // create PK PG index reference
- if err := pgIndex.SetPrimaryKeyForAttribute_tx(tx, relationId, id); err != nil {
- return err
+ // create PK PG index reference for new attributes
+ if isNew {
+ if err := pgIndex.SetPrimaryKeyForAttribute_tx(ctx, tx, atr.RelationId, atr.Id); err != nil {
+ return err
+ }
}
// create table for encrypted record keys if relation supports encryption
if relEncryption {
- tName := schema.GetEncKeyTableName(relationId)
+ tName := schema.GetEncKeyTableName(atr.RelationId)
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
CREATE TABLE IF NOT EXISTS instance_e2ee."%s" (
record_id bigint NOT NULL,
login_id integer NOT NULL,
@@ -457,15 +459,15 @@ func Set_tx(tx pgx.Tx, relationId uuid.UUID, id uuid.UUID,
if isRel {
// add FK constraint
- if err := createFK_tx(tx, moduleName, relationName, id, name,
- relationshipId.Bytes, onUpdate, onDelete); err != nil {
+ if err := createFK_tx(ctx, tx, moduleName, relationName, atr.Id, atr.Name,
+ atr.RelationshipId.Bytes, atr.OnUpdate, atr.OnDelete); err != nil {
return err
}
if isNew {
// add automatic FK index for new attributes
- if err := pgIndex.SetAutoFkiForAttribute_tx(tx, relationId, id,
- (content == "1:1")); err != nil {
+ if err := pgIndex.SetAutoFkiForAttribute_tx(ctx, tx, atr.RelationId,
+ atr.Id, (atr.Content == "1:1")); err != nil {
return err
}
@@ -474,31 +476,31 @@ func Set_tx(tx pgx.Tx, relationId uuid.UUID, id uuid.UUID,
}
// set captions
- return caption.Set_tx(tx, id, captions)
+ return caption.Set_tx(ctx, tx, atr.Id, atr.Captions)
}
-func setName_tx(tx pgx.Tx, id uuid.UUID, name string, ignoreNameCheck bool, isFiles bool) error {
+func setName_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID, name string, ignoreNameCheck bool, isFiles bool) error {
// name check can be ignored by internal tasks, never ignore for user input
if !ignoreNameCheck {
- if err := checkName(name); err != nil {
+ if err := check.DbIdentifier(name); err != nil {
return err
}
}
- known, err := schema.CheckCreateId_tx(tx, &id, "attribute", "id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &id, "attribute", "id")
if err != nil || !known {
return err
}
- moduleName, relationName, nameEx, _, err := schema.GetAttributeDetailsById_tx(tx, id)
+ moduleName, relationName, nameEx, _, err := schema.GetAttributeDetailsById_tx(ctx, tx, id)
if err != nil {
return err
}
if nameEx != name {
if !isFiles {
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
ALTER TABLE "%s"."%s"
RENAME COLUMN "%s" TO "%s"
`, moduleName, relationName, nameEx, name)); err != nil {
@@ -506,7 +508,7 @@ func setName_tx(tx pgx.Tx, id uuid.UUID, name string, ignoreNameCheck bool, isFi
}
}
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.attribute
SET name = $1
WHERE id = $2
@@ -514,20 +516,28 @@ func setName_tx(tx pgx.Tx, id uuid.UUID, name string, ignoreNameCheck bool, isFi
return err
}
- if err := pgFunction.RecreateAffectedBy_tx(tx, "attribute", id); err != nil {
+ if err := pgFunction.RecreateAffectedBy_tx(ctx, tx, "attribute", id); err != nil {
return err
}
}
return nil
}
-func getContentColumnDefinition(content string, length int, contentRel string) (string, error) {
+func getContentColumnDefinition(content string, length int, lengthFract int, contentRel string) (string, error) {
// by default the column definition is the content name
columnDef := content
// special cases
switch content {
+ case "numeric":
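+ // numeric supports optional precision (length) and scale (length_fract)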
+ if length != 0 {
+ if lengthFract != 0 {
+ columnDef = fmt.Sprintf("numeric(%d,%d)", length, lengthFract)
+ } else {
+ columnDef = fmt.Sprintf("numeric(%d)", length)
+ }
+ }
case "varchar":
if length == 0 {
return "", fmt.Errorf("varchar requires defined length")
@@ -542,21 +552,12 @@ func getContentColumnDefinition(content string, length int, contentRel string) (
return columnDef, nil
}
-func checkName(name string) error {
- // check valid DB identifier as attribute also becomes column
- if err := check.DbIdentifier(name); err != nil {
- return err
- }
- return nil
-}
-
// primary key handling
-func createPK_tx(tx pgx.Tx, moduleName string, relationName string,
- id uuid.UUID, relationId uuid.UUID) error {
+func createPK_tx(ctx context.Context, tx pgx.Tx, moduleName string, relationName string, relationId uuid.UUID) error {
// create PK sequence
// default type is BIGINT if not otherwise specified (works in all our cases)
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
CREATE SEQUENCE "%s"."%s"
`, moduleName, schema.GetSequenceName(relationId))); err != nil {
return err
@@ -566,20 +567,18 @@ func createPK_tx(tx pgx.Tx, moduleName string, relationName string,
// additional single quotes are required for nextval()
def := fmt.Sprintf(`NEXTVAL('"%s"."%s"')`, moduleName, schema.GetSequenceName(relationId))
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ _, err := tx.Exec(ctx, fmt.Sprintf(`
ALTER TABLE "%s"."%s" ALTER COLUMN "%s" SET DEFAULT %s,
ADD CONSTRAINT "%s" PRIMARY KEY ("%s")
`, moduleName, relationName, schema.PkName, def,
- schema.GetPkConstraintName(relationId), schema.PkName)); err != nil {
+ schema.GetPkConstraintName(relationId), schema.PkName))
- return err
- }
- return nil
+ return err
}
-func updatePK_tx(tx pgx.Tx, moduleName string, relationName string,
+func updatePK_tx(ctx context.Context, tx pgx.Tx, moduleName string, relationName string,
relationId uuid.UUID, content string) error {
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
ALTER TABLE "%s"."%s"
ALTER COLUMN "%s" TYPE %s,
ALTER COLUMN "%s" SET DEFAULT NEXTVAL('"%s"."%s"')
@@ -589,7 +588,7 @@ func updatePK_tx(tx pgx.Tx, moduleName string, relationName string,
return err
}
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
ALTER SEQUENCE "%s"."%s" AS %s
`, moduleName, schema.GetSequenceName(relationId), content)); err != nil {
return err
@@ -598,25 +597,25 @@ func updatePK_tx(tx pgx.Tx, moduleName string, relationName string,
}
// foreign key handling
-func createFK_tx(tx pgx.Tx, moduleName string, relationName string,
+func createFK_tx(ctx context.Context, tx pgx.Tx, moduleName string, relationName string,
attributeId uuid.UUID, attributeName string, relationshipId uuid.UUID,
onUpdate string, onDelete string) error {
- if !tools.StringInSlice(onUpdate, fkBreakActions) {
+ if !slices.Contains(fkBreakActions, onUpdate) {
return fmt.Errorf("invalid attribute ON UPDATE definition '%s'", onUpdate)
}
- if !tools.StringInSlice(onDelete, fkBreakActions) {
+ if !slices.Contains(fkBreakActions, onDelete) {
return fmt.Errorf("invalid attribute ON DELETE definition '%s'", onDelete)
}
// get relationship relation & module names
- modName, relName, err := schema.GetRelationNamesById_tx(tx, relationshipId)
+ modName, relName, err := schema.GetRelationNamesById_tx(ctx, tx, relationshipId)
if err != nil {
return err
}
// add attribute with foreign key
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
ALTER TABLE "%s"."%s"
ADD CONSTRAINT "%s"
FOREIGN KEY ("%s")
@@ -630,16 +629,16 @@ func createFK_tx(tx pgx.Tx, moduleName string, relationName string,
}
return nil
}
-func deleteFK_tx(tx pgx.Tx, moduleName string, relationName string, attributeId uuid.UUID) error {
- _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+func deleteFK_tx(ctx context.Context, tx pgx.Tx, moduleName string, relationName string, attributeId uuid.UUID) error {
+ _, err := tx.Exec(ctx, fmt.Sprintf(`
ALTER TABLE "%s"."%s"
DROP CONSTRAINT "%s"
`, moduleName, relationName, schema.GetFkConstraintName(attributeId)))
return err
}
-// update all foreign keys refering to specified relation via relationship attribute
-func updateReferingFKs_tx(tx pgx.Tx, relationshipId uuid.UUID, content string) error {
+// update all foreign keys referring to specified relation via relationship attribute
+func updateReferingFKs_tx(ctx context.Context, tx pgx.Tx, relationshipId uuid.UUID, content string) error {
type update struct {
ModName string
@@ -648,7 +647,7 @@ func updateReferingFKs_tx(tx pgx.Tx, relationshipId uuid.UUID, content string) e
}
updates := make([]update, 0)
- rows, err := tx.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT m.name, r.name, a.name
FROM app.attribute AS a
INNER JOIN app.relation AS r ON r.id = a.relation_id
@@ -658,6 +657,7 @@ func updateReferingFKs_tx(tx pgx.Tx, relationshipId uuid.UUID, content string) e
if err != nil {
return err
}
+ defer rows.Close()
for rows.Next() {
var u update
@@ -666,10 +666,9 @@ func updateReferingFKs_tx(tx pgx.Tx, relationshipId uuid.UUID, content string) e
}
updates = append(updates, u)
}
- rows.Close()
for _, u := range updates {
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
ALTER TABLE "%s"."%s"
ALTER COLUMN "%s" TYPE %s
`, u.ModName, u.RelName, u.AtrName, content)); err != nil {
diff --git a/schema/attribute/attribute_del_check.go b/schema/attribute/attribute_del_check.go
new file mode 100644
index 00000000..3b040c90
--- /dev/null
+++ b/schema/attribute/attribute_del_check.go
@@ -0,0 +1,227 @@
+package attribute
+
+import (
+ "context"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+)
+
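+// DelCheck_tx collects all entities that reference the given attribute (APIs, collections, forms, fields, PG indexes, login forms), directly or via queries and columns, so dependencies can be shown before deletion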
+func DelCheck_tx(ctx context.Context, tx pgx.Tx, attributeId uuid.UUID) (interface{}, error) {
+
+ // dependencies we need for further checks
+ var queryIds []uuid.UUID
+ var columnIdsSubQueries []uuid.UUID
+
+ // final dependencies we send back for display
+ type depField struct {
+ FieldId uuid.UUID `json:"fieldId"`
+ FormId uuid.UUID `json:"formId"`
+ }
+ var dependencies struct {
+ ApiIds []uuid.UUID `json:"apiIds"` // attribute used in API column or query
+ CollectionIds []uuid.UUID `json:"collectionIds"` // attribute used in collection column or query
+ FormIds []uuid.UUID `json:"formIds"` // attribute used in form query
+ PgIndexIds []uuid.UUID `json:"pgIndexIds"` // attribute used in PG index
+ LoginFormNames []string `json:"loginFormNames"` // attribute used in login form (either lookup or login ID)
+
+ Fields []depField `json:"fields"` // attribute used in data field, field columns or query
+ }
+ dependencies.Fields = make([]depField, 0)
+
+ // collect affected queries
+ if err := tx.QueryRow(ctx, `
+ -- get nested children of queries
+ WITH RECURSIVE queries AS (
+ -- initial result set: all queries that include attribute in any way
+ SELECT id, query_filter_query_id
+ FROM app.query
+ WHERE id IN (
+ SELECT query_id
+ FROM app.query_order
+ WHERE attribute_id = $1
+
+ UNION
+
+ SELECT query_id
+ FROM app.query_join
+ WHERE attribute_id = $1
+
+ UNION
+
+ SELECT query_id
+ FROM app.query_filter_side
+ WHERE attribute_id = $1
+ )
+
+ UNION
+
+ -- recursive results
+ -- all parent queries up to the main element (form, field, API, collection, column)
+ SELECT c.id, c.query_filter_query_id
+ FROM app.query AS c
+ INNER JOIN queries AS q ON q.query_filter_query_id = c.id
+ )
+ SELECT ARRAY_AGG(id)
+ FROM queries
+ `, attributeId).Scan(&queryIds); err != nil {
+ return nil, err
+ }
+
+ // collect affected columns
+ if err := tx.QueryRow(ctx, `
+ SELECT ARRAY_AGG(column_id)
+ FROM app.query
+ WHERE column_id IS NOT NULL
+ AND id = ANY($1)
+ `, queryIds).Scan(&columnIdsSubQueries); err != nil {
+ return nil, err
+ }
+
+ // collect affected APIs, collections, forms, PG indexes, login forms
+ if err := tx.QueryRow(ctx, `
+ SELECT
+ ARRAY(
+ -- APIs with affected queries
+ SELECT api_id
+ FROM app.query
+ WHERE api_id IS NOT NULL
+ AND id = ANY($2)
+
+ UNION
+
+ -- APIs with affected sub query columns
+ SELECT api_id
+ FROM app.column
+ WHERE api_id IS NOT NULL
+ AND (
+ attribute_id = $1
+ OR id = ANY($3)
+ )
+ ) AS apis,
+ ARRAY(
+ -- collections with affected queries
+ SELECT collection_id
+ FROM app.query
+ WHERE collection_id IS NOT NULL
+ AND id = ANY($2)
+
+ UNION
+
+ -- collections with affected sub query columns
+ SELECT collection_id
+ FROM app.column
+ WHERE collection_id IS NOT NULL
+ AND (
+ attribute_id = $1
+ OR id = ANY($3)
+ )
+ ) AS collections,
+ ARRAY(
+ -- forms with affected queries
+ SELECT form_id
+ FROM app.query
+ WHERE form_id IS NOT NULL
+ AND id = ANY($2)
+ ) AS forms,
+ ARRAY(
+ SELECT pia.pg_index_id
+ FROM app.pg_index_attribute AS pia
+ JOIN app.pg_index AS pi ON pi.id = pia.pg_index_id
+ WHERE pia.attribute_id = $1
+ AND pi.auto_fki = false
+ AND pi.primary_key = false
+ ) AS pgIndexes,
+ ARRAY(
+ SELECT name
+ FROM app.login_form
+ WHERE attribute_id_login = $1
+ OR attribute_id_lookup = $1
+ ) AS loginForms
+ `, attributeId, queryIds, columnIdsSubQueries).Scan(
+ &dependencies.ApiIds,
+ &dependencies.CollectionIds,
+ &dependencies.FormIds,
+ &dependencies.PgIndexIds,
+ &dependencies.LoginFormNames); err != nil {
+
+ return nil, err
+ }
+
+ // collect affected fields
+ rows, err := tx.Query(ctx, `
+ SELECT frm.id, fld.id
+ FROM app.field AS fld
+ INNER JOIN app.form AS frm ON frm.id = fld.form_id
+ WHERE fld.id IN (
+ -- fields opening forms with attribute
+ SELECT field_id
+ FROM app.open_form
+ WHERE attribute_id_apply = $1
+
+ UNION
+
+ -- data fields
+ SELECT field_id
+ FROM app.field_data
+ WHERE attribute_id = $1
+ OR attribute_id_alt = $1
+
+ UNION
+
+ -- data relationship fields
+ SELECT field_id
+ FROM app.field_data_relationship
+ WHERE attribute_id_nm = $1
+
+ UNION
+
+ -- field queries
+ SELECT field_id
+ FROM app.query
+ WHERE field_id IS NOT NULL
+ AND id = ANY($2)
+
+ UNION
+
+ -- field columns
+ SELECT field_id
+ FROM app.column
+ WHERE field_id IS NOT NULL
+ AND (
+ attribute_id = $1
+ OR id = ANY($3)
+ )
+
+ UNION
+
+ -- calendar fields
+ SELECT field_id
+ FROM app.field_calendar
+ WHERE attribute_id_color = $1
+ OR attribute_id_date0 = $1
+ OR attribute_id_date1 = $1
+
+ UNION
+
+ -- kanban fields
+ SELECT field_id
+ FROM app.field_kanban
+ WHERE attribute_id_sort = $1
+ )
+ `, attributeId, queryIds, columnIdsSubQueries)
+ if err != nil {
+ return nil, err
+ }
+
+ for rows.Next() {
+ var d depField
+ if err := rows.Scan(&d.FormId, &d.FieldId); err != nil {
+ return nil, err
+ }
+ dependencies.Fields = append(dependencies.Fields, d)
+ }
+ rows.Close()
+
+ return dependencies, nil
+}
diff --git a/schema/attribute/attribute_files.go b/schema/attribute/attribute_files.go
index 6f66adf3..53331e96 100644
--- a/schema/attribute/attribute_files.go
+++ b/schema/attribute/attribute_files.go
@@ -1,20 +1,20 @@
package attribute
import (
+ "context"
"fmt"
- "r3/db"
"r3/schema"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
)
-func fileRelationsCreate_tx(tx pgx.Tx, attributeId uuid.UUID,
+func fileRelationsCreate_tx(ctx context.Context, tx pgx.Tx, attributeId uuid.UUID,
moduleName string, relationName string) error {
tNameR := schema.GetFilesTableName(attributeId)
- _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ _, err := tx.Exec(ctx, fmt.Sprintf(`
CREATE TABLE instance_file."%s" (
file_id uuid NOT NULL,
record_id bigint NOT NULL,
@@ -51,8 +51,8 @@ func fileRelationsCreate_tx(tx pgx.Tx, attributeId uuid.UUID,
return err
}
-func FileRelationsDelete_tx(tx pgx.Tx, attributeId uuid.UUID) error {
- _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+func FileRelationsDelete_tx(ctx context.Context, tx pgx.Tx, attributeId uuid.UUID) error {
+ _, err := tx.Exec(ctx, fmt.Sprintf(`
DROP TABLE instance_file."%s"
`, schema.GetFilesTableName(attributeId)))
return err
diff --git a/schema/caption/caption.go b/schema/caption/caption.go
index 1b4412a2..c4c25691 100644
--- a/schema/caption/caption.go
+++ b/schema/caption/caption.go
@@ -1,23 +1,23 @@
package caption
import (
+ "context"
"errors"
"fmt"
- "r3/db"
"r3/types"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
)
-func Get(entity string, id uuid.UUID, expectedContents []string) (types.CaptionMap, error) {
+func Get_tx(ctx context.Context, tx pgx.Tx, entity string, id uuid.UUID, expectedContents []string) (types.CaptionMap, error) {
caps := make(types.CaptionMap)
for _, content := range expectedContents {
caps[content] = make(map[string]string)
}
- rows, err := db.Pool.Query(db.Ctx, fmt.Sprintf(`
+ rows, err := tx.Query(ctx, fmt.Sprintf(`
SELECT language_code, content, value
FROM app.caption
WHERE %s_id = $1
@@ -34,22 +34,26 @@ func Get(entity string, id uuid.UUID, expectedContents []string) (types.CaptionM
if err := rows.Scan(&code, &content, &value); err != nil {
return caps, err
}
+
+ if _, exists := caps[content]; !exists {
+ return caps, fmt.Errorf("caption content '%s' was unexpected", content)
+ }
caps[content][code] = value
}
return caps, nil
}
-func Set_tx(tx pgx.Tx, id uuid.UUID, captions types.CaptionMap) error {
+func Set_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID, captions types.CaptionMap) error {
for content, codes := range captions {
- entityName, err := getEntityName(content)
+ entityName, err := GetEntityName(content)
if err != nil {
return err
}
// delete captions for this content
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
DELETE FROM app.caption
WHERE %s = $1
AND content = $2
@@ -64,7 +68,7 @@ func Set_tx(tx pgx.Tx, id uuid.UUID, captions types.CaptionMap) error {
continue
}
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
INSERT INTO app.caption (language_code, %s, value, content)
VALUES ($1,$2,$3,$4)
`, entityName), code, id, value, content); err != nil {
@@ -75,7 +79,86 @@ func Set_tx(tx pgx.Tx, id uuid.UUID, captions types.CaptionMap) error {
return nil
}
-func getEntityName(content string) (string, error) {
+// helpers
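+// GetDefaultContent returns the caption contents an entity supports, each mapped to an empty language map.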
+func GetDefaultContent(entity string) types.CaptionMap {
+ switch entity {
+ case "article":
+ return types.CaptionMap{
+ "articleTitle": make(map[string]string),
+ "articleBody": make(map[string]string),
+ }
+ case "attribute":
+ return types.CaptionMap{
+ "attributeTitle": make(map[string]string),
+ }
+ case "clientEvent":
+ return types.CaptionMap{
+ "clientEventTitle": make(map[string]string),
+ }
+ case "column":
+ return types.CaptionMap{
+ "columnTitle": make(map[string]string),
+ }
+ case "field":
+ return types.CaptionMap{
+ "fieldTitle": make(map[string]string),
+ "fieldHelp": make(map[string]string),
+ }
+ case "form":
+ return types.CaptionMap{
+ "formTitle": make(map[string]string),
+ }
+ case "formAction":
+ return types.CaptionMap{
+ "formActionTitle": make(map[string]string),
+ }
+ case "jsFunction":
+ return types.CaptionMap{
+ "jsFunctionDesc": make(map[string]string),
+ "jsFunctionTitle": make(map[string]string),
+ }
+ case "loginForm":
+ return types.CaptionMap{
+ "loginFormTitle": make(map[string]string),
+ }
+ case "menu":
+ return types.CaptionMap{
+ "menuTitle": make(map[string]string),
+ }
+ case "menuTab":
+ return types.CaptionMap{
+ "menuTabTitle": make(map[string]string),
+ }
+ case "module":
+ return types.CaptionMap{
+ "moduleTitle": make(map[string]string),
+ }
+ case "pgFunction":
+ return types.CaptionMap{
+ "pgFunctionTitle": make(map[string]string),
+ "pgFunctionDesc": make(map[string]string),
+ }
+ case "queryChoice":
+ return types.CaptionMap{
+ "queryChoiceTitle": make(map[string]string),
+ }
+ case "role":
+ return types.CaptionMap{
+ "roleTitle": make(map[string]string),
+ "roleDesc": make(map[string]string),
+ }
+ case "tab":
+ return types.CaptionMap{
+ "tabTitle": make(map[string]string),
+ }
+ case "widget":
+ return types.CaptionMap{
+ "widgetTitle": make(map[string]string),
+ }
+ }
+ return types.CaptionMap{}
+}
+func GetEntityName(content string) (string, error) {
switch content {
@@ -85,12 +168,18 @@ func getEntityName(content string) (string, error) {
case "attributeTitle":
return "attribute_id", nil
+ case "clientEventTitle":
+ return "client_event_id", nil
+
case "columnTitle":
return "column_id", nil
case "fieldTitle", "fieldHelp":
return "field_id", nil
+ case "formActionTitle":
+ return "form_action_id", nil
+
case "formTitle", "formHelp":
return "form_id", nil
@@ -103,6 +192,9 @@ func getEntityName(content string) (string, error) {
case "menuTitle":
return "menu_id", nil
+ case "menuTabTitle":
+ return "menu_tab_id", nil
+
case "moduleTitle":
return "module_id", nil
@@ -117,6 +209,9 @@ func getEntityName(content string) (string, error) {
case "tabTitle":
return "tab_id", nil
+
+ case "widgetTitle":
+ return "widget_id", nil
}
return "", errors.New("bad caption content name")
}
diff --git a/schema/clientEvent/clientEvent.go b/schema/clientEvent/clientEvent.go
new file mode 100644
index 00000000..63d929e3
--- /dev/null
+++ b/schema/clientEvent/clientEvent.go
@@ -0,0 +1,91 @@
+package clientEvent
+
+import (
+ "context"
+ "r3/schema"
+ "r3/schema/caption"
+ "r3/types"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+ "github.com/jackc/pgx/v5/pgtype"
+)
+
+func Del_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
+ _, err := tx.Exec(ctx, `DELETE FROM app.client_event WHERE id = $1`, id)
+ return err
+}
+
+func Get_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID) ([]types.ClientEvent, error) {
+
+ clientEvents := make([]types.ClientEvent, 0)
+ rows, err := tx.Query(ctx, `
+ SELECT id, action, arguments, event, hotkey_modifier1,
+ hotkey_modifier2, hotkey_char, js_function_id, pg_function_id
+ FROM app.client_event
+ WHERE module_id = $1
+ ORDER BY id ASC
+ `, moduleId)
+ if err != nil {
+ return clientEvents, err
+ }
+ defer rows.Close()
+
+ for rows.Next() {
+ var e types.ClientEvent
+ e.ModuleId = moduleId
+ if err := rows.Scan(&e.Id, &e.Action, &e.Arguments, &e.Event, &e.HotkeyModifier1,
+ &e.HotkeyModifier2, &e.HotkeyChar, &e.JsFunctionId, &e.PgFunctionId); err != nil {
+
+ return clientEvents, err
+ }
+ clientEvents = append(clientEvents, e)
+ }
+
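+	// resolve captions in a second pass, after the client event rows have been fully read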
+ for i, e := range clientEvents {
+ clientEvents[i].Captions, err = caption.Get_tx(ctx, tx, "client_event", e.Id, []string{"clientEventTitle"})
+ if err != nil {
+ return clientEvents, err
+ }
+ }
+ return clientEvents, nil
+}
+
+func Set_tx(ctx context.Context, tx pgx.Tx, ce types.ClientEvent) error {
+
+ known, err := schema.CheckCreateId_tx(ctx, tx, &ce.Id, "client_event", "id")
+ if err != nil {
+ return err
+ }
+
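+	// clear function references that do not match the chosen action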
+ if ce.Action != "callJsFunction" {
+ ce.JsFunctionId = pgtype.UUID{}
+ }
+ if ce.Action != "callPgFunction" {
+ ce.PgFunctionId = pgtype.UUID{}
+ }
+
+ if known {
+ if _, err := tx.Exec(ctx, `
+ UPDATE app.client_event
+ SET action = $1, arguments = $2, event = $3, hotkey_modifier1 = $4,
+ hotkey_modifier2 = $5, hotkey_char = $6, js_function_id = $7, pg_function_id = $8
+ WHERE id = $9
+ `, ce.Action, ce.Arguments, ce.Event, ce.HotkeyModifier1, ce.HotkeyModifier2,
+ ce.HotkeyChar, ce.JsFunctionId, ce.PgFunctionId, ce.Id); err != nil {
+
+ return err
+ }
+ } else {
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO app.client_event (id, module_id, action, arguments, event, hotkey_modifier1,
+ hotkey_modifier2, hotkey_char, js_function_id, pg_function_id)
+ VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10)
+ `, ce.Id, ce.ModuleId, ce.Action, ce.Arguments, ce.Event, ce.HotkeyModifier1,
+ ce.HotkeyModifier2, ce.HotkeyChar, ce.JsFunctionId, ce.PgFunctionId); err != nil {
+
+ return err
+ }
+ }
+ return caption.Set_tx(ctx, tx, ce.Id, ce.Captions)
+}
diff --git a/schema/collection/collection.go b/schema/collection/collection.go
index a74b2ac7..b4587d10 100644
--- a/schema/collection/collection.go
+++ b/schema/collection/collection.go
@@ -1,7 +1,7 @@
package collection
import (
- "r3/db"
+ "context"
"r3/schema"
"r3/schema/collection/consumer"
"r3/schema/column"
@@ -13,15 +13,15 @@ import (
"github.com/jackc/pgx/v5/pgtype"
)
-func Del_tx(tx pgx.Tx, id uuid.UUID) error {
- _, err := tx.Exec(db.Ctx, `DELETE FROM app.collection WHERE id = $1`, id)
+func Del_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
+ _, err := tx.Exec(ctx, `DELETE FROM app.collection WHERE id = $1`, id)
return err
}
-func Get(moduleId uuid.UUID) ([]types.Collection, error) {
+func Get_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID) ([]types.Collection, error) {
collections := make([]types.Collection, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT id, icon_id, name
FROM app.collection
WHERE module_id = $1
@@ -44,15 +44,15 @@ func Get(moduleId uuid.UUID) ([]types.Collection, error) {
// collect query and columns
for i, c := range collections {
- c.Query, err = query.Get("collection", c.Id, 0, 0)
+ c.Query, err = query.Get_tx(ctx, tx, "collection", c.Id, 0, 0, 0)
if err != nil {
return collections, err
}
- c.Columns, err = column.Get("collection", c.Id)
+ c.Columns, err = column.Get_tx(ctx, tx, "collection", c.Id)
if err != nil {
return collections, err
}
- c.InHeader, err = consumer.Get("collection", c.Id, "headerDisplay")
+ c.InHeader, err = consumer.Get_tx(ctx, tx, "collection", c.Id, "headerDisplay")
if err != nil {
return collections, err
}
@@ -61,16 +61,16 @@ func Get(moduleId uuid.UUID) ([]types.Collection, error) {
return collections, nil
}
-func Set_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, iconId pgtype.UUID, name string,
+func Set_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, iconId pgtype.UUID, name string,
columns []types.Column, queryIn types.Query, inHeader []types.CollectionConsumer) error {
- known, err := schema.CheckCreateId_tx(tx, &id, "collection", "id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &id, "collection", "id")
if err != nil {
return err
}
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.collection
SET icon_id = $1, name = $2
WHERE id = $3
@@ -78,18 +78,18 @@ func Set_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, iconId pgtype.UUID, nam
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.collection (id,icon_id,module_id,name)
VALUES ($1,$2,$3,$4)
`, id, iconId, moduleId, name); err != nil {
return err
}
}
- if err := query.Set_tx(tx, "collection", id, 0, 0, queryIn); err != nil {
+ if err := query.Set_tx(ctx, tx, "collection", id, 0, 0, 0, queryIn); err != nil {
return err
}
- if err := column.Set_tx(tx, "collection", id, columns); err != nil {
+ if err := column.Set_tx(ctx, tx, "collection", id, columns); err != nil {
return err
}
- return consumer.Set_tx(tx, "collection", id, "headerDisplay", inHeader)
+ return consumer.Set_tx(ctx, tx, "collection", id, "headerDisplay", inHeader)
}
diff --git a/schema/collection/consumer/consumer.go b/schema/collection/consumer/consumer.go
index d59bf2c3..11c5354a 100644
--- a/schema/collection/consumer/consumer.go
+++ b/schema/collection/consumer/consumer.go
@@ -1,55 +1,53 @@
package consumer
import (
+ "context"
"errors"
"fmt"
- "r3/db"
+ "r3/schema/compatible"
"r3/schema/openForm"
- "r3/tools"
"r3/types"
+ "slices"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
+ "github.com/jackc/pgx/v5/pgtype"
)
-var entitiesAllowed = []string{"collection", "field", "menu"}
+var entitiesAllowed = []string{"collection", "field", "menu", "widget"}
-func GetOne(entity string, entityId uuid.UUID, content string) (types.CollectionConsumer, error) {
+func GetOne_tx(ctx context.Context, tx pgx.Tx, entity string, entityId uuid.UUID, content string) (types.CollectionConsumer, error) {
var err error
var c types.CollectionConsumer
- if !tools.StringInSlice(entity, entitiesAllowed) {
+ if !slices.Contains(entitiesAllowed, entity) {
return c, errors.New("invalid collection consumer entity")
}
- if err := db.Pool.QueryRow(db.Ctx, fmt.Sprintf(`
- SELECT id, collection_id, column_id_display,
- multi_value, no_display_empty, on_mobile
+ if err := tx.QueryRow(ctx, fmt.Sprintf(`
+ SELECT id, collection_id, column_id_display, flags, on_mobile
FROM app.collection_consumer
WHERE %s_id = $1
AND content = $2
- `, entity), entityId, content).Scan(&c.Id, &c.CollectionId, &c.ColumnIdDisplay,
- &c.MultiValue, &c.NoDisplayEmpty, &c.OnMobile); err != nil && err != pgx.ErrNoRows {
-
+ `, entity), entityId, content).Scan(&c.Id, &c.CollectionId, &c.ColumnIdDisplay, &c.Flags, &c.OnMobile); err != nil && err != pgx.ErrNoRows {
return c, err
}
- c.OpenForm, err = openForm.Get("collection_consumer", c.Id)
+ c.OpenForm, err = openForm.Get_tx(ctx, tx, "collection_consumer", c.Id, pgtype.Text{})
if err != nil {
return c, err
}
return c, nil
}
-func Get(entity string, entityId uuid.UUID, content string) ([]types.CollectionConsumer, error) {
+func Get_tx(ctx context.Context, tx pgx.Tx, entity string, entityId uuid.UUID, content string) ([]types.CollectionConsumer, error) {
var consumers = make([]types.CollectionConsumer, 0)
- if !tools.StringInSlice(entity, entitiesAllowed) {
+ if !slices.Contains(entitiesAllowed, entity) {
return consumers, errors.New("invalid collection consumer entity")
}
- rows, err := db.Pool.Query(db.Ctx, fmt.Sprintf(`
- SELECT id, collection_id, column_id_display,
- multi_value, no_display_empty, on_mobile
+ rows, err := tx.Query(ctx, fmt.Sprintf(`
+ SELECT id, collection_id, column_id_display, flags, on_mobile
FROM app.collection_consumer
WHERE %s_id = $1
AND content = $2
@@ -61,28 +59,27 @@ func Get(entity string, entityId uuid.UUID, content string) ([]types.CollectionC
for rows.Next() {
var c types.CollectionConsumer
-
- if err := rows.Scan(&c.Id, &c.CollectionId, &c.ColumnIdDisplay,
- &c.MultiValue, &c.NoDisplayEmpty, &c.OnMobile); err != nil {
-
+ if err := rows.Scan(&c.Id, &c.CollectionId, &c.ColumnIdDisplay, &c.Flags, &c.OnMobile); err != nil {
return consumers, err
}
- c.OpenForm, err = openForm.Get("collection_consumer", c.Id)
+ consumers = append(consumers, c)
+ }
+
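+	// resolve open forms in a second pass, after the consumer rows have been fully read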
+ for i, c := range consumers {
+ consumers[i].OpenForm, err = openForm.Get_tx(ctx, tx, "collection_consumer", c.Id, pgtype.Text{})
if err != nil {
return consumers, err
}
- consumers = append(consumers, c)
}
return consumers, nil
}
-func Set_tx(tx pgx.Tx, entity string, entityId uuid.UUID, content string,
- consumers []types.CollectionConsumer) error {
+func Set_tx(ctx context.Context, tx pgx.Tx, entity string, entityId uuid.UUID, content string, consumers []types.CollectionConsumer) error {
- if !tools.StringInSlice(entity, entitiesAllowed) {
+ if !slices.Contains(entitiesAllowed, entity) {
return errors.New("invalid collection consumer entity")
}
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
DELETE FROM app.collection_consumer
WHERE %s_id = $1
AND content = $2
@@ -96,6 +93,9 @@ func Set_tx(tx pgx.Tx, entity string, entityId uuid.UUID, content string,
continue
}
+		// fix imports < 3.10: add missing flags
+ c = compatible.FixCollectionConsumerFlags(c)
+
if c.Id == uuid.Nil {
c.Id, err = uuid.NewV4()
if err != nil {
@@ -104,30 +104,21 @@ func Set_tx(tx pgx.Tx, entity string, entityId uuid.UUID, content string,
}
if entity == "collection" {
- if _, err := tx.Exec(db.Ctx, `
- INSERT INTO app.collection_consumer (id, collection_id,
- column_id_display, content, multi_value, no_display_empty,
- on_mobile)
- VALUES ($1,$2,$3,$4,$5,$6,$7)
- `, c.Id, c.CollectionId, c.ColumnIdDisplay, content, c.MultiValue,
- c.NoDisplayEmpty, c.OnMobile); err != nil {
-
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO app.collection_consumer (id, collection_id, column_id_display, content, flags, on_mobile)
+ VALUES ($1,$2,$3,$4,$5,$6)
+ `, c.Id, c.CollectionId, c.ColumnIdDisplay, content, c.Flags, c.OnMobile); err != nil {
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
- INSERT INTO app.collection_consumer (id, collection_id, %s_id,
- column_id_display, content, multi_value, no_display_empty,
- on_mobile)
- VALUES ($1,$2,$3,$4,$5,$6,$7,$8)
- `, entity), c.Id, c.CollectionId, entityId, c.ColumnIdDisplay, content,
- c.MultiValue, c.NoDisplayEmpty, c.OnMobile); err != nil {
-
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
+ INSERT INTO app.collection_consumer (id, collection_id, %s_id, column_id_display, content, flags, on_mobile)
+ VALUES ($1,$2,$3,$4,$5,$6,$7)
+ `, entity), c.Id, c.CollectionId, entityId, c.ColumnIdDisplay, content, c.Flags, c.OnMobile); err != nil {
return err
}
}
-
- if err := openForm.Set_tx(tx, "collection_consumer", c.Id, c.OpenForm); err != nil {
+ if err := openForm.Set_tx(ctx, tx, "collection_consumer", c.Id, c.OpenForm, pgtype.Text{}); err != nil {
return err
}
}
diff --git a/schema/column/column.go b/schema/column/column.go
index 3af0c3c0..c0fef994 100644
--- a/schema/column/column.go
+++ b/schema/column/column.go
@@ -1,15 +1,15 @@
package column
import (
+ "context"
"errors"
"fmt"
- "r3/compatible"
- "r3/db"
"r3/schema"
"r3/schema/caption"
+ "r3/schema/compatible"
"r3/schema/query"
- "r3/tools"
"r3/types"
+ "slices"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
@@ -18,21 +18,21 @@ import (
var allowedEntities = []string{"api", "collection", "field"}
-func Del_tx(tx pgx.Tx, id uuid.UUID) error {
- _, err := tx.Exec(db.Ctx, `DELETE FROM app.column WHERE id = $1`, id)
+func Del_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
+ _, err := tx.Exec(ctx, `DELETE FROM app.column WHERE id = $1`, id)
return err
}
-func Get(entity string, entityId uuid.UUID) ([]types.Column, error) {
+func Get_tx(ctx context.Context, tx pgx.Tx, entity string, entityId uuid.UUID) ([]types.Column, error) {
columns := make([]types.Column, 0)
- if !tools.StringInSlice(entity, allowedEntities) {
+ if !slices.Contains(allowedEntities, entity) {
return columns, errors.New("bad entity")
}
- rows, err := db.Pool.Query(db.Ctx, fmt.Sprintf(`
- SELECT id, attribute_id, index, batch, basis, length, wrap, display,
- group_by, aggregator, distincted, sub_query, on_mobile, clipboard
+ rows, err := tx.Query(ctx, fmt.Sprintf(`
+ SELECT id, attribute_id, index, batch, basis, length, display, group_by,
+ aggregator, distincted, hidden, on_mobile, sub_query, styles
FROM app.column
WHERE %s_id = $1
ORDER BY position ASC
@@ -40,22 +40,22 @@ func Get(entity string, entityId uuid.UUID) ([]types.Column, error) {
if err != nil {
return columns, err
}
+ defer rows.Close()
for rows.Next() {
var c types.Column
if err := rows.Scan(&c.Id, &c.AttributeId, &c.Index, &c.Batch, &c.Basis,
- &c.Length, &c.Wrap, &c.Display, &c.GroupBy, &c.Aggregator,
- &c.Distincted, &c.SubQuery, &c.OnMobile, &c.Clipboard); err != nil {
+ &c.Length, &c.Display, &c.GroupBy, &c.Aggregator, &c.Distincted,
+ &c.Hidden, &c.OnMobile, &c.SubQuery, &c.Styles); err != nil {
return columns, err
}
columns = append(columns, c)
}
- rows.Close()
for i, c := range columns {
if c.SubQuery {
- c.Query, err = query.Get("column", c.Id, 0, 0)
+ c.Query, err = query.Get_tx(ctx, tx, "column", c.Id, 0, 0, 0)
if err != nil {
return columns, err
}
@@ -63,8 +63,7 @@ func Get(entity string, entityId uuid.UUID) ([]types.Column, error) {
c.Query.RelationId = pgtype.UUID{}
}
- // get captions
- c.Captions, err = caption.Get("column", c.Id, []string{"columnTitle"})
+ c.Captions, err = caption.Get_tx(ctx, tx, "column", c.Id, []string{"columnTitle"})
if err != nil {
return columns, err
}
@@ -73,9 +72,9 @@ func Get(entity string, entityId uuid.UUID) ([]types.Column, error) {
return columns, nil
}
-func Set_tx(tx pgx.Tx, entity string, entityId uuid.UUID, columns []types.Column) error {
+func Set_tx(ctx context.Context, tx pgx.Tx, entity string, entityId uuid.UUID, columns []types.Column) error {
- if !tools.StringInSlice(entity, allowedEntities) {
+ if !slices.Contains(allowedEntities, entity) {
return errors.New("bad entity")
}
@@ -85,7 +84,7 @@ func Set_tx(tx pgx.Tx, entity string, entityId uuid.UUID, columns []types.Column
idsKeep = append(idsKeep, c.Id)
}
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
DELETE FROM app.column
WHERE %s_id = $1
AND id <> ALL($2)
@@ -96,55 +95,57 @@ func Set_tx(tx pgx.Tx, entity string, entityId uuid.UUID, columns []types.Column
// insert new/update existing columns
for position, c := range columns {
- known, err := schema.CheckCreateId_tx(tx, &c.Id, "column", "id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &c.Id, "column", "id")
if err != nil {
return err
}
// fix imports < 3.3: Migrate display option to attribute content use
- c.Display, err = compatible.MigrateDisplayToContentUse_tx(tx, c.AttributeId, c.Display)
+ c.Display, err = compatible.MigrateDisplayToContentUse_tx(ctx, tx, c.AttributeId, c.Display)
if err != nil {
return err
}
+ // fix imports < 3.8: Convert to new styles
+ c = compatible.FixColumnStyles(c)
+
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.column
- SET attribute_id = $1, index = $2, position = $3, batch = $4,
- basis = $5, length = $6, wrap = $7, display = $8,
- group_by = $9, aggregator = $10, distincted = $11,
- sub_query = $12, on_mobile = $13, clipboard = $14
+ SET attribute_id = $1, index = $2, position = $3, batch = $4, basis = $5,
+ length = $6, display = $7, group_by = $8, aggregator = $9, distincted = $10,
+ hidden = $11, on_mobile = $12, sub_query = $13, styles = $14
WHERE id = $15
- `, c.AttributeId, c.Index, position, c.Batch, c.Basis, c.Length,
- c.Wrap, c.Display, c.GroupBy, c.Aggregator, c.Distincted,
- c.SubQuery, c.OnMobile, c.Clipboard, c.Id); err != nil {
+ `, c.AttributeId, c.Index, position, c.Batch, c.Basis, c.Length, c.Display,
+ c.GroupBy, c.Aggregator, c.Distincted, c.Hidden, c.OnMobile, c.SubQuery,
+ c.Styles, c.Id); err != nil {
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
INSERT INTO app.column (
- id, %s_id, attribute_id, index, position, batch, basis,
- length, wrap, display, group_by, aggregator, distincted,
- on_mobile, sub_query, clipboard
+ id, %s_id, attribute_id, index, position, batch, basis, length,
+ display, group_by, aggregator, distincted, hidden, on_mobile,
+ sub_query, styles
)
VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16)
`, entity), c.Id, entityId, c.AttributeId, c.Index, position, c.Batch,
- c.Basis, c.Length, c.Wrap, c.Display, c.GroupBy, c.Aggregator,
- c.Distincted, c.OnMobile, c.SubQuery, c.Clipboard); err != nil {
+ c.Basis, c.Length, c.Display, c.GroupBy, c.Aggregator, c.Distincted,
+ c.Hidden, c.OnMobile, c.SubQuery, c.Styles); err != nil {
return err
}
}
if c.SubQuery {
- if err := query.Set_tx(tx, "column", c.Id, 0, 0, c.Query); err != nil {
+ if err := query.Set_tx(ctx, tx, "column", c.Id, 0, 0, 0, c.Query); err != nil {
return err
}
}
// set captions
- if err := caption.Set_tx(tx, c.Id, c.Captions); err != nil {
+ if err := caption.Set_tx(ctx, tx, c.Id, c.Captions); err != nil {
return err
}
}
diff --git a/schema/compatible/compatible.go b/schema/compatible/compatible.go
new file mode 100644
index 00000000..4ceeab82
--- /dev/null
+++ b/schema/compatible/compatible.go
@@ -0,0 +1,297 @@
+/* central package for fixing issues with modules from older versions */
+package compatible
+
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+ "r3/types"
+ "slices"
+ "strings"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+ "github.com/jackc/pgx/v5/pgtype"
+)
+
+// < 3.10
+// fix missing menu tab (at least 1 must exist)
+func FixMissingMenuTab(moduleId uuid.UUID, mts []types.MenuTab, menus []types.Menu) ([]types.MenuTab, error) {
+ if len(mts) == 0 {
+ menuTabId, err := uuid.NewV4()
+ if err != nil {
+ return mts, err
+ }
+
+ mts = append(mts, types.MenuTab{
+ Id: menuTabId,
+ ModuleId: moduleId,
+ IconId: pgtype.UUID{},
+ Menus: menus,
+ })
+ }
+ return mts, nil
+}
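+
+// FixCollectionConsumerFlags converts the pre-3.10 multi-value / no-display-empty
+// booleans into entries of the new flags list.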
+func FixCollectionConsumerFlags(c types.CollectionConsumer) types.CollectionConsumer {
+ if c.Flags == nil {
+ c.Flags = make([]string, 0)
+ }
+ if c.MultiValue {
+ c.Flags = append(c.Flags, "multiValue")
+ }
+ if c.NoDisplayEmpty {
+ c.Flags = append(c.Flags, "noDisplayEmpty")
+ }
+ return c
+}
+func FixNilFieldFlags(flags []string) []string {
+ if flags == nil {
+ return make([]string, 0)
+ }
+ return flags
+}
+
+// < 3.9
+// fix missing volatility setting
+func FixMissingVolatility(fnc types.PgFunction) types.PgFunction {
+ if fnc.Volatility == "" {
+ fnc.Volatility = "VOLATILE"
+ }
+ return fnc
+}
+
+// < 3.8
+// migrate column styles
+func FixPresetNull(value pgtype.Text) interface{} {
+ if !value.Valid || value.String == "" {
+ return nil
+ }
+ return value
+}
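+
+// FixColumnStyles converts pre-3.8 column options: a 'hidden' display becomes the hidden flag,
+// while the vertical batch, clipboard and wrap booleans become entries of the new styles list.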
+func FixColumnStyles(column types.Column) types.Column {
+ if column.Display == "hidden" {
+ column.Hidden = true
+ column.Display = "default"
+ }
+ if column.BatchVertical {
+ column.Styles = append(column.Styles, "vertical")
+ }
+ if column.Clipboard {
+ column.Styles = append(column.Styles, "clipboard")
+ }
+ if column.Wrap {
+ column.Styles = append(column.Styles, "wrap")
+ }
+ return column
+}
+
+// < 3.7
+// migrate PG triggers from relations to module
+func FixPgTriggerLocation(triggers []types.PgTrigger, relations []types.Relation) []types.PgTrigger {
+ for _, relation := range relations {
+ for _, trg := range relation.Triggers {
+ trg.ModuleId = relation.ModuleId
+ triggers = append(triggers, trg)
+ }
+ }
+ return triggers
+}
+
+// < 3.5
+// migrate relation index apply
+func FixOpenFormRelationIndexApply(openForm types.OpenForm) types.OpenForm {
+ if openForm.RelationIndex != -1 {
+ openForm.RelationIndexApply = openForm.RelationIndex
+ }
+ return openForm
+}
+func FixOpenFormRelationIndexApplyDefault(openForm types.OpenForm) types.OpenForm {
+ openForm.RelationIndex = -1
+ return openForm
+}
+
+// migrate default calendar view if not set
+func FixCalendarDefaultView(days int) int {
+ if days == 0 {
+ return 42
+ }
+ return days
+}
+
+// < 3.4
+// migrate open form pop-up type
+func FixOpenFormPopUpType(openForm types.OpenForm) types.OpenForm {
+ if openForm.PopUp && !openForm.PopUpType.Valid {
+ openForm.PopUpType.String = "float"
+ openForm.PopUpType.Valid = true
+ }
+ return openForm
+}
+
+// < 3.4
+// migrate PG index method
+func FixPgIndexMethod(method string) string {
+ if method == "" {
+ return "BTREE"
+ }
+ return method
+}
+
+// < 3.3
+// migrate attribute content use
+func FixAttributeContentUse(contentUse string) string {
+ if contentUse == "" {
+ return "default"
+ }
+ return contentUse
+}
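+
+// MigrateDisplayToContentUse_tx moves pre-3.3 display values (textarea, richtext, date,
+// datetime, time, color) from the column to the attribute's content_use setting and
+// resets the column display to its default.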
+func MigrateDisplayToContentUse_tx(ctx context.Context, tx pgx.Tx, attributeId uuid.UUID, display string) (string, error) {
+
+ if slices.Contains([]string{"textarea", "richtext", "date", "datetime", "time", "color"}, display) {
+ _, err := tx.Exec(ctx, `
+ UPDATE app.attribute
+ SET content_use = $1
+ WHERE id = $2
+ `, display, attributeId)
+
+ return "default", err
+ }
+ return display, nil
+}
+
+// < 3.2
+// migrate old module/form help pages to help articles
+func FixCaptions_tx(ctx context.Context, tx pgx.Tx, entity string, entityId uuid.UUID, captionMap types.CaptionMap) (types.CaptionMap, error) {
+
+ var articleId uuid.UUID
+ var moduleId uuid.UUID
+ var name string
+
+ switch entity {
+ case "module":
+ moduleId = entityId
+ name = "Migrated from application help"
+ case "form":
+ if err := tx.QueryRow(ctx, `
+ SELECT module_id, CONCAT('Migrated from form help of ', name)
+ FROM app.form
+ WHERE id = $1
+ `, entityId).Scan(&moduleId, &name); err != nil {
+ return captionMap, err
+ }
+ default:
+ return captionMap, fmt.Errorf("invalid entity for help->article migration '%s'", entity)
+ }
+
+ for content, langMap := range captionMap {
+ if content != "moduleHelp" && content != "formHelp" {
+ continue
+ }
+
+ // delete outdated caption entry
+ delete(captionMap, content)
+
+ // check whether there is anything to migrate
+ anyValue := false
+ for _, value := range langMap {
+ if value != "" {
+ anyValue = true
+ break
+ }
+ }
+ if !anyValue {
+ continue
+ }
+
+ // check edge case: installed < 3.2 module gets another < 3.2 update
+ // this would cause duplicates of migration articles
+ // solution: we do not touch migrated articles until a version >= 3.2 is released,
+ // in which module authors can handle/update the migrated articles
+ exists := false
+ if err := tx.QueryRow(ctx, `
+ SELECT EXISTS (
+ SELECT id
+ FROM app.article
+ WHERE module_id = $1
+ AND name = $2
+ )
+ `, moduleId, name).Scan(&exists); err != nil {
+ return captionMap, err
+ }
+ if exists {
+ continue
+ }
+
+ if err := tx.QueryRow(ctx, `
+ INSERT INTO app.article (id, module_id, name)
+ VALUES (gen_random_uuid(), $1, $2)
+ RETURNING id
+ `, moduleId, name).Scan(&articleId); err != nil {
+ return captionMap, err
+ }
+
+ for langCode, value := range langMap {
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO app.caption (article_id, content, language_code, value)
+ VALUES ($1, 'articleBody', $2, $3)
+ `, articleId, langCode, value); err != nil {
+ return captionMap, err
+ }
+ }
+
+ switch content {
+ case "moduleHelp":
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO app.article_help (article_id, module_id, position)
+ VALUES ($1, $2, 0)
+ `, articleId, moduleId); err != nil {
+ return captionMap, err
+ }
+ case "formHelp":
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO app.article_form (article_id, form_id, position)
+ VALUES ($1, $2, 0)
+ `, articleId, entityId); err != nil {
+ return captionMap, err
+ }
+ }
+ }
+ return captionMap, nil
+}
+
+// < 3.1
+// fix legacy file attribute format
+func FixLegacyFileAttributeValue(jsonValue []byte) []types.DataGetValueFile {
+
+ // legacy format
+ var files struct {
+ Files []types.DataGetValueFile `json:"files"`
+ }
+ if err := json.Unmarshal(jsonValue, &files); err == nil && len(files.Files) != 0 {
+ return files.Files
+ }
+
+ // current format
+ var filesNew []types.DataGetValueFile
+ json.Unmarshal(jsonValue, &filesNew)
+ return filesNew
+}
+
+// < 3.0
+// fix missing role content
+func FixMissingRoleContent(role types.Role) types.Role {
+ if role.Content == "" {
+ if role.Name == "everyone" {
+ role.Content = "everyone"
+ } else if strings.Contains(strings.ToLower(role.Name), "admin") {
+ role.Content = "admin"
+ } else if strings.Contains(strings.ToLower(role.Name), "data") {
+ role.Content = "other"
+ } else if strings.Contains(strings.ToLower(role.Name), "csv") {
+ role.Content = "other"
+ } else {
+ role.Content = "user"
+ }
+ }
+ return role
+}
diff --git a/schema/field/field.go b/schema/field/field.go
index bd1976ed..563b9940 100644
--- a/schema/field/field.go
+++ b/schema/field/field.go
@@ -1,21 +1,21 @@
package field
import (
+ "context"
"database/sql"
"encoding/json"
"errors"
"fmt"
- "r3/compatible"
- "r3/db"
"r3/schema"
"r3/schema/caption"
"r3/schema/collection/consumer"
"r3/schema/column"
+ "r3/schema/compatible"
"r3/schema/openForm"
"r3/schema/query"
"r3/schema/tab"
- "r3/tools"
"r3/types"
+ "slices"
"sort"
"github.com/gofrs/uuid"
@@ -23,18 +23,17 @@ import (
"github.com/jackc/pgx/v5/pgtype"
)
-func Del_tx(tx pgx.Tx, id uuid.UUID) error {
- _, err := tx.Exec(db.Ctx, `DELETE FROM app.field WHERE id = $1`, id)
+func Del_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
+ _, err := tx.Exec(ctx, `DELETE FROM app.field WHERE id = $1`, id)
return err
}
-func Get(formId uuid.UUID) ([]interface{}, error) {
-
+func Get_tx(ctx context.Context, tx pgx.Tx, formId uuid.UUID) ([]interface{}, error) {
fields := make([]interface{}, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT f.id, f.parent_id, f.tab_id, f.icon_id, f.content, f.state,
- f.on_mobile, a.content,
+ f.flags, f.on_mobile, a.content,
-- button field
fb.js_function_id,
@@ -43,6 +42,7 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
fn.attribute_id_date0, fn.attribute_id_date1, fn.attribute_id_color,
fn.index_date0, fn.index_date1, fn.index_color, fn.ics, fn.gantt,
fn.gantt_steps, fn.gantt_steps_toggle, fn.date_range0, fn.date_range1,
+ fn.days, fn.days_toggle,
-- chart field
fa.chart_option,
@@ -52,7 +52,7 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
fc.wrap, fc.grow, fc.shrink, fc.basis, fc.per_min, fc.per_max,
-- header field
- fh.size,
+ fh.richtext, fh.size,
-- data field
fd.attribute_id, fd.attribute_id_alt, fd.index, fd.display, fd.min,
@@ -66,9 +66,16 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
WHERE field_id = fr.field_id
) AS preset_ids,
+ -- kanban field
+ fk.relation_index_data, fk.relation_index_axis_x,
+ fk.relation_index_axis_y, fk.attribute_id_sort,
+
-- list field
fl.auto_renew, fl.csv_export, fl.csv_import, fl.layout,
- fl.filter_quick, fl.result_limit
+ fl.filter_quick, fl.result_limit,
+
+ -- variable field
+ fv.variable_id, fv.js_function_id, fv.clipboard
FROM app.field AS f
LEFT JOIN app.field_button AS fb ON fb.field_id = f.id
@@ -78,7 +85,9 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
LEFT JOIN app.field_data AS fd ON fd.field_id = f.id
LEFT JOIN app.field_data_relationship AS fr ON fr.field_id = f.id
LEFT JOIN app.field_header AS fh ON fh.field_id = f.id
+ LEFT JOIN app.field_kanban AS fk ON fk.field_id = f.id
LEFT JOIN app.field_list AS fl ON fl.field_id = f.id
+ LEFT JOIN app.field_variable AS fv ON fv.field_id = f.id
LEFT JOIN app.attribute AS a ON a.id = fd.attribute_id
WHERE f.form_id = $1
ORDER BY f.position ASC
@@ -96,9 +105,11 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
posDataLookup := make([]int, 0)
posDataRelLookup := make([]int, 0)
posHeaderLookup := make([]int, 0)
+ posKanbanLookup := make([]int, 0)
posListLookup := make([]int, 0)
posParentLookup := make([]int, 0)
posTabsLookup := make([]int, 0)
+ posVariableLookup := make([]int, 0)
posMapParentId := make(map[int]uuid.UUID)
posMapTabId := make(map[int]uuid.UUID)
@@ -111,28 +122,35 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
var alignItems, alignContent, chartOption, def, direction, display,
ganttSteps, justifyContent, layout, regexCheck pgtype.Text
- var autoSelect, grow, shrink, basis, perMin, perMax, index, indexDate0,
- indexDate1, size, resultLimit pgtype.Int2
+ var autoSelect, days, grow, shrink, basis, perMin, perMax, index,
+ indexDate0, indexDate1, size, relationIndexKanbanData,
+ relationIndexKanbanAxisX, relationIndexKanbanAxisY,
+ resultLimit pgtype.Int2
var autoRenew, dateRange0, dateRange1, indexColor, min, max pgtype.Int4
var attributeId, attributeIdAlt, attributeIdNm, attributeIdDate0,
- attributeIdDate1, attributeIdColor, fieldParentId, iconId,
- jsFunctionIdButton, jsFunctionIdData, tabId pgtype.UUID
- var category, clipboard, csvExport, csvImport, filterQuick,
- filterQuickList, gantt, ganttStepsToggle, ics, outsideIn,
- wrap pgtype.Bool
+ attributeIdDate1, attributeIdColor, attributeIdKanbanSort,
+ fieldParentId, iconId, jsFunctionIdButton, jsFunctionIdData,
+ jsFunctionIdVariable, tabId, variableId pgtype.UUID
+ var category, clipboard, clipboardVariable, csvExport, csvImport,
+ daysToggle, filterQuick, filterQuickList, gantt, ganttStepsToggle,
+ ics, outsideIn, richtext, wrap pgtype.Bool
var defPresetIds []uuid.UUID
+ var flags []string
if err := rows.Scan(&fieldId, &fieldParentId, &tabId, &iconId, &content,
- &state, &onMobile, &atrContent, &jsFunctionIdButton, &attributeIdDate0,
- &attributeIdDate1, &attributeIdColor, &indexDate0, &indexDate1,
- &indexColor, &ics, &gantt, &ganttSteps, &ganttStepsToggle,
- &dateRange0, &dateRange1, &chartOption, &direction, &justifyContent,
- &alignItems, &alignContent, &wrap, &grow, &shrink, &basis, &perMin,
- &perMax, &size, &attributeId, &attributeIdAlt, &index, &display,
- &min, &max, &def, ®exCheck, &jsFunctionIdData, &clipboard,
- &attributeIdNm, &category, &filterQuick, &outsideIn, &autoSelect,
- &defPresetIds, &autoRenew, &csvExport, &csvImport, &layout,
- &filterQuickList, &resultLimit); err != nil {
+ &state, &flags, &onMobile, &atrContent, &jsFunctionIdButton,
+ &attributeIdDate0, &attributeIdDate1, &attributeIdColor, &indexDate0,
+ &indexDate1, &indexColor, &ics, &gantt, &ganttSteps, &ganttStepsToggle,
+ &dateRange0, &dateRange1, &days, &daysToggle, &chartOption,
+ &direction, &justifyContent, &alignItems, &alignContent, &wrap,
+ &grow, &shrink, &basis, &perMin, &perMax, &richtext, &size,
+ &attributeId, &attributeIdAlt, &index, &display, &min, &max, &def,
+ ®exCheck, &jsFunctionIdData, &clipboard, &attributeIdNm,
+ &category, &filterQuick, &outsideIn, &autoSelect, &defPresetIds,
+ &relationIndexKanbanData, &relationIndexKanbanAxisX,
+ &relationIndexKanbanAxisY, &attributeIdKanbanSort, &autoRenew,
+ &csvExport, &csvImport, &layout, &filterQuickList, &resultLimit,
+ &variableId, &jsFunctionIdVariable, &clipboardVariable); err != nil {
rows.Close()
return fields, err
@@ -150,13 +168,10 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
IconId: iconId,
Content: content,
State: state,
+ Flags: flags,
OnMobile: onMobile,
JsFunctionId: jsFunctionIdButton,
OpenForm: types.OpenForm{},
-
- // legacy
- FormIdOpen: pgtype.UUID{},
- AttributeIdRecord: pgtype.UUID{},
})
posButtonLookup = append(posButtonLookup, pos)
case "calendar":
@@ -166,6 +181,7 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
IconId: iconId,
Content: content,
State: state,
+ Flags: flags,
OnMobile: onMobile,
AttributeIdDate0: attributeIdDate0.Bytes,
AttributeIdDate1: attributeIdDate1.Bytes,
@@ -179,13 +195,11 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
GanttStepsToggle: ganttStepsToggle.Bool,
DateRange0: int64(dateRange0.Int32),
DateRange1: int64(dateRange1.Int32),
+ Days: int(days.Int16),
+ DaysToggle: daysToggle.Bool,
Columns: []types.Column{},
Query: types.Query{},
OpenForm: types.OpenForm{},
-
- // legacy
- FormIdOpen: pgtype.UUID{},
- AttributeIdRecord: pgtype.UUID{},
})
posCalendarLookup = append(posCalendarLookup, pos)
case "chart":
@@ -195,10 +209,12 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
IconId: iconId,
Content: content,
State: state,
+ Flags: flags,
OnMobile: onMobile,
ChartOption: chartOption.String,
Columns: []types.Column{},
Query: types.Query{},
+ Captions: types.CaptionMap{},
})
posChartLookup = append(posChartLookup, pos)
case "container":
@@ -208,6 +224,7 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
IconId: iconId,
Content: content,
State: state,
+ Flags: flags,
OnMobile: onMobile,
Direction: direction.String,
JustifyContent: justifyContent.String,
@@ -231,6 +248,7 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
IconId: iconId,
Content: content,
State: state,
+ Flags: flags,
OnMobile: onMobile,
Clipboard: clipboard.Bool,
AttributeId: attributeId.Bytes,
@@ -254,10 +272,8 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
Captions: types.CaptionMap{},
// legacy
- FormIdOpen: pgtype.UUID{},
- AttributeIdRecord: pgtype.UUID{},
- CollectionIdDef: pgtype.UUID{},
- ColumnIdDef: pgtype.UUID{},
+ CollectionIdDef: pgtype.UUID{},
+ ColumnIdDef: pgtype.UUID{},
})
posDataRelLookup = append(posDataRelLookup, pos)
} else {
@@ -267,6 +283,7 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
IconId: iconId,
Content: content,
State: state,
+ Flags: flags,
OnMobile: onMobile,
Clipboard: clipboard.Bool,
AttributeId: attributeId.Bytes,
@@ -294,35 +311,56 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
IconId: iconId,
Content: content,
State: state,
+ Flags: flags,
OnMobile: onMobile,
+ Richtext: richtext.Bool,
Size: int(size.Int16),
Captions: types.CaptionMap{},
})
posHeaderLookup = append(posHeaderLookup, pos)
+ case "kanban":
+ fields = append(fields, types.FieldKanban{
+ Id: fieldId,
+ TabId: tabId,
+ IconId: iconId,
+ Content: content,
+ State: state,
+ Flags: flags,
+ OnMobile: onMobile,
+ RelationIndexData: int(relationIndexKanbanData.Int16),
+ RelationIndexAxisX: int(relationIndexKanbanAxisX.Int16),
+ RelationIndexAxisY: relationIndexKanbanAxisY,
+ AttributeIdSort: attributeIdKanbanSort,
+ OpenForm: types.OpenForm{},
+ Columns: []types.Column{},
+ Query: types.Query{},
+ })
+ posKanbanLookup = append(posKanbanLookup, pos)
+
case "list":
fields = append(fields, types.FieldList{
- Id: fieldId,
- TabId: tabId,
- IconId: iconId,
- Content: content,
- State: state,
- OnMobile: onMobile,
- Columns: []types.Column{},
- AutoRenew: autoRenew,
- CsvExport: csvExport.Bool,
- CsvImport: csvImport.Bool,
- Layout: layout.String,
- FilterQuick: filterQuickList.Bool,
- Query: types.Query{},
- OpenForm: types.OpenForm{},
- ResultLimit: int(resultLimit.Int16),
-
- // legacy
- FormIdOpen: pgtype.UUID{},
- AttributeIdRecord: pgtype.UUID{},
+ Id: fieldId,
+ TabId: tabId,
+ IconId: iconId,
+ Content: content,
+ State: state,
+ Flags: flags,
+ OnMobile: onMobile,
+ Columns: []types.Column{},
+ AutoRenew: autoRenew,
+ CsvExport: csvExport.Bool,
+ CsvImport: csvImport.Bool,
+ Layout: layout.String,
+ FilterQuick: filterQuickList.Bool,
+ Query: types.Query{},
+ Captions: types.CaptionMap{},
+ OpenForm: types.OpenForm{},
+ OpenFormBulk: types.OpenForm{},
+ ResultLimit: int(resultLimit.Int16),
})
posListLookup = append(posListLookup, pos)
+
case "tabs":
fields = append(fields, types.FieldTabs{
Id: fieldId,
@@ -330,11 +368,28 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
IconId: iconId,
Content: content,
State: state,
+ Flags: flags,
OnMobile: onMobile,
+ Captions: types.CaptionMap{},
Tabs: []types.Tab{},
})
posTabsLookup = append(posTabsLookup, pos)
posParentLookup = append(posParentLookup, pos)
+
+ case "variable":
+ fields = append(fields, types.FieldVariable{
+ Id: fieldId,
+ VariableId: variableId,
+ JsFunctionId: jsFunctionIdVariable,
+ IconId: iconId,
+ Content: content,
+ State: state,
+ Flags: flags,
+ OnMobile: onMobile,
+ Clipboard: clipboardVariable.Bool,
+ Captions: types.CaptionMap{},
+ })
+ posVariableLookup = append(posVariableLookup, pos)
}
pos++
}
@@ -344,11 +399,11 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
for _, pos := range posButtonLookup {
var field = fields[pos].(types.FieldButton)
- field.OpenForm, err = openForm.Get("field", field.Id)
+ field.OpenForm, err = openForm.Get_tx(ctx, tx, "field", field.Id, pgtype.Text{})
if err != nil {
return fields, err
}
- field.Captions, err = caption.Get("field", field.Id, []string{"fieldTitle"})
+ field.Captions, err = caption.Get_tx(ctx, tx, "field", field.Id, []string{"fieldTitle"})
if err != nil {
return fields, err
}
@@ -359,19 +414,19 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
for _, pos := range posCalendarLookup {
var field = fields[pos].(types.FieldCalendar)
- field.OpenForm, err = openForm.Get("field", field.Id)
+ field.OpenForm, err = openForm.Get_tx(ctx, tx, "field", field.Id, pgtype.Text{})
if err != nil {
return fields, err
}
- field.Query, err = query.Get("field", field.Id, 0, 0)
+ field.Query, err = query.Get_tx(ctx, tx, "field", field.Id, 0, 0, 0)
if err != nil {
return fields, err
}
- field.Columns, err = column.Get("field", field.Id)
+ field.Columns, err = column.Get_tx(ctx, tx, "field", field.Id)
if err != nil {
return fields, err
}
- field.Collections, err = consumer.Get("field", field.Id, "fieldFilterSelector")
+ field.Collections, err = consumer.Get_tx(ctx, tx, "field", field.Id, "fieldFilterSelector")
if err != nil {
return fields, err
}
@@ -382,11 +437,15 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
for _, pos := range posChartLookup {
var field = fields[pos].(types.FieldChart)
- field.Query, err = query.Get("field", field.Id, 0, 0)
+ field.Query, err = query.Get_tx(ctx, tx, "field", field.Id, 0, 0, 0)
+ if err != nil {
+ return fields, err
+ }
+ field.Columns, err = column.Get_tx(ctx, tx, "field", field.Id)
if err != nil {
return fields, err
}
- field.Columns, err = column.Get("field", field.Id)
+ field.Captions, err = caption.Get_tx(ctx, tx, "field", field.Id, []string{"fieldTitle"})
if err != nil {
return fields, err
}
@@ -397,11 +456,11 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
for _, pos := range posDataLookup {
var field = fields[pos].(types.FieldData)
- field.DefCollection, err = consumer.GetOne("field", field.Id, "fieldDataDefault")
+ field.DefCollection, err = consumer.GetOne_tx(ctx, tx, "field", field.Id, "fieldDataDefault")
if err != nil {
return fields, err
}
- field.Captions, err = caption.Get("field", field.Id, []string{"fieldTitle", "fieldHelp"})
+ field.Captions, err = caption.Get_tx(ctx, tx, "field", field.Id, []string{"fieldTitle", "fieldHelp"})
if err != nil {
return fields, err
}
@@ -412,23 +471,23 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
for _, pos := range posDataRelLookup {
var field = fields[pos].(types.FieldDataRelationship)
- field.OpenForm, err = openForm.Get("field", field.Id)
+ field.OpenForm, err = openForm.Get_tx(ctx, tx, "field", field.Id, pgtype.Text{})
if err != nil {
return fields, err
}
- field.Query, err = query.Get("field", field.Id, 0, 0)
+ field.Query, err = query.Get_tx(ctx, tx, "field", field.Id, 0, 0, 0)
if err != nil {
return fields, err
}
- field.Columns, err = column.Get("field", field.Id)
+ field.Columns, err = column.Get_tx(ctx, tx, "field", field.Id)
if err != nil {
return fields, err
}
- field.DefCollection, err = consumer.GetOne("field", field.Id, "fieldDataDefault")
+ field.DefCollection, err = consumer.GetOne_tx(ctx, tx, "field", field.Id, "fieldDataDefault")
if err != nil {
return fields, err
}
- field.Captions, err = caption.Get("field", field.Id, []string{"fieldTitle", "fieldHelp"})
+ field.Captions, err = caption.Get_tx(ctx, tx, "field", field.Id, []string{"fieldTitle", "fieldHelp"})
if err != nil {
return fields, err
}
@@ -439,7 +498,30 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
for _, pos := range posHeaderLookup {
var field = fields[pos].(types.FieldHeader)
- field.Captions, err = caption.Get("field", field.Id, []string{"fieldTitle"})
+ field.Captions, err = caption.Get_tx(ctx, tx, "field", field.Id, []string{"fieldTitle"})
+ if err != nil {
+ return fields, err
+ }
+ fields[pos] = field
+ }
+
+ // lookup kanban fields: open form, query, columns, consumed collections
+ for _, pos := range posKanbanLookup {
+ var field = fields[pos].(types.FieldKanban)
+
+ field.OpenForm, err = openForm.Get_tx(ctx, tx, "field", field.Id, pgtype.Text{})
+ if err != nil {
+ return fields, err
+ }
+ field.Query, err = query.Get_tx(ctx, tx, "field", field.Id, 0, 0, 0)
+ if err != nil {
+ return fields, err
+ }
+ field.Columns, err = column.Get_tx(ctx, tx, "field", field.Id)
+ if err != nil {
+ return fields, err
+ }
+ field.Collections, err = consumer.Get_tx(ctx, tx, "field", field.Id, "fieldFilterSelector")
if err != nil {
return fields, err
}
@@ -450,19 +532,27 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
for _, pos := range posListLookup {
var field = fields[pos].(types.FieldList)
- field.OpenForm, err = openForm.Get("field", field.Id)
+ field.OpenForm, err = openForm.Get_tx(ctx, tx, "field", field.Id, pgtype.Text{})
+ if err != nil {
+ return fields, err
+ }
+ field.OpenFormBulk, err = openForm.Get_tx(ctx, tx, "field", field.Id, pgtype.Text{String: "bulk", Valid: true})
if err != nil {
return fields, err
}
- field.Query, err = query.Get("field", field.Id, 0, 0)
+ field.Captions, err = caption.Get_tx(ctx, tx, "field", field.Id, []string{"fieldTitle"})
if err != nil {
return fields, err
}
- field.Columns, err = column.Get("field", field.Id)
+ field.Query, err = query.Get_tx(ctx, tx, "field", field.Id, 0, 0, 0)
if err != nil {
return fields, err
}
- field.Collections, err = consumer.Get("field", field.Id, "fieldFilterSelector")
+ field.Columns, err = column.Get_tx(ctx, tx, "field", field.Id)
+ if err != nil {
+ return fields, err
+ }
+ field.Collections, err = consumer.Get_tx(ctx, tx, "field", field.Id, "fieldFilterSelector")
if err != nil {
return fields, err
}
@@ -472,14 +562,37 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
// lookup tabs fields: get tabs
for _, pos := range posTabsLookup {
var field = fields[pos].(types.FieldTabs)
- field.Tabs, err = tab.Get("field", field.Id)
+ field.Captions, err = caption.Get_tx(ctx, tx, "field", field.Id, []string{"fieldTitle"})
+ if err != nil {
+ return fields, err
+ }
+ field.Tabs, err = tab.Get_tx(ctx, tx, "field", field.Id)
+ if err != nil {
+ return fields, err
+ }
+ fields[pos] = field
+ }
+
+	// lookup variable fields: query, columns, captions
+ for _, pos := range posVariableLookup {
+ var field = fields[pos].(types.FieldVariable)
+
+ field.Query, err = query.Get_tx(ctx, tx, "field", field.Id, 0, 0, 0)
+ if err != nil {
+ return fields, err
+ }
+ field.Columns, err = column.Get_tx(ctx, tx, "field", field.Id)
+ if err != nil {
+ return fields, err
+ }
+ field.Captions, err = caption.Get_tx(ctx, tx, "field", field.Id, []string{"fieldTitle", "fieldHelp"})
if err != nil {
return fields, err
}
fields[pos] = field
}
- // get sorted keys for field positions with parent Id
+ // get sorted keys for field positions with parent ID
orderedPos := make([]int, 0, len(posMapParentId))
for k := range posMapParentId {
orderedPos = append(orderedPos, k)
@@ -497,13 +610,13 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
}
// no parent field
- if !tools.IntInSlice(pos, posParentLookup) {
+ if !slices.Contains(posParentLookup, pos) {
children = append(children, fields[pos])
continue
}
// tabs field
- if tools.IntInSlice(pos, posTabsLookup) {
+ if slices.Contains(posTabsLookup, pos) {
field := fields[pos].(types.FieldTabs)
for i, tab := range field.Tabs {
@@ -525,45 +638,45 @@ func Get(formId uuid.UUID) ([]interface{}, error) {
// recursively resolve all fields with their children
return getChildren(uuid.Nil, uuid.Nil), nil
}
-func GetCalendar(fieldId uuid.UUID) (types.FieldCalendar, error) {
+func GetCalendar_tx(ctx context.Context, tx pgx.Tx, fieldId uuid.UUID) (types.FieldCalendar, error) {
var f types.FieldCalendar
f.Id = fieldId
- err := db.Pool.QueryRow(db.Ctx, `
+ err := tx.QueryRow(ctx, `
SELECT attribute_id_date0, attribute_id_date1, index_date0, index_date1,
- date_range0, date_range1
+ date_range0, date_range1, days, days_toggle
FROM app.field_calendar
WHERE ics
AND gantt = FALSE
AND field_id = $1
`, fieldId).Scan(&f.AttributeIdDate0, &f.AttributeIdDate1, &f.IndexDate0,
- &f.IndexDate1, &f.DateRange0, &f.DateRange1)
+ &f.IndexDate1, &f.DateRange0, &f.DateRange1, &f.Days, &f.DaysToggle)
if err != nil {
return f, err
}
- f.OpenForm, err = openForm.Get("field", f.Id)
+ f.OpenForm, err = openForm.Get_tx(ctx, tx, "field", f.Id, pgtype.Text{})
if err != nil {
return f, err
}
- f.Query, err = query.Get("field", f.Id, 0, 0)
+ f.Query, err = query.Get_tx(ctx, tx, "field", f.Id, 0, 0, 0)
if err != nil {
return f, err
}
- f.Columns, err = column.Get("field", f.Id)
+ f.Columns, err = column.Get_tx(ctx, tx, "field", f.Id)
if err != nil {
return f, err
}
- f.Collections, err = consumer.Get("field", f.Id, "fieldFilterSelector")
+ f.Collections, err = consumer.Get_tx(ctx, tx, "field", f.Id, "fieldFilterSelector")
if err != nil {
return f, err
}
return f, nil
}
-func Set_tx(tx pgx.Tx, formId uuid.UUID, parentId pgtype.UUID, tabId pgtype.UUID,
+func Set_tx(ctx context.Context, tx pgx.Tx, formId uuid.UUID, parentId pgtype.UUID, tabId pgtype.UUID,
fields []interface{}, fieldIdMapQuery map[uuid.UUID]types.Query) error {
for pos, fieldIf := range fields {
@@ -577,9 +690,11 @@ func Set_tx(tx pgx.Tx, formId uuid.UUID, parentId pgtype.UUID, tabId pgtype.UUID
if err := json.Unmarshal(fieldJson, &f); err != nil {
return err
}
- fieldId, err := setGeneric_tx(tx, formId, f.Id, parentId, tabId,
- f.IconId, f.Content, f.State, f.OnMobile, pos)
+ // fix imports < 3.10: New field flags
+ f.Flags = compatible.FixNilFieldFlags(f.Flags)
+
+ fieldId, err := setGeneric_tx(ctx, tx, formId, parentId, tabId, f, pos)
if err != nil {
return err
}
@@ -590,12 +705,10 @@ func Set_tx(tx pgx.Tx, formId uuid.UUID, parentId pgtype.UUID, tabId pgtype.UUID
if err := json.Unmarshal(fieldJson, &f); err != nil {
return err
}
- if err := setButton_tx(tx, fieldId, f.AttributeIdRecord,
- f.FormIdOpen, f.OpenForm, f.JsFunctionId); err != nil {
-
+ if err := setButton_tx(ctx, tx, fieldId, f); err != nil {
return err
}
- if err := caption.Set_tx(tx, fieldId, f.Captions); err != nil {
+ if err := caption.Set_tx(ctx, tx, fieldId, f.Captions); err != nil {
return err
}
case "calendar":
@@ -603,12 +716,7 @@ func Set_tx(tx pgx.Tx, formId uuid.UUID, parentId pgtype.UUID, tabId pgtype.UUID
if err := json.Unmarshal(fieldJson, &f); err != nil {
return err
}
- if err := setCalendar_tx(tx, fieldId, f.FormIdOpen,
- f.AttributeIdDate0, f.AttributeIdDate1, f.AttributeIdColor,
- f.AttributeIdRecord, f.IndexDate0, f.IndexDate1, f.IndexColor,
- f.Gantt, f.GanttSteps, f.GanttStepsToggle, f.Ics, f.DateRange0,
- f.DateRange1, f.Columns, f.Collections, f.OpenForm); err != nil {
-
+ if err := setCalendar_tx(ctx, tx, fieldId, f); err != nil {
return err
}
fieldIdMapQuery[fieldId] = f.Query
@@ -618,7 +726,10 @@ func Set_tx(tx pgx.Tx, formId uuid.UUID, parentId pgtype.UUID, tabId pgtype.UUID
if err := json.Unmarshal(fieldJson, &f); err != nil {
return err
}
- if err := setChart_tx(tx, fieldId, f.ChartOption, f.Columns); err != nil {
+ if err := setChart_tx(ctx, tx, fieldId, f); err != nil {
+ return err
+ }
+ if err := caption.Set_tx(ctx, tx, fieldId, f.Captions); err != nil {
return err
}
fieldIdMapQuery[fieldId] = f.Query
@@ -628,15 +739,12 @@ func Set_tx(tx pgx.Tx, formId uuid.UUID, parentId pgtype.UUID, tabId pgtype.UUID
if err := json.Unmarshal(fieldJson, &f); err != nil {
return err
}
- if err := setContainer_tx(tx, fieldId, f.Direction, f.JustifyContent,
- f.AlignItems, f.AlignContent, f.Wrap, f.Grow, f.Shrink, f.Basis,
- f.PerMin, f.PerMax); err != nil {
-
+ if err := setContainer_tx(ctx, tx, fieldId, f); err != nil {
return err
}
// update container children
- if err := Set_tx(tx, formId, pgtype.UUID{Bytes: fieldId, Valid: true},
+ if err := Set_tx(ctx, tx, formId, pgtype.UUID{Bytes: fieldId, Valid: true},
pgtype.UUID{}, f.Fields, fieldIdMapQuery); err != nil {
return err
@@ -647,13 +755,10 @@ func Set_tx(tx pgx.Tx, formId uuid.UUID, parentId pgtype.UUID, tabId pgtype.UUID
if err := json.Unmarshal(fieldJson, &f); err != nil {
return err
}
- if err := setData_tx(tx, fieldId, f.AttributeId, f.AttributeIdAlt,
- f.Index, f.Def, f.Display, f.Min, f.Max, f.RegexCheck, f.JsFunctionId,
- f.Clipboard, f.DefCollection, f.CollectionIdDef, f.ColumnIdDef); err != nil {
-
+ if err := setData_tx(ctx, tx, fieldId, f); err != nil {
return err
}
- if err := caption.Set_tx(tx, fieldId, f.Captions); err != nil {
+ if err := caption.Set_tx(ctx, tx, fieldId, f.Captions); err != nil {
return err
}
@@ -668,11 +773,7 @@ func Set_tx(tx pgx.Tx, formId uuid.UUID, parentId pgtype.UUID, tabId pgtype.UUID
if err := json.Unmarshal(fieldJson, &f); err != nil {
return err
}
- if err := setDataRelationship_tx(tx, fieldId, f.FormIdOpen,
- f.AttributeIdRecord, f.AttributeIdNm, f.Columns, f.Category,
- f.FilterQuick, f.OutsideIn, f.AutoSelect, f.DefPresetIds,
- f.OpenForm); err != nil {
-
+ if err := setDataRelationship_tx(ctx, tx, fieldId, f); err != nil {
return err
}
fieldIdMapQuery[fieldId] = f.Query
@@ -683,22 +784,32 @@ func Set_tx(tx pgx.Tx, formId uuid.UUID, parentId pgtype.UUID, tabId pgtype.UUID
if err := json.Unmarshal(fieldJson, &f); err != nil {
return err
}
- if err := setHeader_tx(tx, fieldId, f.Size); err != nil {
+ if err := setHeader_tx(ctx, tx, fieldId, f); err != nil {
return err
}
- if err := caption.Set_tx(tx, fieldId, f.Captions); err != nil {
+ if err := caption.Set_tx(ctx, tx, fieldId, f.Captions); err != nil {
return err
}
+ case "kanban":
+ var f types.FieldKanban
+ if err := json.Unmarshal(fieldJson, &f); err != nil {
+ return err
+ }
+ if err := setKanban_tx(ctx, tx, fieldId, f); err != nil {
+ return err
+ }
+ fieldIdMapQuery[fieldId] = f.Query
+
case "list":
var f types.FieldList
if err := json.Unmarshal(fieldJson, &f); err != nil {
return err
}
- if err := setList_tx(tx, fieldId, f.AttributeIdRecord, f.FormIdOpen,
- f.AutoRenew, f.CsvExport, f.CsvImport, f.Layout, f.FilterQuick,
- f.ResultLimit, f.Columns, f.Collections, f.OpenForm); err != nil {
-
+ if err := setList_tx(ctx, tx, fieldId, f); err != nil {
+ return err
+ }
+ if err := caption.Set_tx(ctx, tx, fieldId, f.Captions); err != nil {
return err
}
fieldIdMapQuery[fieldId] = f.Query
@@ -715,12 +826,12 @@ func Set_tx(tx pgx.Tx, formId uuid.UUID, parentId pgtype.UUID, tabId pgtype.UUID
// insert/update/delete tabs
idsKeep := make([]uuid.UUID, 0)
for i, t := range f.Tabs {
- t.Id, err = tab.Set_tx(tx, "field", fieldId, i, t)
+ t.Id, err = tab.Set_tx(ctx, tx, "field", fieldId, i, t)
if err != nil {
return err
}
- if err := Set_tx(tx, formId,
+ if err := Set_tx(ctx, tx, formId,
pgtype.UUID{Bytes: fieldId, Valid: true},
pgtype.UUID{Bytes: t.Id, Valid: true},
t.Fields, fieldIdMapQuery); err != nil {
@@ -729,13 +840,29 @@ func Set_tx(tx pgx.Tx, formId uuid.UUID, parentId pgtype.UUID, tabId pgtype.UUID
}
idsKeep = append(idsKeep, t.Id)
}
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.tab
WHERE field_id = $1
AND id <> ALL($2)
`, f.Id, idsKeep); err != nil {
return err
}
+ if err := caption.Set_tx(ctx, tx, fieldId, f.Captions); err != nil {
+ return err
+ }
+
+ case "variable":
+ var f types.FieldVariable
+ if err := json.Unmarshal(fieldJson, &f); err != nil {
+ return err
+ }
+ if err := setVariable_tx(ctx, tx, fieldId, f); err != nil {
+ return err
+ }
+ if err := caption.Set_tx(ctx, tx, fieldId, f.Captions); err != nil {
+ return err
+ }
+ fieldIdMapQuery[fieldId] = f.Query
default:
return errors.New("unknown field content")
@@ -744,291 +871,269 @@ func Set_tx(tx pgx.Tx, formId uuid.UUID, parentId pgtype.UUID, tabId pgtype.UUID
return nil
}
-func setGeneric_tx(tx pgx.Tx, formId uuid.UUID, id uuid.UUID,
- parentId pgtype.UUID, tabId pgtype.UUID, iconId pgtype.UUID, content string,
- state string, onMobile bool, position int) (uuid.UUID, error) {
+func setGeneric_tx(ctx context.Context, tx pgx.Tx, formId uuid.UUID, parentId pgtype.UUID,
+ tabId pgtype.UUID, f types.Field, position int) (uuid.UUID, error) {
- known, err := schema.CheckCreateId_tx(tx, &id, "field", "id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &f.Id, "field", "id")
if err != nil {
- return id, err
+ return f.Id, err
}
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.field
SET parent_id = $1, tab_id = $2, icon_id = $3, state = $4,
- on_mobile = $5, position = $6
- WHERE id = $7
- `, parentId, tabId, iconId, state, onMobile, position, id); err != nil {
- return id, err
+ flags = $5, on_mobile = $6, position = $7
+ WHERE id = $8
+ `, parentId, tabId, f.IconId, f.State, f.Flags, f.OnMobile, position, f.Id); err != nil {
+ return f.Id, err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.field (id, form_id, parent_id, tab_id,
- icon_id, content, state, on_mobile, position)
- VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9)
- `, id, formId, parentId, tabId, iconId, content, state, onMobile, position); err != nil {
- return id, err
+ icon_id, content, state, flags, on_mobile, position)
+ VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10)
+ `, f.Id, formId, parentId, tabId, f.IconId, f.Content, f.State, f.Flags, f.OnMobile, position); err != nil {
+ return f.Id, err
}
}
- return id, nil
+ return f.Id, nil
}
-func setButton_tx(tx pgx.Tx, fieldId uuid.UUID, attributeIdRecord pgtype.UUID,
- formIdOpen pgtype.UUID, oForm types.OpenForm, jsFunctionId pgtype.UUID) error {
+func setButton_tx(ctx context.Context, tx pgx.Tx, fieldId uuid.UUID, f types.FieldButton) error {
- known, err := schema.CheckCreateId_tx(tx, &fieldId, "field_button", "field_id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &fieldId, "field_button", "field_id")
if err != nil {
return err
}
- // fix imports < 2.6: New open form entity
- oForm = compatible.FixMissingOpenForm(formIdOpen, attributeIdRecord, oForm)
-
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.field_button
SET js_function_id = $1
WHERE field_id = $2
- `, jsFunctionId, fieldId); err != nil {
+ `, f.JsFunctionId, fieldId); err != nil {
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.field_button (field_id, js_function_id)
VALUES ($1,$2)
- `, fieldId, jsFunctionId); err != nil {
+ `, fieldId, f.JsFunctionId); err != nil {
return err
}
}
// set open form
- return openForm.Set_tx(tx, "field", fieldId, oForm)
+ return openForm.Set_tx(ctx, tx, "field", fieldId, f.OpenForm, pgtype.Text{})
}
-func setCalendar_tx(tx pgx.Tx, fieldId uuid.UUID, formIdOpen pgtype.UUID,
- attributeIdDate0 uuid.UUID, attributeIdDate1 uuid.UUID,
- attributeIdColor pgtype.UUID, attributeIdRecord pgtype.UUID, indexDate0 int,
- indexDate1 int, indexColor pgtype.Int4, gantt bool, ganttSteps pgtype.Text,
- ganttStepsToggle bool, ics bool, dateRange0 int64, dateRange1 int64,
- columns []types.Column, collections []types.CollectionConsumer,
- oForm types.OpenForm) error {
-
- known, err := schema.CheckCreateId_tx(tx, &fieldId, "field_calendar", "field_id")
+func setCalendar_tx(ctx context.Context, tx pgx.Tx, fieldId uuid.UUID, f types.FieldCalendar) error {
+
+ known, err := schema.CheckCreateId_tx(ctx, tx, &fieldId, "field_calendar", "field_id")
if err != nil {
return err
}
+ // fix imports < 3.5: Default view
+ f.Days = compatible.FixCalendarDefaultView(f.Days)
+
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.field_calendar
SET attribute_id_date0 = $1, attribute_id_date1 = $2,
attribute_id_color = $3, index_date0 = $4, index_date1 = $5,
index_color = $6, gantt = $7, gantt_steps = $8,
gantt_steps_toggle = $9, ics = $10, date_range0 = $11,
- date_range1 = $12
- WHERE field_id = $13
- `, attributeIdDate0, attributeIdDate1, attributeIdColor, indexDate0,
- indexDate1, indexColor, gantt, ganttSteps, ganttStepsToggle, ics,
- dateRange0, dateRange1, fieldId); err != nil {
+ date_range1 = $12, days = $13, days_toggle = $14
+ WHERE field_id = $15
+ `, f.AttributeIdDate0, f.AttributeIdDate1, f.AttributeIdColor,
+ f.IndexDate0, f.IndexDate1, f.IndexColor, f.Gantt, f.GanttSteps,
+ f.GanttStepsToggle, f.Ics, f.DateRange0, f.DateRange1, f.Days,
+ f.DaysToggle, fieldId); err != nil {
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.field_calendar (
field_id, attribute_id_date0, attribute_id_date1,
attribute_id_color, index_date0, index_date1, index_color,
gantt, gantt_steps, gantt_steps_toggle, ics, date_range0,
- date_range1
- ) VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13)
- `, fieldId, attributeIdDate0, attributeIdDate1, attributeIdColor,
- indexDate0, indexDate1, indexColor, gantt, ganttSteps,
- ganttStepsToggle, ics, dateRange0, dateRange1); err != nil {
+ date_range1, days, days_toggle
+ ) VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15)
+ `, fieldId, f.AttributeIdDate0, f.AttributeIdDate1, f.AttributeIdColor,
+ f.IndexDate0, f.IndexDate1, f.IndexColor, f.Gantt, f.GanttSteps,
+ f.GanttStepsToggle, f.Ics, f.DateRange0, f.DateRange1, f.Days,
+ f.DaysToggle); err != nil {
return err
}
}
- // fix imports < 2.6: New open form entity
- oForm = compatible.FixMissingOpenForm(formIdOpen, attributeIdRecord, oForm)
-
// set open form
- if err := openForm.Set_tx(tx, "field", fieldId, oForm); err != nil {
+ if err := openForm.Set_tx(ctx, tx, "field", fieldId, f.OpenForm, pgtype.Text{}); err != nil {
return err
}
// set collection consumer
- if err := consumer.Set_tx(tx, "field", fieldId, "fieldFilterSelector", collections); err != nil {
+ if err := consumer.Set_tx(ctx, tx, "field", fieldId, "fieldFilterSelector", f.Collections); err != nil {
return err
}
// set columns
- return column.Set_tx(tx, "field", fieldId, columns)
+ return column.Set_tx(ctx, tx, "field", fieldId, f.Columns)
}
-func setChart_tx(tx pgx.Tx, fieldId uuid.UUID, chartOption string, columns []types.Column) error {
+func setChart_tx(ctx context.Context, tx pgx.Tx, fieldId uuid.UUID, f types.FieldChart) error {
- known, err := schema.CheckCreateId_tx(tx, &fieldId, "field_chart", "field_id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &fieldId, "field_chart", "field_id")
if err != nil {
return err
}
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.field_chart
SET chart_option = $1
WHERE field_id = $2
- `, chartOption, fieldId); err != nil {
+ `, f.ChartOption, fieldId); err != nil {
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.field_chart (field_id, chart_option)
VALUES ($1,$2)
- `, fieldId, chartOption); err != nil {
+ `, fieldId, f.ChartOption); err != nil {
return err
}
}
- return column.Set_tx(tx, "field", fieldId, columns)
+ return column.Set_tx(ctx, tx, "field", fieldId, f.Columns)
}
-func setContainer_tx(tx pgx.Tx, fieldId uuid.UUID, direction string,
- justifyContent string, alignItems string, alignContent string, wrap bool,
- grow int, shrink int, basis int, perMin int, perMax int) error {
+func setContainer_tx(ctx context.Context, tx pgx.Tx, fieldId uuid.UUID, f types.FieldContainer) error {
- known, err := schema.CheckCreateId_tx(tx, &fieldId, "field_container", "field_id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &fieldId, "field_container", "field_id")
if err != nil {
return err
}
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.field_container
SET direction = $1, justify_content = $2, align_items = $3,
align_content = $4, wrap = $5, grow = $6, shrink = $7, basis = $8,
per_min = $9, per_max = $10
WHERE field_id = $11
- `, direction, justifyContent, alignItems, alignContent, wrap, grow, shrink,
- basis, perMin, perMax, fieldId); err != nil {
+ `, f.Direction, f.JustifyContent, f.AlignItems, f.AlignContent, f.Wrap, f.Grow, f.Shrink,
+ f.Basis, f.PerMin, f.PerMax, fieldId); err != nil {
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.field_container (
field_id, direction, justify_content, align_items,
align_content, wrap, grow, shrink, basis, per_min, per_max
)
VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11)
- `, fieldId, direction, justifyContent, alignItems, alignContent, wrap,
- grow, shrink, basis, perMin, perMax); err != nil {
+ `, fieldId, f.Direction, f.JustifyContent, f.AlignItems, f.AlignContent, f.Wrap,
+ f.Grow, f.Shrink, f.Basis, f.PerMin, f.PerMax); err != nil {
return err
}
}
return nil
}
-func setData_tx(tx pgx.Tx, fieldId uuid.UUID, attributeId uuid.UUID,
- attributeIdAlt pgtype.UUID, index int, def string, display string,
- min pgtype.Int4, max pgtype.Int4, regexCheck pgtype.Text,
- jsFunctionId pgtype.UUID, clipboard bool, defCollection types.CollectionConsumer,
- collectionIdDef pgtype.UUID, columnIdDef pgtype.UUID) error {
+func setData_tx(ctx context.Context, tx pgx.Tx, fieldId uuid.UUID, f types.FieldData) error {
- known, err := schema.CheckCreateId_tx(tx, &fieldId, "field_data", "field_id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &fieldId, "field_data", "field_id")
if err != nil {
return err
}
// fix imports < 3.0: Migrate legacy definitions
- if collectionIdDef.Valid {
- defCollection.CollectionId = collectionIdDef.Bytes
- defCollection.ColumnIdDisplay = columnIdDef
- defCollection.MultiValue = false
+ if f.CollectionIdDef.Valid {
+ f.DefCollection.CollectionId = f.CollectionIdDef.Bytes
+ f.DefCollection.ColumnIdDisplay = f.ColumnIdDef
+ f.DefCollection.Flags = make([]string, 0)
}
// fix imports < 3.3: Migrate display option to attribute content use
- display, err = compatible.MigrateDisplayToContentUse_tx(tx, attributeId, display)
+ f.Display, err = compatible.MigrateDisplayToContentUse_tx(ctx, tx, f.AttributeId, f.Display)
if err != nil {
return err
}
- if attributeIdAlt.Valid {
- _, err = compatible.MigrateDisplayToContentUse_tx(tx, attributeIdAlt.Bytes, display)
+ if f.AttributeIdAlt.Valid {
+ _, err = compatible.MigrateDisplayToContentUse_tx(ctx, tx, f.AttributeIdAlt.Bytes, f.Display)
if err != nil {
return err
}
}
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.field_data
SET attribute_id = $1, attribute_id_alt = $2, index = $3,
			def = $4, display = $5, min = $6, max = $7, regex_check = $8,
js_function_id = $9, clipboard = $10
WHERE field_id = $11
- `, attributeId, attributeIdAlt, index, def, display, min, max,
- regexCheck, jsFunctionId, clipboard, fieldId); err != nil {
+ `, f.AttributeId, f.AttributeIdAlt, f.Index, f.Def, f.Display, f.Min, f.Max,
+ f.RegexCheck, f.JsFunctionId, f.Clipboard, fieldId); err != nil {
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.field_data (
field_id, attribute_id, attribute_id_alt, index, def, display,
min, max, regex_check, js_function_id, clipboard
)
VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11)
- `, fieldId, attributeId, attributeIdAlt, index, def,
- display, min, max, regexCheck, jsFunctionId, clipboard); err != nil {
+ `, fieldId, f.AttributeId, f.AttributeIdAlt, f.Index, f.Def,
+ f.Display, f.Min, f.Max, f.RegexCheck, f.JsFunctionId, f.Clipboard); err != nil {
return err
}
}
// set collection consumer
- return consumer.Set_tx(tx, "field", fieldId, "fieldDataDefault",
- []types.CollectionConsumer{defCollection})
+ return consumer.Set_tx(ctx, tx, "field", fieldId, "fieldDataDefault",
+ []types.CollectionConsumer{f.DefCollection})
}
-func setDataRelationship_tx(tx pgx.Tx, fieldId uuid.UUID, formIdOpen pgtype.UUID,
- attributeIdRecord pgtype.UUID, attributeIdNm pgtype.UUID,
- columns []types.Column, category bool, filterQuick bool, outsideIn bool,
- autoSelect int, defPresetIds []uuid.UUID, oForm types.OpenForm) error {
+func setDataRelationship_tx(ctx context.Context, tx pgx.Tx, fieldId uuid.UUID, f types.FieldDataRelationship) error {
- known, err := schema.CheckCreateId_tx(tx, &fieldId, "field_data_relationship", "field_id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &fieldId, "field_data_relationship", "field_id")
if err != nil {
return err
}
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.field_data_relationship
SET attribute_id_nm = $1, category = $2, filter_quick = $3,
outside_in = $4, auto_select = $5
WHERE field_id = $6
- `, attributeIdNm, category, filterQuick,
- outsideIn, autoSelect, fieldId); err != nil {
-
+ `, f.AttributeIdNm, f.Category, f.FilterQuick, f.OutsideIn, f.AutoSelect, fieldId); err != nil {
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.field_data_relationship (
field_id, attribute_id_nm, category,
filter_quick, outside_in, auto_select
) VALUES ($1,$2,$3,$4,$5,$6)
- `, fieldId, attributeIdNm, category, filterQuick,
- outsideIn, autoSelect); err != nil {
-
+ `, fieldId, f.AttributeIdNm, f.Category, f.FilterQuick, f.OutsideIn, f.AutoSelect); err != nil {
return err
}
}
// set default preset IDs
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.field_data_relationship_preset
WHERE field_id = $1
`, fieldId); err != nil {
return err
}
- for _, presetId := range defPresetIds {
- if _, err := tx.Exec(db.Ctx, `
+ for _, presetId := range f.DefPresetIds {
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.field_data_relationship_preset (field_id, preset_id)
VALUES ($1,$2)
`, fieldId, presetId); err != nil {
@@ -1036,87 +1141,157 @@ func setDataRelationship_tx(tx pgx.Tx, fieldId uuid.UUID, formIdOpen pgtype.UUID
}
}
- // fix imports < 2.6: New open form entity
- oForm = compatible.FixMissingOpenForm(formIdOpen, attributeIdRecord, oForm)
-
// set open form
- if err := openForm.Set_tx(tx, "field", fieldId, oForm); err != nil {
+ if err := openForm.Set_tx(ctx, tx, "field", fieldId, f.OpenForm, pgtype.Text{}); err != nil {
return err
}
- return column.Set_tx(tx, "field", fieldId, columns)
+ return column.Set_tx(ctx, tx, "field", fieldId, f.Columns)
}
-func setHeader_tx(tx pgx.Tx, fieldId uuid.UUID, size int) error {
+func setHeader_tx(ctx context.Context, tx pgx.Tx, fieldId uuid.UUID, f types.FieldHeader) error {
- known, err := schema.CheckCreateId_tx(tx, &fieldId, "field_header", "field_id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &fieldId, "field_header", "field_id")
if err != nil {
return err
}
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.field_header
- SET size = $1
- WHERE field_id = $2
- `, size, fieldId); err != nil {
+ SET richtext = $1, size = $2
+ WHERE field_id = $3
+ `, f.Richtext, f.Size, fieldId); err != nil {
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
- INSERT INTO app.field_header (field_id, size)
- VALUES ($1,$2)
- `, fieldId, size); err != nil {
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO app.field_header (field_id, richtext, size)
+ VALUES ($1,$2,$3)
+ `, fieldId, f.Richtext, f.Size); err != nil {
return err
}
}
return nil
}
-func setList_tx(tx pgx.Tx, fieldId uuid.UUID, attributeIdRecord pgtype.UUID,
- formIdOpen pgtype.UUID, autoRenew pgtype.Int4, csvExport bool, csvImport bool,
- layout string, filterQuick bool, resultLimit int, columns []types.Column,
- collections []types.CollectionConsumer, oForm types.OpenForm) error {
+func setKanban_tx(ctx context.Context, tx pgx.Tx, fieldId uuid.UUID, f types.FieldKanban) error {
- known, err := schema.CheckCreateId_tx(tx, &fieldId, "field_list", "field_id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &fieldId, "field_kanban", "field_id")
if err != nil {
return err
}
+ if f.RelationIndexData == f.RelationIndexAxisX {
+ return errors.New("a separate relation must be chosen for Kanban columns")
+ }
+ if f.RelationIndexAxisY.Valid && int(f.RelationIndexAxisY.Int16) == f.RelationIndexAxisX {
+ return errors.New("relations for Kanban columns & rows must be different")
+ }
+
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
+ UPDATE app.field_kanban
+ SET relation_index_data = $1, relation_index_axis_x = $2,
+ relation_index_axis_y = $3, attribute_id_sort = $4
+ WHERE field_id = $5
+ `, f.RelationIndexData, f.RelationIndexAxisX, f.RelationIndexAxisY,
+ f.AttributeIdSort, fieldId); err != nil {
+
+ return err
+ }
+ } else {
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO app.field_kanban (
+ field_id, relation_index_data, relation_index_axis_x,
+ relation_index_axis_y, attribute_id_sort
+ )
+ VALUES ($1,$2,$3,$4,$5)
+ `, fieldId, f.RelationIndexData, f.RelationIndexAxisX,
+ f.RelationIndexAxisY, f.AttributeIdSort); err != nil {
+
+ return err
+ }
+ }
+
+ // set open form
+ if err := openForm.Set_tx(ctx, tx, "field", fieldId, f.OpenForm, pgtype.Text{}); err != nil {
+ return err
+ }
+
+ // set collection consumer
+ if err := consumer.Set_tx(ctx, tx, "field", fieldId, "fieldFilterSelector", f.Collections); err != nil {
+ return err
+ }
+
+ // set columns
+ return column.Set_tx(ctx, tx, "field", fieldId, f.Columns)
+}
+func setList_tx(ctx context.Context, tx pgx.Tx, fieldId uuid.UUID, f types.FieldList) error {
+
+ known, err := schema.CheckCreateId_tx(ctx, tx, &fieldId, "field_list", "field_id")
+ if err != nil {
+ return err
+ }
+
+ if known {
+ if _, err := tx.Exec(ctx, `
UPDATE app.field_list
SET auto_renew = $1, csv_export = $2, csv_import = $3, layout = $4,
filter_quick = $5, result_limit = $6
WHERE field_id = $7
- `, autoRenew, csvExport, csvImport, layout,
- filterQuick, resultLimit, fieldId); err != nil {
-
+ `, f.AutoRenew, f.CsvExport, f.CsvImport, f.Layout, f.FilterQuick, f.ResultLimit, fieldId); err != nil {
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.field_list (
field_id, auto_renew, csv_export, csv_import,
layout, filter_quick, result_limit
)
VALUES ($1,$2,$3,$4,$5,$6,$7)
- `, fieldId, autoRenew, csvExport, csvImport,
- layout, filterQuick, resultLimit); err != nil {
-
+ `, fieldId, f.AutoRenew, f.CsvExport, f.CsvImport, f.Layout, f.FilterQuick, f.ResultLimit); err != nil {
return err
}
}
- // fix imports < 2.6: New open form entity
- oForm = compatible.FixMissingOpenForm(formIdOpen, attributeIdRecord, oForm)
- // set open form
- if err := openForm.Set_tx(tx, "field", fieldId, oForm); err != nil {
+ // set open forms
+ if err := openForm.Set_tx(ctx, tx, "field", fieldId, f.OpenForm, pgtype.Text{}); err != nil {
+ return err
+ }
+ if err := openForm.Set_tx(ctx, tx, "field", fieldId, f.OpenFormBulk, pgtype.Text{String: "bulk", Valid: true}); err != nil {
return err
}
// set collection consumer
- if err := consumer.Set_tx(tx, "field", fieldId, "fieldFilterSelector", collections); err != nil {
+ if err := consumer.Set_tx(ctx, tx, "field", fieldId, "fieldFilterSelector", f.Collections); err != nil {
+ return err
+ }
+
+ // set columns
+ return column.Set_tx(ctx, tx, "field", fieldId, f.Columns)
+}
+func setVariable_tx(ctx context.Context, tx pgx.Tx, fieldId uuid.UUID, f types.FieldVariable) error {
+
+ known, err := schema.CheckCreateId_tx(ctx, tx, &fieldId, "field_variable", "field_id")
+ if err != nil {
return err
}
+ if known {
+ if _, err := tx.Exec(ctx, `
+ UPDATE app.field_variable
+ SET variable_id = $1, js_function_id = $2, clipboard = $3
+ WHERE field_id = $4
+ `, f.VariableId, f.JsFunctionId, f.Clipboard, fieldId); err != nil {
+ return err
+ }
+ } else {
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO app.field_variable (field_id, variable_id, js_function_id, clipboard)
+ VALUES ($1,$2,$3,$4)
+ `, fieldId, f.VariableId, f.JsFunctionId, f.Clipboard); err != nil {
+ return err
+ }
+ }
+
// set columns
- return column.Set_tx(tx, "field", fieldId, columns)
+ return column.Set_tx(ctx, tx, "field", fieldId, f.Columns)
}
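
The change running through all of these setters is the same: the package-global `db.Ctx`/`db.Pool` handles are gone, every `*_tx` helper now receives the caller's `context.Context` and open `pgx.Tx`, and the long parameter lists collapse into the field's own struct. A minimal sketch of the calling convention this implies — the `app.example` table and all helper names here are invented for illustration, not part of the repository:

```go
package example

import (
	"context"
	"time"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgxpool"
)

// setExample_tx mirrors the refactored setters: it does not touch any
// package-global context or pool, it only uses what the caller hands in.
func setExample_tx(ctx context.Context, tx pgx.Tx, id string, name string) error {
	_, err := tx.Exec(ctx, `
		INSERT INTO app.example (id, name)
		VALUES ($1,$2)
		ON CONFLICT (id) DO UPDATE SET name = $2
	`, id, name)
	return err
}

// run shows the calling side: it owns both the timeout and the transaction.
func run(pool *pgxpool.Pool) error {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	tx, err := pool.Begin(ctx)
	if err != nil {
		return err
	}
	defer tx.Rollback(ctx) // no-op once the commit below has succeeded

	if err := setExample_tx(ctx, tx, "demo-id", "demo"); err != nil {
		return err
	}
	return tx.Commit(ctx)
}
```

The point of the pattern is that commit, rollback and timeouts are decided once at the call site instead of inside each schema helper.
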
diff --git a/schema/form/form.go b/schema/form/form.go
index e5c7d98f..349b5d78 100644
--- a/schema/form/form.go
+++ b/schema/form/form.go
@@ -1,14 +1,14 @@
package form
import (
+ "context"
"encoding/json"
"errors"
"fmt"
- "r3/compatible"
- "r3/db"
"r3/schema"
"r3/schema/article"
"r3/schema/caption"
+ "r3/schema/compatible"
"r3/schema/field"
"r3/schema/query"
"r3/types"
@@ -19,9 +19,9 @@ import (
"github.com/jackc/pgx/v5/pgtype"
)
-func Copy_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, newName string) error {
+func Copy_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, newName string) error {
- forms, err := Get(uuid.Nil, []uuid.UUID{id})
+ forms, err := Get_tx(ctx, tx, uuid.Nil, []uuid.UUID{id})
if err != nil {
return err
}
@@ -30,6 +30,8 @@ func Copy_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, newName string) error
return errors.New("form copy target does not exist")
}
form := forms[0]
+ form.Name = newName
+ form.ModuleId = moduleId
// replace IDs with new ones
// keep association between old (replaced) and new ID
@@ -45,9 +47,13 @@ func Copy_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, newName string) error
return err
}
- // remove form functions (cannot be copied without recreating all functions)
+ // remove form actions & functions (cannot be copied without recreating referenced functions)
+ form.Actions = make([]types.FormAction, 0)
form.Functions = make([]types.FormFunction, 0)
+ // remove field focus (copy not supported)
+ form.FieldIdFocus = pgtype.UUID{}
+
// replace IDs from fields as well as their (sub)queries, columns, etc.
// run twice: once for all field IDs and again to update dependent field sub entities
// example: filters from columns (sub queries) or other fields (list queries) can reference field IDs
@@ -58,7 +64,7 @@ func Copy_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, newName string) error
// replace IDs inside fields
// first run: field IDs
// second run: IDs for (sub)queries, columns, tabs
- fieldIf, err = replaceFieldIds(fieldIf, idMapReplaced, runs == 0)
+ fieldIf, err = replaceFieldIds(ctx, tx, fieldIf, idMapReplaced, runs == 0)
if err != nil {
return err
}
@@ -114,17 +120,31 @@ func Copy_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, newName string) error
}
}
}
- return Set_tx(tx, moduleId, form.Id, form.PresetIdOpen, form.IconId, newName,
- form.NoDataActions, form.Query, form.Fields, form.Functions, form.States,
- form.ArticleIdsHelp, form.Captions)
+
+ // replace state IDs in condition filters
+ for i, state := range form.States {
+ for j, c := range state.Conditions {
+ if c.Side0.FormStateId.Valid {
+ if id, exists := idMapReplaced[c.Side0.FormStateId.Bytes]; exists {
+ form.States[i].Conditions[j].Side0.FormStateId.Bytes = id
+ }
+ }
+ if c.Side1.FormStateId.Valid {
+ if id, exists := idMapReplaced[c.Side1.FormStateId.Bytes]; exists {
+ form.States[i].Conditions[j].Side1.FormStateId.Bytes = id
+ }
+ }
+ }
+ }
+ return Set_tx(ctx, tx, form)
}
-func Del_tx(tx pgx.Tx, id uuid.UUID) error {
- _, err := tx.Exec(db.Ctx, "DELETE FROM app.form WHERE id = $1", id)
+func Del_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
+ _, err := tx.Exec(ctx, "DELETE FROM app.form WHERE id = $1", id)
return err
}
-func Get(moduleId uuid.UUID, ids []uuid.UUID) ([]types.Form, error) {
+func Get_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, ids []uuid.UUID) ([]types.Form, error) {
forms := make([]types.Form, 0)
sqlWheres := []string{}
@@ -142,8 +162,8 @@ func Get(moduleId uuid.UUID, ids []uuid.UUID) ([]types.Form, error) {
sqlValues = append(sqlValues, ids)
}
- rows, err := db.Pool.Query(db.Ctx, fmt.Sprintf(`
- SELECT id, preset_id_open, icon_id, name, no_data_actions, ARRAY(
+ rows, err := tx.Query(ctx, fmt.Sprintf(`
+ SELECT id, preset_id_open, icon_id, field_id_focus, name, no_data_actions, ARRAY(
SELECT article_id
FROM app.article_form
WHERE form_id = f.id
@@ -161,8 +181,8 @@ func Get(moduleId uuid.UUID, ids []uuid.UUID) ([]types.Form, error) {
for rows.Next() {
var f types.Form
- if err := rows.Scan(&f.Id, &f.PresetIdOpen, &f.IconId, &f.Name,
- &f.NoDataActions, &f.ArticleIdsHelp); err != nil {
+ if err := rows.Scan(&f.Id, &f.PresetIdOpen, &f.IconId, &f.FieldIdFocus,
+ &f.Name, &f.NoDataActions, &f.ArticleIdsHelp); err != nil {
return forms, err
}
@@ -173,23 +193,27 @@ func Get(moduleId uuid.UUID, ids []uuid.UUID) ([]types.Form, error) {
// collect form query, fields, functions, states and captions
for i, form := range forms {
- form.Query, err = query.Get("form", form.Id, 0, 0)
+ form.Query, err = query.Get_tx(ctx, tx, "form", form.Id, 0, 0, 0)
if err != nil {
return forms, err
}
- form.Fields, err = field.Get(form.Id)
+ form.Fields, err = field.Get_tx(ctx, tx, form.Id)
if err != nil {
return forms, err
}
- form.Functions, err = getFunctions(form.Id)
+ form.Actions, err = getActions_tx(ctx, tx, form.Id)
if err != nil {
return forms, err
}
- form.States, err = getStates(form.Id)
+ form.Functions, err = getFunctions_tx(ctx, tx, form.Id)
if err != nil {
return forms, err
}
- form.Captions, err = caption.Get("form", form.Id, []string{"formTitle"})
+ form.States, err = getStates_tx(ctx, tx, form.Id)
+ if err != nil {
+ return forms, err
+ }
+ form.Captions, err = caption.Get_tx(ctx, tx, "form", form.Id, []string{"formTitle"})
if err != nil {
return forms, err
}
@@ -198,44 +222,48 @@ func Get(moduleId uuid.UUID, ids []uuid.UUID) ([]types.Form, error) {
return forms, nil
}
-func Set_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, presetIdOpen pgtype.UUID,
- iconId pgtype.UUID, name string, noDataActions bool, queryIn types.Query,
- fields []interface{}, functions []types.FormFunction, states []types.FormState,
- articleIdsHelp []uuid.UUID, captions types.CaptionMap) error {
+func Set_tx(ctx context.Context, tx pgx.Tx, frm types.Form) error {
+
+	// remove the only invalid character (dot), as it is used for form function references
+ frm.Name = strings.Replace(frm.Name, ".", "", -1)
- known, err := schema.CheckCreateId_tx(tx, &id, "form", "id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &frm.Id, "form", "id")
if err != nil {
return err
}
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.form
- SET preset_id_open = $1, icon_id = $2, name = $3, no_data_actions = $4
- WHERE id = $5
- `, presetIdOpen, iconId, name, noDataActions, id); err != nil {
+ SET preset_id_open = $1, icon_id = $2, field_id_focus = $3,
+ name = $4, no_data_actions = $5
+ WHERE id = $6
+ `, frm.PresetIdOpen, frm.IconId, frm.FieldIdFocus,
+ frm.Name, frm.NoDataActions, frm.Id); err != nil {
+
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
- INSERT INTO app.form (
- id, module_id, preset_id_open, icon_id, name, no_data_actions
- )
- VALUES ($1,$2,$3,$4,$5,$6)
- `, id, moduleId, presetIdOpen, iconId, name, noDataActions); err != nil {
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO app.form (id, module_id, preset_id_open, icon_id,
+ field_id_focus, name, no_data_actions)
+ VALUES ($1,$2,$3,$4,$5,$6,$7)
+ `, frm.Id, frm.ModuleId, frm.PresetIdOpen, frm.IconId,
+ frm.FieldIdFocus, frm.Name, frm.NoDataActions); err != nil {
+
return err
}
}
// set form query
- if err := query.Set_tx(tx, "form", id, 0, 0, queryIn); err != nil {
+ if err := query.Set_tx(ctx, tx, "form", frm.Id, 0, 0, 0, frm.Query); err != nil {
return err
}
// set fields (recursive)
fieldIdMapQuery := make(map[uuid.UUID]types.Query)
- if err := field.Set_tx(tx, id, pgtype.UUID{}, pgtype.UUID{},
- fields, fieldIdMapQuery); err != nil {
+ if err := field.Set_tx(ctx, tx, frm.Id, pgtype.UUID{}, pgtype.UUID{},
+ frm.Fields, fieldIdMapQuery); err != nil {
return err
}
@@ -243,37 +271,35 @@ func Set_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, presetIdOpen pgtype.UUI
// set field queries after fields themselves
// query filters can reference fields so they must all exist
for fieldId, queryIn := range fieldIdMapQuery {
- if err := query.Set_tx(tx, "field", fieldId, 0, 0, queryIn); err != nil {
+ if err := query.Set_tx(ctx, tx, "field", fieldId, 0, 0, 0, queryIn); err != nil {
return err
}
}
- // set form functions
- if err := setFunctions_tx(tx, id, functions); err != nil {
+ if err := setActions_tx(ctx, tx, frm.Id, frm.Actions); err != nil {
return err
}
-
- // set form states
- if err := setStates_tx(tx, id, states); err != nil {
+ if err := setFunctions_tx(ctx, tx, frm.Id, frm.Functions); err != nil {
return err
}
-
- // set help articles
- if err := article.Assign_tx(tx, "form", id, articleIdsHelp); err != nil {
+ if err := setStates_tx(ctx, tx, frm.Id, frm.States); err != nil {
+ return err
+ }
+ if err := article.Assign_tx(ctx, tx, "form", frm.Id, frm.ArticleIdsHelp); err != nil {
return err
}
-
- // set form captions
// fix imports < 3.2: Migration from help captions to help articles
- captions, err = compatible.FixCaptions_tx(tx, "form", id, captions)
+ frm.Captions, err = compatible.FixCaptions_tx(ctx, tx, "form", frm.Id, frm.Captions)
if err != nil {
return err
}
- return caption.Set_tx(tx, id, captions)
+ return caption.Set_tx(ctx, tx, frm.Id, frm.Captions)
}
// form duplication
-func replaceFieldIds(fieldIf interface{}, idMapReplaced map[uuid.UUID]uuid.UUID, setFieldIds bool) (interface{}, error) {
+func replaceFieldIds(ctx context.Context, tx pgx.Tx, fieldIf interface{},
+ idMapReplaced map[uuid.UUID]uuid.UUID, setFieldIds bool) (interface{}, error) {
+
var err error
// replace form ID to open if it was replaced (field opening its own form)
@@ -305,6 +331,17 @@ func replaceFieldIds(fieldIf interface{}, idMapReplaced map[uuid.UUID]uuid.UUID,
} else {
field.OpenForm = replaceOpenForm(field.OpenForm)
}
+
+		// remove references to form-bound entities that do not exist after the form copy
+ if field.JsFunctionId.Valid {
+ isBound, err := schema.GetIsFormBound_tx(ctx, tx, "js_function", field.JsFunctionId.Bytes)
+ if err != nil {
+ return nil, err
+ }
+ if isBound {
+ field.JsFunctionId = pgtype.UUID{}
+ }
+ }
fieldIf = field
case types.FieldCalendar:
@@ -355,7 +392,7 @@ func replaceFieldIds(fieldIf interface{}, idMapReplaced map[uuid.UUID]uuid.UUID,
}
}
for i, _ := range field.Fields {
- field.Fields[i], err = replaceFieldIds(field.Fields[i], idMapReplaced, setFieldIds)
+ field.Fields[i], err = replaceFieldIds(ctx, tx, field.Fields[i], idMapReplaced, setFieldIds)
if err != nil {
return nil, err
}
@@ -371,6 +408,17 @@ func replaceFieldIds(fieldIf interface{}, idMapReplaced map[uuid.UUID]uuid.UUID,
} else {
field.DefCollection = replaceCollectionConsumer(field.DefCollection)
}
+
+		// remove references to form-bound entities that do not exist after the form copy
+ if field.JsFunctionId.Valid {
+ isBound, err := schema.GetIsFormBound_tx(ctx, tx, "js_function", field.JsFunctionId.Bytes)
+ if err != nil {
+ return nil, err
+ }
+ if isBound {
+ field.JsFunctionId = pgtype.UUID{}
+ }
+ }
fieldIf = field
case types.FieldDataRelationship:
@@ -391,6 +439,17 @@ func replaceFieldIds(fieldIf interface{}, idMapReplaced map[uuid.UUID]uuid.UUID,
}
field.DefCollection = replaceCollectionConsumer(field.DefCollection)
}
+
+		// remove references to form-bound entities that do not exist after the form copy
+ if field.JsFunctionId.Valid {
+ isBound, err := schema.GetIsFormBound_tx(ctx, tx, "js_function", field.JsFunctionId.Bytes)
+ if err != nil {
+ return nil, err
+ }
+ if isBound {
+ field.JsFunctionId = pgtype.UUID{}
+ }
+ }
fieldIf = field
case types.FieldHeader:
@@ -402,6 +461,28 @@ func replaceFieldIds(fieldIf interface{}, idMapReplaced map[uuid.UUID]uuid.UUID,
}
fieldIf = field
+ case types.FieldKanban:
+ if setFieldIds {
+ field.Id, err = schema.ReplaceUuid(field.Id, idMapReplaced)
+ if err != nil {
+ return nil, err
+ }
+ } else {
+ field.OpenForm = replaceOpenForm(field.OpenForm)
+ field.Columns, err = schema.ReplaceColumnIds(field.Columns, idMapReplaced)
+ if err != nil {
+ return nil, err
+ }
+ field.Query, err = schema.ReplaceQueryIds(field.Query, idMapReplaced)
+ if err != nil {
+ return nil, err
+ }
+ for i, _ := range field.Collections {
+ field.Collections[i] = replaceCollectionConsumer(field.Collections[i])
+ }
+ }
+ fieldIf = field
+
case types.FieldList:
if setFieldIds {
field.Id, err = schema.ReplaceUuid(field.Id, idMapReplaced)
@@ -410,6 +491,7 @@ func replaceFieldIds(fieldIf interface{}, idMapReplaced map[uuid.UUID]uuid.UUID,
}
} else {
field.OpenForm = replaceOpenForm(field.OpenForm)
+ field.OpenFormBulk = replaceOpenForm(field.OpenFormBulk)
field.Columns, err = schema.ReplaceColumnIds(field.Columns, idMapReplaced)
if err != nil {
return nil, err
@@ -441,7 +523,7 @@ func replaceFieldIds(fieldIf interface{}, idMapReplaced map[uuid.UUID]uuid.UUID,
}
for i, tab := range field.Tabs {
for fi, _ := range tab.Fields {
- tab.Fields[fi], err = replaceFieldIds(tab.Fields[fi], idMapReplaced, setFieldIds)
+ tab.Fields[fi], err = replaceFieldIds(ctx, tx, tab.Fields[fi], idMapReplaced, setFieldIds)
if err != nil {
return nil, err
}
@@ -450,8 +532,46 @@ func replaceFieldIds(fieldIf interface{}, idMapReplaced map[uuid.UUID]uuid.UUID,
}
fieldIf = field
+ case types.FieldVariable:
+ if setFieldIds {
+ field.Id, err = schema.ReplaceUuid(field.Id, idMapReplaced)
+ if err != nil {
+ return nil, err
+ }
+ } else {
+ field.Columns, err = schema.ReplaceColumnIds(field.Columns, idMapReplaced)
+ if err != nil {
+ return nil, err
+ }
+ field.Query, err = schema.ReplaceQueryIds(field.Query, idMapReplaced)
+ if err != nil {
+ return nil, err
+ }
+ }
+
+		// remove references to form-bound entities that do not exist after the form copy
+ if field.JsFunctionId.Valid {
+ isBound, err := schema.GetIsFormBound_tx(ctx, tx, "js_function", field.JsFunctionId.Bytes)
+ if err != nil {
+ return nil, err
+ }
+ if isBound {
+ field.JsFunctionId = pgtype.UUID{}
+ }
+ }
+ if field.VariableId.Valid {
+ isBound, err := schema.GetIsFormBound_tx(ctx, tx, "variable", field.VariableId.Bytes)
+ if err != nil {
+ return nil, err
+ }
+ if isBound {
+ field.VariableId = pgtype.UUID{}
+ }
+ }
+ fieldIf = field
+
default:
- return nil, fmt.Errorf("unknown field type, interface: '%T'", fieldIf)
+ return nil, fmt.Errorf("unknown field type '%T'", fieldIf)
}
return fieldIf, nil
}
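
`Copy_tx` works with an old-ID → new-ID map (`idMapReplaced`): the first pass hands out fresh field IDs, the second pass rewrites everything that references them, and the new block above extends that to form state IDs in condition sides. `schema.ReplaceUuid` itself is not part of this diff; a simplified stand-in of the idea, under the assumption that it memoizes one fresh UUID per original ID, could look like this:

```go
package example

import (
	"github.com/gofrs/uuid"
)

// replaceUuid returns a fresh ID for every old one and records the pairing,
// so a later pass can rewrite references (columns, queries, form states)
// that still point at the original IDs.
func replaceUuid(id uuid.UUID, idMap map[uuid.UUID]uuid.UUID) (uuid.UUID, error) {
	if newId, exists := idMap[id]; exists {
		return newId, nil
	}
	newId, err := uuid.NewV4()
	if err != nil {
		return uuid.Nil, err
	}
	idMap[id] = newId
	return newId, nil
}
```
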
diff --git a/schema/form/formAction.go b/schema/form/formAction.go
new file mode 100644
index 00000000..c5f54571
--- /dev/null
+++ b/schema/form/formAction.go
@@ -0,0 +1,93 @@
+package form
+
+import (
+ "context"
+ "r3/schema"
+ "r3/schema/caption"
+ "r3/types"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+)
+
+func getActions_tx(ctx context.Context, tx pgx.Tx, formId uuid.UUID) ([]types.FormAction, error) {
+ actions := make([]types.FormAction, 0)
+
+ rows, err := tx.Query(ctx, `
+ SELECT id, js_function_id, icon_id, state, color
+ FROM app.form_action
+ WHERE form_id = $1
+ ORDER BY position ASC
+ `, formId)
+ if err != nil {
+ return actions, err
+ }
+ defer rows.Close()
+
+ for rows.Next() {
+ var a types.FormAction
+ if err := rows.Scan(&a.Id, &a.JsFunctionId, &a.IconId, &a.State, &a.Color); err != nil {
+ return actions, err
+ }
+ actions = append(actions, a)
+ }
+
+ for i, a := range actions {
+ actions[i].Captions, err = caption.Get_tx(ctx, tx, "form_action", a.Id, []string{"formActionTitle"})
+ if err != nil {
+ return actions, err
+ }
+ }
+ return actions, nil
+}
+
+func setActions_tx(ctx context.Context, tx pgx.Tx, formId uuid.UUID, actions []types.FormAction) error {
+ var err error
+ actionIds := make([]uuid.UUID, 0)
+
+ for i, a := range actions {
+ a.Id, err = setAction_tx(ctx, tx, formId, a, i)
+ if err != nil {
+ return err
+ }
+ actionIds = append(actionIds, a.Id)
+ }
+
+ // remove non-specified actions
+ _, err = tx.Exec(ctx, `
+ DELETE FROM app.form_action
+ WHERE form_id = $1
+ AND id <> ALL($2)
+ `, formId, actionIds)
+
+ return err
+}
+
+func setAction_tx(ctx context.Context, tx pgx.Tx, formId uuid.UUID, a types.FormAction, position int) (uuid.UUID, error) {
+
+ known, err := schema.CheckCreateId_tx(ctx, tx, &a.Id, "form_action", "id")
+ if err != nil {
+ return a.Id, err
+ }
+
+ if known {
+ if _, err := tx.Exec(ctx, `
+ UPDATE app.form_action
+ SET js_function_id = $1, icon_id = $2, position = $3, state = $4, color = $5
+ WHERE id = $6
+ `, a.JsFunctionId, a.IconId, position, a.State, a.Color, a.Id); err != nil {
+ return a.Id, err
+ }
+ } else {
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO app.form_action (id, form_id, js_function_id, icon_id, position, state, color)
+ VALUES ($1,$2,$3,$4,$5,$6,$7)
+ `, a.Id, formId, a.JsFunctionId, a.IconId, position, a.State, a.Color); err != nil {
+ return a.Id, err
+ }
+ }
+ if err := caption.Set_tx(ctx, tx, a.Id, a.Captions); err != nil {
+ return a.Id, err
+ }
+ return a.Id, nil
+}
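
`setAction_tx` follows the upsert idiom used by every setter in this patch: `schema.CheckCreateId_tx` decides whether to UPDATE an existing row or INSERT a new one, and anything not in the kept-ID list is deleted afterwards. The helper's implementation is outside this diff; judging only from its call sites, a simplified stand-in might look as follows (the nil-ID branch and the `app.`-prefixed existence probe are assumptions):

```go
package example

import (
	"context"
	"fmt"

	"github.com/gofrs/uuid"
	"github.com/jackc/pgx/v5"
)

// checkCreateId_tx is an illustrative stand-in for schema.CheckCreateId_tx.
// Assumption: an all-zero ID gets a fresh UUIDv4 (the row cannot exist yet),
// any other ID is probed in the given table/column.
func checkCreateId_tx(ctx context.Context, tx pgx.Tx, id *uuid.UUID, table string, pkColumn string) (bool, error) {
	if *id == uuid.Nil {
		newId, err := uuid.NewV4()
		if err != nil {
			return false, err
		}
		*id = newId
		return false, nil
	}
	var known bool
	// table/column names cannot be bound as query parameters; they only ever
	// come from trusted, hard-coded call sites
	err := tx.QueryRow(ctx, fmt.Sprintf(
		`SELECT EXISTS(SELECT 1 FROM app.%s WHERE %s = $1)`, table, pkColumn,
	), *id).Scan(&known)
	return known, err
}
```
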
diff --git a/schema/form/formFunction.go b/schema/form/formFunction.go
index 8da3d29d..09d52eb0 100644
--- a/schema/form/formFunction.go
+++ b/schema/form/formFunction.go
@@ -1,17 +1,17 @@
package form
import (
- "r3/db"
+ "context"
"r3/types"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
)
-func getFunctions(formId uuid.UUID) ([]types.FormFunction, error) {
+func getFunctions_tx(ctx context.Context, tx pgx.Tx, formId uuid.UUID) ([]types.FormFunction, error) {
fncs := make([]types.FormFunction, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT js_function_id, event, event_before
FROM app.form_function
WHERE form_id = $1
@@ -32,9 +32,9 @@ func getFunctions(formId uuid.UUID) ([]types.FormFunction, error) {
return fncs, nil
}
-func setFunctions_tx(tx pgx.Tx, formId uuid.UUID, fncs []types.FormFunction) error {
+func setFunctions_tx(ctx context.Context, tx pgx.Tx, formId uuid.UUID, fncs []types.FormFunction) error {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.form_function
WHERE form_id = $1
`, formId); err != nil {
@@ -42,7 +42,7 @@ func setFunctions_tx(tx pgx.Tx, formId uuid.UUID, fncs []types.FormFunction) err
}
for i, f := range fncs {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.form_function (
form_id, position, js_function_id, event, event_before
)
diff --git a/schema/form/formState.go b/schema/form/formState.go
index f9212572..ec9fdd8a 100644
--- a/schema/form/formState.go
+++ b/schema/form/formState.go
@@ -1,8 +1,7 @@
package form
import (
- "r3/compatible"
- "r3/db"
+ "context"
"r3/schema"
"r3/types"
@@ -10,11 +9,10 @@ import (
"github.com/jackc/pgx/v5"
)
-func getStates(formId uuid.UUID) ([]types.FormState, error) {
-
+func getStates_tx(ctx context.Context, tx pgx.Tx, formId uuid.UUID) ([]types.FormState, error) {
states := make([]types.FormState, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT id, description
FROM app.form_state
WHERE form_id = $1
@@ -25,25 +23,22 @@ func getStates(formId uuid.UUID) ([]types.FormState, error) {
if err != nil {
return states, err
}
+ defer rows.Close()
for rows.Next() {
var s types.FormState
-
if err := rows.Scan(&s.Id, &s.Description); err != nil {
return states, err
}
states = append(states, s)
}
- rows.Close()
for i, _ := range states {
-
- states[i].Conditions, err = getStateConditions(states[i].Id)
+ states[i].Conditions, err = getStateConditions_tx(ctx, tx, states[i].Id)
if err != nil {
			return states, err
}
-
- states[i].Effects, err = getStateEffects(states[i].Id)
+ states[i].Effects, err = getStateEffects_tx(ctx, tx, states[i].Id)
if err != nil {
			return states, err
}
@@ -51,10 +46,10 @@ func getStates(formId uuid.UUID) ([]types.FormState, error) {
return states, nil
}
-func getStateConditions(formStateId uuid.UUID) ([]types.FormStateCondition, error) {
+func getStateConditions_tx(ctx context.Context, tx pgx.Tx, formStateId uuid.UUID) ([]types.FormStateCondition, error) {
conditions := make([]types.FormStateCondition, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT position, connector, operator
FROM app.form_state_condition
WHERE form_state_id = $1
@@ -63,6 +58,7 @@ func getStateConditions(formStateId uuid.UUID) ([]types.FormStateCondition, erro
if err != nil {
return conditions, err
}
+ defer rows.Close()
for rows.Next() {
var c types.FormStateCondition
@@ -71,14 +67,13 @@ func getStateConditions(formStateId uuid.UUID) ([]types.FormStateCondition, erro
}
conditions = append(conditions, c)
}
- rows.Close()
for i, c := range conditions {
- c.Side0, err = getStateConditionSide(formStateId, c.Position, 0)
+ c.Side0, err = getStateConditionSide_tx(ctx, tx, formStateId, c.Position, 0)
if err != nil {
return conditions, err
}
- c.Side1, err = getStateConditionSide(formStateId, c.Position, 1)
+ c.Side1, err = getStateConditionSide_tx(ctx, tx, formStateId, c.Position, 1)
if err != nil {
return conditions, err
}
@@ -87,29 +82,27 @@ func getStateConditions(formStateId uuid.UUID) ([]types.FormStateCondition, erro
return conditions, nil
}
-func getStateConditionSide(formStateId uuid.UUID, position int, side int) (types.FormStateConditionSide, error) {
+func getStateConditionSide_tx(ctx context.Context, tx pgx.Tx, formStateId uuid.UUID, position int, side int) (types.FormStateConditionSide, error) {
var s types.FormStateConditionSide
- err := db.Pool.QueryRow(db.Ctx, `
- SELECT collection_id, column_id, field_id, preset_id,
- role_id, brackets, content, value
+ err := tx.QueryRow(ctx, `
+ SELECT collection_id, column_id, field_id, form_state_id_result,
+ preset_id, role_id, variable_id, brackets, content, value
FROM app.form_state_condition_side
WHERE form_state_id = $1
AND form_state_condition_position = $2
AND side = $3
- `, formStateId, position, side).Scan(&s.CollectionId, &s.ColumnId,
- &s.FieldId, &s.PresetId, &s.RoleId, &s.Brackets, &s.Content,
- &s.Value)
+ `, formStateId, position, side).Scan(&s.CollectionId, &s.ColumnId, &s.FieldId, &s.FormStateId,
+ &s.PresetId, &s.RoleId, &s.VariableId, &s.Brackets, &s.Content, &s.Value)
return s, err
}
-func getStateEffects(formStateId uuid.UUID) ([]types.FormStateEffect, error) {
-
+func getStateEffects_tx(ctx context.Context, tx pgx.Tx, formStateId uuid.UUID) ([]types.FormStateEffect, error) {
effects := make([]types.FormStateEffect, 0)
- rows, err := db.Pool.Query(db.Ctx, `
- SELECT field_id, tab_id, new_state
+ rows, err := tx.Query(ctx, `
+ SELECT field_id, form_action_id, tab_id, new_data, new_state
FROM app.form_state_effect
WHERE form_state_id = $1
ORDER BY field_id ASC, tab_id ASC
@@ -121,7 +114,7 @@ func getStateEffects(formStateId uuid.UUID) ([]types.FormStateEffect, error) {
for rows.Next() {
var e types.FormStateEffect
- if err := rows.Scan(&e.FieldId, &e.TabId, &e.NewState); err != nil {
+ if err := rows.Scan(&e.FieldId, &e.FormActionId, &e.TabId, &e.NewData, &e.NewState); err != nil {
return effects, err
}
effects = append(effects, e)
@@ -130,14 +123,13 @@ func getStateEffects(formStateId uuid.UUID) ([]types.FormStateEffect, error) {
}
// set given form states, deletes non-specified states
-func setStates_tx(tx pgx.Tx, formId uuid.UUID, states []types.FormState) error {
+func setStates_tx(ctx context.Context, tx pgx.Tx, formId uuid.UUID, states []types.FormState) error {
var err error
stateIds := make([]uuid.UUID, 0)
for _, s := range states {
-
- s.Id, err = setState_tx(tx, formId, s)
+ s.Id, err = setState_tx(ctx, tx, formId, s)
if err != nil {
return err
}
@@ -145,7 +137,7 @@ func setStates_tx(tx pgx.Tx, formId uuid.UUID, states []types.FormState) error {
}
// remove non-specified states
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.form_state
WHERE form_id = $1
AND id <> ALL($2)
@@ -156,22 +148,22 @@ func setStates_tx(tx pgx.Tx, formId uuid.UUID, states []types.FormState) error {
}
// sets new/existing form state, returns form state ID
-func setState_tx(tx pgx.Tx, formId uuid.UUID, state types.FormState) (uuid.UUID, error) {
+func setState_tx(ctx context.Context, tx pgx.Tx, formId uuid.UUID, state types.FormState) (uuid.UUID, error) {
- known, err := schema.CheckCreateId_tx(tx, &state.Id, "form_state", "id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &state.Id, "form_state", "id")
if err != nil {
return state.Id, err
}
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.form_state SET description = $1
WHERE id = $2
`, state.Description, state.Id); err != nil {
return state.Id, err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.form_state (id, form_id, description)
VALUES ($1,$2,$3)
`, state.Id, formId, state.Description); err != nil {
@@ -180,7 +172,7 @@ func setState_tx(tx pgx.Tx, formId uuid.UUID, state types.FormState) (uuid.UUID,
}
// reset conditions
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.form_state_condition
WHERE form_state_id = $1
`, state.Id); err != nil {
@@ -188,11 +180,7 @@ func setState_tx(tx pgx.Tx, formId uuid.UUID, state types.FormState) (uuid.UUID,
}
for i, c := range state.Conditions {
-
- // fix legacy conditions format < 2.7
- c = compatible.MigrateNewConditions(c)
-
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.form_state_condition (
form_state_id, position, connector, operator
)
@@ -200,16 +188,16 @@ func setState_tx(tx pgx.Tx, formId uuid.UUID, state types.FormState) (uuid.UUID,
`, state.Id, i, c.Connector, c.Operator); err != nil {
return state.Id, err
}
- if err := setStateConditionSide_tx(tx, state.Id, i, 0, c.Side0); err != nil {
+ if err := setStateConditionSide_tx(ctx, tx, state.Id, i, 0, c.Side0); err != nil {
return state.Id, err
}
- if err := setStateConditionSide_tx(tx, state.Id, i, 1, c.Side1); err != nil {
+ if err := setStateConditionSide_tx(ctx, tx, state.Id, i, 1, c.Side1); err != nil {
return state.Id, err
}
}
// reset effects
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.form_state_effect
WHERE form_state_id = $1
`, state.Id); err != nil {
@@ -217,29 +205,29 @@ func setState_tx(tx pgx.Tx, formId uuid.UUID, state types.FormState) (uuid.UUID,
}
for _, e := range state.Effects {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.form_state_effect (
- form_state_id, field_id, tab_id, new_state
+ form_state_id, field_id, form_action_id, tab_id, new_data, new_state
)
- VALUES ($1,$2,$3,$4)
- `, state.Id, e.FieldId, e.TabId, e.NewState); err != nil {
+ VALUES ($1,$2,$3,$4,$5,$6)
+ `, state.Id, e.FieldId, e.FormActionId, e.TabId, e.NewData, e.NewState); err != nil {
return state.Id, err
}
}
return state.Id, nil
}
-func setStateConditionSide_tx(tx pgx.Tx, formStateId uuid.UUID,
+func setStateConditionSide_tx(ctx context.Context, tx pgx.Tx, formStateId uuid.UUID,
position int, side int, s types.FormStateConditionSide) error {
- _, err := tx.Exec(db.Ctx, `
+ _, err := tx.Exec(ctx, `
INSERT INTO app.form_state_condition_side (
- form_state_id, form_state_condition_position, side,
- collection_id, column_id, field_id, preset_id, role_id,
- brackets, content, value
+ form_state_id, form_state_condition_position, side, collection_id,
+ column_id, field_id, form_state_id_result, preset_id, role_id,
+ variable_id, brackets, content, value
)
- VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11)
- `, formStateId, position, side, s.CollectionId, s.ColumnId, s.FieldId,
- s.PresetId, s.RoleId, s.Brackets, s.Content, s.Value)
+ VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13)
+ `, formStateId, position, side, s.CollectionId, s.ColumnId, s.FieldId, s.FormStateId,
+ s.PresetId, s.RoleId, s.VariableId, s.Brackets, s.Content, s.Value)
return err
}
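
The new condition-side columns (`form_state_id_result`, `variable_id`) use the same nullable-reference pattern seen throughout the patch: optional foreign keys scan into `pgtype.UUID` values whose `Valid` flag marks whether a reference is set, and writers build `pgtype.UUID{Bytes: ..., Valid: true}` explicitly. A small, self-contained illustration with invented struct and helper names:

```go
package example

import (
	"github.com/gofrs/uuid"
	"github.com/jackc/pgx/v5/pgtype"
)

// conditionSide holds a few optional references, as scanned from nullable
// columns; each pgtype.UUID carries its own Valid flag.
type conditionSide struct {
	FieldId     pgtype.UUID
	FormStateId pgtype.UUID
	VariableId  pgtype.UUID
}

// asRef wraps a plain UUID the way the setters do when a non-null
// reference is needed, e.g. pgtype.UUID{Bytes: fieldId, Valid: true}.
func asRef(id uuid.UUID) pgtype.UUID {
	return pgtype.UUID{Bytes: id, Valid: true}
}

// variableRef returns the referenced variable ID, if one is set.
func variableRef(s conditionSide) (uuid.UUID, bool) {
	if !s.VariableId.Valid {
		return uuid.Nil, false
	}
	return uuid.UUID(s.VariableId.Bytes), true
}
```
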
diff --git a/schema/icon/icon.go b/schema/icon/icon.go
index 895ce81e..f0231f21 100644
--- a/schema/icon/icon.go
+++ b/schema/icon/icon.go
@@ -1,7 +1,7 @@
package icon
import (
- "r3/db"
+ "context"
"r3/schema"
"r3/types"
@@ -9,16 +9,16 @@ import (
"github.com/jackc/pgx/v5"
)
-func Del_tx(tx pgx.Tx, id uuid.UUID) error {
- _, err := tx.Exec(db.Ctx, `DELETE FROM app.icon WHERE id = $1 `, id)
+func Del_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
+ _, err := tx.Exec(ctx, `DELETE FROM app.icon WHERE id = $1 `, id)
return err
}
-func Get(moduleId uuid.UUID) ([]types.Icon, error) {
+func Get_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID) ([]types.Icon, error) {
icons := make([]types.Icon, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT id, name, file
FROM app.icon
WHERE module_id = $1
@@ -41,15 +41,15 @@ func Get(moduleId uuid.UUID) ([]types.Icon, error) {
return icons, nil
}
-func Set_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, name string, file []byte, setName bool) error {
+func Set_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, name string, file []byte, setName bool) error {
- known, err := schema.CheckCreateId_tx(tx, &id, "icon", "id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &id, "icon", "id")
if err != nil {
return err
}
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.icon
SET file = $1
WHERE module_id = $2
@@ -58,7 +58,7 @@ func Set_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, name string, file []byt
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.icon (id,module_id,name,file)
VALUES ($1,$2,'',$3)
`, id, moduleId, file); err != nil {
@@ -67,13 +67,13 @@ func Set_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, name string, file []byt
}
if setName {
- return SetName_tx(tx, moduleId, id, name)
+ return SetName_tx(ctx, tx, moduleId, id, name)
}
return nil
}
-func SetName_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, name string) error {
- _, err := tx.Exec(db.Ctx, `
+func SetName_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, name string) error {
+ _, err := tx.Exec(ctx, `
UPDATE app.icon
SET name = $1
WHERE module_id = $2
diff --git a/schema/jsFunction/jsFunction.go b/schema/jsFunction/jsFunction.go
index e6c12b28..86f61bb9 100644
--- a/schema/jsFunction/jsFunction.go
+++ b/schema/jsFunction/jsFunction.go
@@ -1,16 +1,16 @@
package jsFunction
import (
+ "context"
"fmt"
- "r3/db"
"r3/schema"
"r3/schema/caption"
- "r3/tools"
"r3/types"
"regexp"
+ "slices"
+ "strings"
"github.com/gofrs/uuid"
- "github.com/jackc/pgx/v5/pgtype"
"github.com/jackc/pgx/v5"
)
@@ -19,44 +19,44 @@ var (
rxUuid = `[a-z0-9\-]{36}` // naive regex for UUIDv4 format
)
-func Del_tx(tx pgx.Tx, id uuid.UUID) error {
- _, err := tx.Exec(db.Ctx, `
+func Del_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
+ _, err := tx.Exec(ctx, `
DELETE FROM app.js_function
WHERE id = $1
`, id)
return err
}
-func Get(moduleId uuid.UUID) ([]types.JsFunction, error) {
+func Get_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID) ([]types.JsFunction, error) {
var err error
functions := make([]types.JsFunction, 0)
- rows, err := db.Pool.Query(db.Ctx, `
- SELECT id, form_id, name, code_args, code_function, code_returns
+ rows, err := tx.Query(ctx, `
+ SELECT id, form_id, name, code_args, code_function, code_returns, is_client_event_exec
FROM app.js_function
WHERE module_id = $1
- ORDER BY name ASC
+		ORDER BY form_id ASC, name ASC -- sort by both as name is only unique in combination
`, moduleId)
if err != nil {
return functions, err
}
+ defer rows.Close()
for rows.Next() {
var f types.JsFunction
- if err := rows.Scan(&f.Id, &f.FormId, &f.Name,
- &f.CodeArgs, &f.CodeFunction, &f.CodeReturns); err != nil {
+ if err := rows.Scan(&f.Id, &f.FormId, &f.Name, &f.CodeArgs,
+ &f.CodeFunction, &f.CodeReturns, &f.IsClientEventExec); err != nil {
return functions, err
}
functions = append(functions, f)
}
- rows.Close()
for i, f := range functions {
f.ModuleId = moduleId
- f.Captions, err = caption.Get("js_function", f.Id, []string{"jsFunctionTitle", "jsFunctionDesc"})
+ f.Captions, err = caption.Get_tx(ctx, tx, "js_function", f.Id, []string{"jsFunctionTitle", "jsFunctionDesc"})
if err != nil {
return functions, err
}
@@ -64,75 +64,79 @@ func Get(moduleId uuid.UUID) ([]types.JsFunction, error) {
}
return functions, nil
}
-func Set_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, formId pgtype.UUID,
- name string, codeArgs string, codeFunction string, codeReturns string,
- captions types.CaptionMap) error {
+func Set_tx(ctx context.Context, tx pgx.Tx, fnc types.JsFunction) error {
- if name == "" {
+	// remove the only invalid character (dot), as it is used for form function references
+ fnc.Name = strings.Replace(fnc.Name, ".", "", -1)
+
+ if fnc.Name == "" {
return fmt.Errorf("function name must not be empty")
}
- known, err := schema.CheckCreateId_tx(tx, &id, "js_function", "id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &fnc.Id, "js_function", "id")
if err != nil {
return err
}
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.js_function
- SET name = $1, code_args = $2, code_function = $3, code_returns = $4
- WHERE id = $5
- `, name, codeArgs, codeFunction, codeReturns, id); err != nil {
+ SET name = $1, code_args = $2, code_function = $3, code_returns = $4, is_client_event_exec = $5
+ WHERE id = $6
+ `, fnc.Name, fnc.CodeArgs, fnc.CodeFunction, fnc.CodeReturns, fnc.IsClientEventExec, fnc.Id); err != nil {
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
- INSERT INTO app.js_function (id, module_id,
- form_id, name, code_args, code_function, code_returns)
- VALUES ($1,$2,$3,$4,$5,$6,$7)
- `, id, moduleId, formId, name, codeArgs, codeFunction, codeReturns); err != nil {
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO app.js_function (id, module_id, form_id, name,
+ code_args, code_function, code_returns, is_client_event_exec)
+ VALUES ($1,$2,$3,$4,$5,$6,$7,$8)
+ `, fnc.Id, fnc.ModuleId, fnc.FormId, fnc.Name, fnc.CodeArgs, fnc.CodeFunction, fnc.CodeReturns, fnc.IsClientEventExec); err != nil {
return err
}
}
// set captions
- if err := caption.Set_tx(tx, id, captions); err != nil {
+ if err := caption.Set_tx(ctx, tx, fnc.Id, fnc.Captions); err != nil {
return err
}
// set dependencies
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.js_function_depends
WHERE js_function_id = $1
- `, id); err != nil {
+ `, fnc.Id); err != nil {
return err
}
- if err := storeDependencies_tx(tx, id, "collection", fmt.Sprintf(`%s\.collection_(read|update)\('(%s)'`, rxPrefix, rxUuid), 2, codeFunction); err != nil {
+ if err := storeDependencies_tx(ctx, tx, fnc.Id, "collection", fmt.Sprintf(`%s\.collection_(read|update)\('(%s)'`, rxPrefix, rxUuid), 2, fnc.CodeFunction); err != nil {
+ return err
+ }
+ if err := storeDependencies_tx(ctx, tx, fnc.Id, "field", fmt.Sprintf(`%s\.(get|set)_field_(value|value_changed|caption|chart|error|focus|order|file_links)\('(%s)'`, rxPrefix, rxUuid), 3, fnc.CodeFunction); err != nil {
return err
}
- if err := storeDependencies_tx(tx, id, "field", fmt.Sprintf(`%s\.(get|set)_field_(value|caption)\('(%s)'`, rxPrefix, rxUuid), 3, codeFunction); err != nil {
+ if err := storeDependencies_tx(ctx, tx, fnc.Id, "js_function", fmt.Sprintf(`%s\.call_frontend\('(%s)'`, rxPrefix, rxUuid), 1, fnc.CodeFunction); err != nil {
return err
}
- if err := storeDependencies_tx(tx, id, "js_function", fmt.Sprintf(`%s\.call_frontend\('(%s)'`, rxPrefix, rxUuid), 1, codeFunction); err != nil {
+ if err := storeDependencies_tx(ctx, tx, fnc.Id, "pg_function", fmt.Sprintf(`%s\.call_backend\('(%s)'`, rxPrefix, rxUuid), 1, fnc.CodeFunction); err != nil {
return err
}
- if err := storeDependencies_tx(tx, id, "pg_function", fmt.Sprintf(`%s\.call_backend\('(%s)'`, rxPrefix, rxUuid), 1, codeFunction); err != nil {
+ if err := storeDependencies_tx(ctx, tx, fnc.Id, "form", fmt.Sprintf(`%s\.open_form\('(%s)'`, rxPrefix, rxUuid), 1, fnc.CodeFunction); err != nil {
return err
}
- if err := storeDependencies_tx(tx, id, "form", fmt.Sprintf(`%s\.open_form\('(%s)'`, rxPrefix, rxUuid), 1, codeFunction); err != nil {
+ if err := storeDependencies_tx(ctx, tx, fnc.Id, "role", fmt.Sprintf(`%s\.has_role\('(%s)'`, rxPrefix, rxUuid), 1, fnc.CodeFunction); err != nil {
return err
}
- if err := storeDependencies_tx(tx, id, "role", fmt.Sprintf(`%s\.has_role\('(%s)'`, rxPrefix, rxUuid), 1, codeFunction); err != nil {
+ if err := storeDependencies_tx(ctx, tx, fnc.Id, "variable", fmt.Sprintf(`%s\.(get|set)_variable\('(%s)'`, rxPrefix, rxUuid), 2, fnc.CodeFunction); err != nil {
return err
}
return nil
}
-func storeDependencies_tx(tx pgx.Tx, functionId uuid.UUID, entity string,
+func storeDependencies_tx(ctx context.Context, tx pgx.Tx, functionId uuid.UUID, entity string,
regex string, submatchIndexId int, body string) error {
- if !tools.StringInSlice(entity, []string{"collection", "field", "form", "js_function", "pg_function", "role"}) {
+ if !slices.Contains([]string{"collection", "field", "form", "js_function", "pg_function", "role", "variable"}, entity) {
return fmt.Errorf("unknown JS function dependency '%s'", entity)
}
@@ -157,7 +161,7 @@ func storeDependencies_tx(tx pgx.Tx, functionId uuid.UUID, entity string,
}
idMap[id] = true
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
INSERT INTO app.js_function_depends (js_function_id, %s_id_on)
VALUES ($1,$2)
`, entity), functionId, id); err != nil {
diff --git a/schema/loginForm/loginForm.go b/schema/loginForm/loginForm.go
index fcee1e69..85451749 100644
--- a/schema/loginForm/loginForm.go
+++ b/schema/loginForm/loginForm.go
@@ -1,7 +1,7 @@
package loginForm
import (
- "r3/db"
+ "context"
"r3/schema"
"r3/schema/caption"
"r3/types"
@@ -10,15 +10,15 @@ import (
"github.com/jackc/pgx/v5"
)
-func Del_tx(tx pgx.Tx, id uuid.UUID) error {
- _, err := tx.Exec(db.Ctx, `DELETE FROM app.login_form WHERE id = $1`, id)
+func Del_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
+ _, err := tx.Exec(ctx, `DELETE FROM app.login_form WHERE id = $1`, id)
return err
}
-func Get(moduleId uuid.UUID) ([]types.LoginForm, error) {
+func Get_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID) ([]types.LoginForm, error) {
loginForms := make([]types.LoginForm, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT id, attribute_id_login, attribute_id_lookup, form_id, name
FROM app.login_form
WHERE module_id = $1
@@ -27,6 +27,7 @@ func Get(moduleId uuid.UUID) ([]types.LoginForm, error) {
if err != nil {
return loginForms, err
}
+ defer rows.Close()
for rows.Next() {
var l types.LoginForm
@@ -38,30 +39,28 @@ func Get(moduleId uuid.UUID) ([]types.LoginForm, error) {
l.ModuleId = moduleId
loginForms = append(loginForms, l)
}
- rows.Close()
// get captions
for i, l := range loginForms {
- l.Captions, err = caption.Get("login_form", l.Id, []string{"loginFormTitle"})
+ loginForms[i].Captions, err = caption.Get_tx(ctx, tx, "login_form", l.Id, []string{"loginFormTitle"})
if err != nil {
return loginForms, err
}
- loginForms[i] = l
}
return loginForms, nil
}
-func Set_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID,
+func Set_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID,
attributeIdLogin uuid.UUID, attributeIdLookup uuid.UUID, formId uuid.UUID,
name string, captions types.CaptionMap) error {
- known, err := schema.CheckCreateId_tx(tx, &id, "login_form", "id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &id, "login_form", "id")
if err != nil {
return err
}
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.login_form
SET attribute_id_login = $1, attribute_id_lookup = $2,
form_id = $3, name = $4
@@ -70,7 +69,7 @@ func Set_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID,
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.login_form (
id,module_id,attribute_id_login,attribute_id_lookup,form_id,name
)
@@ -81,7 +80,7 @@ func Set_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID,
}
// set captions
- if err := caption.Set_tx(tx, id, captions); err != nil {
+ if err := caption.Set_tx(ctx, tx, id, captions); err != nil {
return err
}
return nil
diff --git a/schema/lookups.go b/schema/lookups.go
index a07a477c..59259e50 100644
--- a/schema/lookups.go
+++ b/schema/lookups.go
@@ -1,16 +1,16 @@
package schema
import (
+ "context"
"fmt"
- "r3/db"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
)
-func GetModuleNameById_tx(tx pgx.Tx, id uuid.UUID) (string, error) {
+func GetModuleNameById_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) (string, error) {
var name string
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT name
FROM app.module
WHERE id = $1
@@ -19,10 +19,10 @@ func GetModuleNameById_tx(tx pgx.Tx, id uuid.UUID) (string, error) {
}
return name, nil
}
-func GetModuleDetailsByRelationId_tx(tx pgx.Tx, id uuid.UUID) (uuid.UUID, string, error) {
+func GetModuleDetailsByRelationId_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) (uuid.UUID, string, error) {
var moduleId uuid.UUID
var name string
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT id, name
FROM app.module
WHERE id = (
@@ -37,9 +37,9 @@ func GetModuleDetailsByRelationId_tx(tx pgx.Tx, id uuid.UUID) (uuid.UUID, string
}
// returns module and relation names for given relation ID
-func GetRelationNamesById_tx(tx pgx.Tx, id uuid.UUID) (string, string, error) {
+func GetRelationNamesById_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) (string, string, error) {
var moduleName, name string
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT r.name, m.name
FROM app.relation AS r
INNER JOIN app.module AS m ON m.id = r.module_id
@@ -49,10 +49,10 @@ func GetRelationNamesById_tx(tx pgx.Tx, id uuid.UUID) (string, string, error) {
}
return moduleName, name, nil
}
-func GetRelationDetailsById_tx(tx pgx.Tx, id uuid.UUID) (string, bool, error) {
+func GetRelationDetailsById_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) (string, bool, error) {
var name string
var encryption bool
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT name, encryption
FROM app.relation
WHERE id = $1
@@ -63,11 +63,11 @@ func GetRelationDetailsById_tx(tx pgx.Tx, id uuid.UUID) (string, bool, error) {
}
// returns module, relation and attribute names as well as attribute content for given attribute ID
-func GetAttributeDetailsById_tx(tx pgx.Tx, id uuid.UUID) (string,
+func GetAttributeDetailsById_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) (string,
string, string, string, error) {
var moduleName, relationName, name, content string
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT m.name, r.name, a.name, a.content
FROM app.attribute AS a
INNER JOIN app.relation AS r ON r.id = a.relation_id
@@ -78,9 +78,9 @@ func GetAttributeDetailsById_tx(tx pgx.Tx, id uuid.UUID) (string,
}
return moduleName, relationName, name, content, nil
}
-func GetAttributeNameById_tx(tx pgx.Tx, id uuid.UUID) (string, error) {
+func GetAttributeNameById_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) (string, error) {
var name string
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT name
FROM app.attribute
WHERE id = $1
@@ -89,9 +89,9 @@ func GetAttributeNameById_tx(tx pgx.Tx, id uuid.UUID) (string, error) {
}
return name, nil
}
-func GetAttributeContentByRelationPk_tx(tx pgx.Tx, relationId uuid.UUID) (string, error) {
+func GetAttributeContentByRelationPk_tx(ctx context.Context, tx pgx.Tx, relationId uuid.UUID) (string, error) {
var content string
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT content
FROM app.attribute
WHERE relation_id = $1
@@ -103,9 +103,9 @@ func GetAttributeContentByRelationPk_tx(tx pgx.Tx, relationId uuid.UUID) (string
return content, nil
}
-func GetFormNameById_tx(tx pgx.Tx, id uuid.UUID) (string, error) {
+func GetFormNameById_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) (string, error) {
var name string
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT name
FROM app.form
WHERE id = $1
@@ -116,9 +116,9 @@ func GetFormNameById_tx(tx pgx.Tx, id uuid.UUID) (string, error) {
}
// returns module and PG function names+arguments for given PG function ID
-func GetPgFunctionNameById_tx(tx pgx.Tx, id uuid.UUID) (string, error) {
+func GetPgFunctionNameById_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) (string, error) {
var name string
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT name
FROM app.pg_function
WHERE id = $1
@@ -127,10 +127,10 @@ func GetPgFunctionNameById_tx(tx pgx.Tx, id uuid.UUID) (string, error) {
}
return name, nil
}
-func GetPgFunctionDetailsById_tx(tx pgx.Tx, id uuid.UUID) (string, string, string, bool, error) {
+func GetPgFunctionDetailsById_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) (string, string, string, bool, error) {
var moduleName, name, args string
var isTrigger bool
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT f.name, f.code_args, f.is_trigger, m.name
FROM app.pg_function AS f
INNER JOIN app.module AS m ON m.id = f.module_id
@@ -142,9 +142,9 @@ func GetPgFunctionDetailsById_tx(tx pgx.Tx, id uuid.UUID) (string, string, strin
}
// returns module and relation names for given PG trigger ID
-func GetPgTriggerNamesById_tx(tx pgx.Tx, id uuid.UUID) (string, string, error) {
+func GetPgTriggerNamesById_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) (string, string, error) {
var moduleName, relationName string
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT r.name, m.name
FROM app.pg_trigger AS t
INNER JOIN app.relation AS r ON r.id = t.relation_id
@@ -157,9 +157,9 @@ func GetPgTriggerNamesById_tx(tx pgx.Tx, id uuid.UUID) (string, string, error) {
}
// returns module and relation names for given PG index ID
-func GetPgIndexNamesById_tx(tx pgx.Tx, id uuid.UUID) (string, string, error) {
+func GetPgIndexNamesById_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) (string, string, error) {
var moduleName, relationName string
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT r.name, m.name
FROM app.pg_index AS i
INNER JOIN app.relation AS r ON r.id = i.relation_id
@@ -170,3 +170,19 @@ func GetPgIndexNamesById_tx(tx pgx.Tx, id uuid.UUID) (string, string, error) {
}
return moduleName, relationName, nil
}
+
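+// GetIsFormBound_tx reports whether the given entity ("js_function" or "variable") is bound to a form (its form_id is set)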
+func GetIsFormBound_tx(ctx context.Context, tx pgx.Tx, entity string, id uuid.UUID) (bool, error) {
+
+ if entity != "js_function" && entity != "variable" {
+ return false, fmt.Errorf("invalid entity '%s'", entity)
+ }
+
+ isFormBound := false
+ err := tx.QueryRow(ctx, fmt.Sprintf(`
+ SELECT form_id IS NOT NULL
+ FROM app.%s
+ WHERE id = $1
+ `, entity), id).Scan(&isFormBound)
+
+ return isFormBound, err
+}
diff --git a/schema/menu/menu.go b/schema/menu/menu.go
deleted file mode 100644
index 158be11a..00000000
--- a/schema/menu/menu.go
+++ /dev/null
@@ -1,143 +0,0 @@
-package menu
-
-import (
- "fmt"
- "r3/db"
- "r3/schema"
- "r3/schema/caption"
- "r3/schema/collection/consumer"
- "r3/types"
-
- "github.com/gofrs/uuid"
- "github.com/jackc/pgx/v5"
- "github.com/jackc/pgx/v5/pgtype"
-)
-
-func Copy_tx(tx pgx.Tx, moduleId uuid.UUID, moduleIdNew uuid.UUID) error {
-
- menus, err := Get(moduleId, pgtype.UUID{})
- if err != nil {
- return err
- }
-
- // reset entity IDs
- menus = NilIds(menus, moduleIdNew)
-
- return Set_tx(tx, pgtype.UUID{}, menus)
-}
-
-func Del_tx(tx pgx.Tx, id uuid.UUID) error {
- _, err := tx.Exec(db.Ctx, `DELETE FROM app.menu WHERE id = $1`, id)
- return err
-}
-
-func Get(moduleId uuid.UUID, parentId pgtype.UUID) ([]types.Menu, error) {
-
- menus := make([]types.Menu, 0)
-
- nullCheck := "AND (parent_id IS NULL OR parent_id = $2)"
- if parentId.Valid {
- nullCheck = "AND parent_id = $2"
- }
-
- rows, err := db.Pool.Query(db.Ctx, fmt.Sprintf(`
- SELECT id, form_id, icon_id, show_children
- FROM app.menu
- WHERE module_id = $1
- %s
- ORDER BY position ASC
- `, nullCheck), moduleId, parentId)
- if err != nil {
- return menus, err
- }
-
- for rows.Next() {
- var m types.Menu
-
- if err := rows.Scan(&m.Id, &m.FormId, &m.IconId, &m.ShowChildren); err != nil {
- return menus, err
- }
- m.ModuleId = moduleId
- menus = append(menus, m)
- }
- rows.Close()
-
- for i, m := range menus {
-
- // get children & collections & captions
- m.Menus, err = Get(moduleId, pgtype.UUID{Bytes: m.Id, Valid: true})
- if err != nil {
- return menus, err
- }
- m.Collections, err = consumer.Get("menu", m.Id, "menuDisplay")
- if err != nil {
- return menus, err
- }
- m.Captions, err = caption.Get("menu", m.Id, []string{"menuTitle"})
- if err != nil {
- return menus, err
- }
- menus[i] = m
- }
- return menus, nil
-}
-
-func Set_tx(tx pgx.Tx, parentId pgtype.UUID, menus []types.Menu) error {
-
- for i, m := range menus {
- known, err := schema.CheckCreateId_tx(tx, &m.Id, "menu", "id")
- if err != nil {
- return err
- }
-
- if known {
- if _, err := tx.Exec(db.Ctx, `
- UPDATE app.menu
- SET parent_id = $1, form_id = $2, icon_id = $3, position = $4,
- show_children = $5
- WHERE id = $6
- `, parentId, m.FormId, m.IconId, i, m.ShowChildren, m.Id); err != nil {
- return err
- }
- } else {
- if _, err := tx.Exec(db.Ctx, `
- INSERT INTO app.menu (id, module_id, parent_id, form_id,
- icon_id, position, show_children)
- VALUES ($1,$2,$3,$4,$5,$6,$7)
- `, m.Id, m.ModuleId, parentId, m.FormId, m.IconId, i, m.ShowChildren); err != nil {
- return err
- }
- }
-
- // set children
- if err := Set_tx(tx, pgtype.UUID{Bytes: m.Id, Valid: true}, m.Menus); err != nil {
- return err
- }
-
- // set collections
- if err := consumer.Set_tx(tx, "menu", m.Id, "menuDisplay", m.Collections); err != nil {
- return err
- }
-
- // set captions
- if err := caption.Set_tx(tx, m.Id, m.Captions); err != nil {
- return err
- }
- }
- return nil
-}
-
-// nil menu IDs and set new module
-func NilIds(menus []types.Menu, moduleIdNew uuid.UUID) []types.Menu {
-
- for i, _ := range menus {
- menus[i].Id = uuid.Nil
- menus[i].ModuleId = moduleIdNew
-
- for j, _ := range menus[i].Collections {
- menus[i].Collections[j].Id = uuid.Nil
- }
- menus[i].Menus = NilIds(menus[i].Menus, moduleIdNew)
- }
- return menus
-}
diff --git a/schema/menuTab/menuTab.go b/schema/menuTab/menuTab.go
new file mode 100644
index 00000000..88580678
--- /dev/null
+++ b/schema/menuTab/menuTab.go
@@ -0,0 +1,211 @@
+package menuTab
+
+import (
+ "context"
+ "fmt"
+ "r3/schema"
+ "r3/schema/caption"
+ "r3/schema/collection/consumer"
+ "r3/types"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+ "github.com/jackc/pgx/v5/pgtype"
+)
+
+func Del_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
+ _, err := tx.Exec(ctx, `
+ DELETE FROM app.menu_tab
+ WHERE id = $1
+ `, id)
+ return err
+}
+
+func Get_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID) ([]types.MenuTab, error) {
+ menuTabs := make([]types.MenuTab, 0)
+
+ rows, err := tx.Query(ctx, `
+ SELECT id, icon_id
+ FROM app.menu_tab
+ WHERE module_id = $1
+ ORDER BY position ASC
+ `, moduleId)
+ if err != nil {
+ return menuTabs, err
+ }
+ defer rows.Close()
+
+ for rows.Next() {
+ var mt types.MenuTab
+ if err := rows.Scan(&mt.Id, &mt.IconId); err != nil {
+ return menuTabs, err
+ }
+ mt.ModuleId = moduleId
+ menuTabs = append(menuTabs, mt)
+ }
+
+ // get menus and captions
+ for i, mt := range menuTabs {
+
+ mt.Menus, err = getMenus_tx(ctx, tx, mt.Id, pgtype.UUID{})
+ if err != nil {
+ return menuTabs, err
+ }
+
+ mt.Captions, err = caption.Get_tx(ctx, tx, "menu_tab", mt.Id, []string{"menuTabTitle"})
+ if err != nil {
+ return menuTabs, err
+ }
+ menuTabs[i] = mt
+ }
+ return menuTabs, nil
+}
+
+func Set_tx(ctx context.Context, tx pgx.Tx, position int, mt types.MenuTab) error {
+
+ known, err := schema.CheckCreateId_tx(ctx, tx, &mt.Id, "menu_tab", "id")
+ if err != nil {
+ return err
+ }
+
+ if known {
+ if _, err := tx.Exec(ctx, `
+ UPDATE app.menu_tab
+ SET icon_id = $1, position = $2
+ WHERE id = $3
+ `, mt.IconId, position, mt.Id); err != nil {
+ return err
+ }
+ } else {
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO app.menu_tab (id, module_id, icon_id, position)
+ VALUES ($1,$2,$3,$4)
+ `, mt.Id, mt.ModuleId, mt.IconId, position); err != nil {
+ return err
+ }
+ }
+
+ // set menus
+ if err := setMenus_tx(ctx, tx, mt.Id, pgtype.UUID{}, mt.Menus); err != nil {
+ return err
+ }
+
+ // set captions
+ return caption.Set_tx(ctx, tx, mt.Id, mt.Captions)
+}
+
+// menus
+func getMenus_tx(ctx context.Context, tx pgx.Tx, menuTabId uuid.UUID, parentId pgtype.UUID) ([]types.Menu, error) {
+
+ menus := make([]types.Menu, 0)
+
+ nullCheck := "AND (parent_id IS NULL OR parent_id = $2)"
+ if parentId.Valid {
+ nullCheck = "AND parent_id = $2"
+ }
+
+ rows, err := tx.Query(ctx, fmt.Sprintf(`
+ SELECT id, form_id, icon_id, show_children, color
+ FROM app.menu
+ WHERE menu_tab_id = $1
+ %s
+ ORDER BY position ASC
+ `, nullCheck), menuTabId, parentId)
+ if err != nil {
+ return menus, err
+ }
+ defer rows.Close()
+
+ for rows.Next() {
+ var m types.Menu
+
+ if err := rows.Scan(&m.Id, &m.FormId, &m.IconId, &m.ShowChildren, &m.Color); err != nil {
+ return menus, err
+ }
+ menus = append(menus, m)
+ }
+
+ // get children & collections & captions
+ for i, m := range menus {
+ m.Menus, err = getMenus_tx(ctx, tx, menuTabId, pgtype.UUID{Bytes: m.Id, Valid: true})
+ if err != nil {
+ return menus, err
+ }
+ m.Collections, err = consumer.Get_tx(ctx, tx, "menu", m.Id, "menuDisplay")
+ if err != nil {
+ return menus, err
+ }
+ m.Captions, err = caption.Get_tx(ctx, tx, "menu", m.Id, []string{"menuTitle"})
+ if err != nil {
+ return menus, err
+ }
+ menus[i] = m
+ }
+ return menus, nil
+}
+func setMenus_tx(ctx context.Context, tx pgx.Tx, menuTabId uuid.UUID, parentId pgtype.UUID, menus []types.Menu) error {
+
+ idsKeep := make([]uuid.UUID, 0)
+ for i, m := range menus {
+ known, err := schema.CheckCreateId_tx(ctx, tx, &m.Id, "menu", "id")
+ if err != nil {
+ return err
+ }
+ idsKeep = append(idsKeep, m.Id)
+
+ if known {
+ if _, err := tx.Exec(ctx, `
+ UPDATE app.menu
+ SET menu_tab_id = $1, parent_id = $2, form_id = $3, icon_id = $4,
+ position = $5, show_children = $6, color = $7
+ WHERE id = $8
+ `, menuTabId, parentId, m.FormId, m.IconId, i, m.ShowChildren, m.Color, m.Id); err != nil {
+ return err
+ }
+ } else {
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO app.menu (id, menu_tab_id, parent_id,
+ form_id, icon_id, position, show_children, color)
+ VALUES ($1,$2,$3,$4,$5,$6,$7,$8)
+ `, m.Id, menuTabId, parentId, m.FormId, m.IconId, i, m.ShowChildren, m.Color); err != nil {
+ return err
+ }
+ }
+
+ // set children
+ if err := setMenus_tx(ctx, tx, menuTabId, pgtype.UUID{Bytes: m.Id, Valid: true}, m.Menus); err != nil {
+ return err
+ }
+
+ // set collections
+ if err := consumer.Set_tx(ctx, tx, "menu", m.Id, "menuDisplay", m.Collections); err != nil {
+ return err
+ }
+
+ // set captions
+ if err := caption.Set_tx(ctx, tx, m.Id, m.Captions); err != nil {
+ return err
+ }
+ }
+
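+	// delete menus on this level that were not kept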
+ if parentId.Valid {
+ if _, err := tx.Exec(ctx, `
+ DELETE FROM app.menu
+ WHERE menu_tab_id = $1
+ AND parent_id = $2
+ AND id <> ALL($3)
+ `, menuTabId, parentId.Bytes, idsKeep); err != nil {
+ return err
+ }
+ } else {
+ if _, err := tx.Exec(ctx, `
+ DELETE FROM app.menu
+ WHERE menu_tab_id = $1
+ AND parent_id IS NULL
+ AND id <> ALL($2)
+ `, menuTabId, idsKeep); err != nil {
+ return err
+ }
+ }
+ return nil
+}
diff --git a/schema/module/module.go b/schema/module/module.go
index 0b730454..c0fb29a5 100644
--- a/schema/module/module.go
+++ b/schema/module/module.go
@@ -1,36 +1,35 @@
package module
import (
+ "context"
"errors"
"fmt"
- "r3/compatible"
- "r3/db"
+ "r3/config/module_meta"
"r3/db/check"
- "r3/module_option"
"r3/schema"
"r3/schema/article"
"r3/schema/attribute"
"r3/schema/caption"
+ "r3/schema/compatible"
"r3/schema/pgFunction"
- "r3/tools"
"r3/types"
+ "slices"
"strings"
"github.com/gofrs/uuid"
- "github.com/jackc/pgx/v5/pgtype"
"github.com/jackc/pgx/v5"
)
-func Del_tx(tx pgx.Tx, id uuid.UUID) error {
+func Del_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
- name, err := schema.GetModuleNameById_tx(tx, id)
+ name, err := schema.GetModuleNameById_tx(ctx, tx, id)
if err != nil {
return err
}
// drop e2ee data key relations for module relations with encryption
relIdsEncrypted := make([]uuid.UUID, 0)
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT ARRAY_AGG(id)
FROM app.relation
WHERE module_id = $1
@@ -40,7 +39,7 @@ func Del_tx(tx pgx.Tx, id uuid.UUID) error {
}
for _, relId := range relIdsEncrypted {
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
DROP TABLE IF EXISTS instance_e2ee."%s"
`, schema.GetEncKeyTableName(relId))); err != nil {
return err
@@ -49,7 +48,7 @@ func Del_tx(tx pgx.Tx, id uuid.UUID) error {
// drop file attribute relations
atrIdsFile := make([]uuid.UUID, 0)
- if err := db.Pool.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT ARRAY_AGG(id)
FROM app.attribute
WHERE relation_id IN (
@@ -63,38 +62,30 @@ func Del_tx(tx pgx.Tx, id uuid.UUID) error {
}
for _, atrId := range atrIdsFile {
- if err := attribute.FileRelationsDelete_tx(tx, atrId); err != nil {
+ if err := attribute.FileRelationsDelete_tx(ctx, tx, atrId); err != nil {
return err
}
}
// drop module schema
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`DROP SCHEMA "%s" CASCADE`,
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`DROP SCHEMA "%s" CASCADE`,
name)); err != nil {
return err
}
// delete module reference
- _, err = tx.Exec(db.Ctx, `DELETE FROM app.module WHERE id = $1`, id)
+ _, err = tx.Exec(ctx, `DELETE FROM app.module WHERE id = $1`, id)
return err
}
-func Get(ids []uuid.UUID) ([]types.Module, error) {
-
+func Get_tx(ctx context.Context, tx pgx.Tx, ids []uuid.UUID) ([]types.Module, error) {
modules := make([]types.Module, 0)
- sqlWheres := []string{}
- sqlValues := []interface{}{}
-
- // filter to specified module IDs
- if len(ids) != 0 {
- sqlWheres = append(sqlWheres, fmt.Sprintf("WHERE id = ANY($%d)", len(sqlValues)+1))
- sqlValues = append(sqlValues, ids)
- }
- rows, err := db.Pool.Query(db.Ctx, fmt.Sprintf(`
- SELECT id, parent_id, form_id, icon_id, name, color1, position,
- language_main, release_build, release_build_app, release_date,
+ rows, err := tx.Query(ctx, `
+ SELECT id, parent_id, form_id, icon_id, icon_id_pwa1, icon_id_pwa2,
+ js_function_id_on_login, pg_function_id_login_sync, name, name_pwa, name_pwa_short,
+ color1, position, language_main, release_build, release_build_app, release_date,
ARRAY(
SELECT module_id_on
FROM app.module_depends
@@ -114,44 +105,35 @@ func Get(ids []uuid.UUID) ([]types.Module, error) {
ORDER BY language_code ASC
) AS "languages"
FROM app.module AS m
- %s
- ORDER BY
- CASE
- WHEN parent_id IS NULL THEN name
- ELSE CONCAT((
- SELECT name
- FROM app.module
- WHERE id = m.parent_id
- ),'_',name)
- END
- `, strings.Join(sqlWheres, "\n")), sqlValues...)
+ WHERE id = ANY($1)
+ `, ids)
if err != nil {
return modules, err
}
+ defer rows.Close()
for rows.Next() {
var m types.Module
- if err := rows.Scan(&m.Id, &m.ParentId, &m.FormId, &m.IconId, &m.Name,
- &m.Color1, &m.Position, &m.LanguageMain, &m.ReleaseBuild,
- &m.ReleaseBuildApp, &m.ReleaseDate, &m.DependsOn, &m.ArticleIdsHelp,
- &m.Languages); err != nil {
+ if err := rows.Scan(&m.Id, &m.ParentId, &m.FormId, &m.IconId, &m.IconIdPwa1,
+ &m.IconIdPwa2, &m.JsFunctionIdOnLogin, &m.PgFunctionIdLoginSync, &m.Name,
+ &m.NamePwa, &m.NamePwaShort, &m.Color1, &m.Position, &m.LanguageMain,
+ &m.ReleaseBuild, &m.ReleaseBuildApp, &m.ReleaseDate, &m.DependsOn,
+ &m.ArticleIdsHelp, &m.Languages); err != nil {
- rows.Close()
return modules, err
}
modules = append(modules, m)
}
- rows.Close()
// get start forms & captions
for i, mod := range modules {
- mod.StartForms, err = getStartForms(mod.Id)
+ mod.StartForms, err = getStartForms_tx(ctx, tx, mod.Id)
if err != nil {
return modules, err
}
- mod.Captions, err = caption.Get("module", mod.Id, []string{"moduleTitle"})
+ mod.Captions, err = caption.Get_tx(ctx, tx, "module", mod.Id, []string{"moduleTitle"})
if err != nil {
return modules, err
}
@@ -160,196 +142,222 @@ func Get(ids []uuid.UUID) ([]types.Module, error) {
return modules, nil
}
-func Set_tx(tx pgx.Tx, id uuid.UUID, parentId pgtype.UUID,
- formId pgtype.UUID, iconId pgtype.UUID, name string, color1 string,
- position int, languageMain string, releaseBuild int, releaseBuildApp int,
- releaseDate int64, dependsOn []uuid.UUID, startForms []types.ModuleStartForm,
- languages []string, articleIdsHelp []uuid.UUID, captions types.CaptionMap) error {
+func Set_tx(ctx context.Context, tx pgx.Tx, mod types.Module) error {
+ _, err := SetReturnId_tx(ctx, tx, mod)
+ return err
+}
+func SetReturnId_tx(ctx context.Context, tx pgx.Tx, mod types.Module) (uuid.UUID, error) {
- if err := check.DbIdentifier(name); err != nil {
- return err
+ if err := check.DbIdentifier(mod.Name); err != nil {
+ return mod.Id, err
}
- if len(languageMain) != 5 {
- return errors.New("language code must have 5 characters")
+ if len(mod.LanguageMain) != 5 {
+ return mod.Id, errors.New("language code must have 5 characters")
}
- create := id == uuid.Nil
- known, err := schema.CheckCreateId_tx(tx, &id, "module", "id")
+ create := mod.Id == uuid.Nil
+ known, err := schema.CheckCreateId_tx(ctx, tx, &mod.Id, "module", "id")
if err != nil {
- return err
+ return mod.Id, err
+ }
+
+ if strings.HasPrefix(mod.Name, "instance") {
+ return mod.Id, fmt.Errorf("application name must not start with 'instance'")
}
if known {
var nameEx string
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT name
FROM app.module
WHERE id = $1
- `, id).Scan(&nameEx); err != nil {
- return err
+ `, mod.Id).Scan(&nameEx); err != nil {
+ return mod.Id, err
}
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.module SET parent_id = $1, form_id = $2, icon_id = $3,
- name = $4, color1 = $5, position = $6, language_main = $7,
- release_build = $8, release_build_app = $9, release_date = $10
- WHERE id = $11
- `, parentId, formId, iconId, name, color1, position, languageMain,
- releaseBuild, releaseBuildApp, releaseDate, id); err != nil {
-
- return err
+ icon_id_pwa1 = $4, icon_id_pwa2 = $5, js_function_id_on_login = $6,
+ pg_function_id_login_sync = $7, name = $8, name_pwa = $9, name_pwa_short = $10,
+ color1 = $11, position = $12, language_main = $13, release_build = $14,
+ release_build_app = $15, release_date = $16
+ WHERE id = $17
+ `, mod.ParentId, mod.FormId, mod.IconId, mod.IconIdPwa1, mod.IconIdPwa2,
+ mod.JsFunctionIdOnLogin, mod.PgFunctionIdLoginSync, mod.Name, mod.NamePwa,
+ mod.NamePwaShort, mod.Color1, mod.Position, mod.LanguageMain, mod.ReleaseBuild,
+ mod.ReleaseBuildApp, mod.ReleaseDate, mod.Id); err != nil {
+
+ return mod.Id, err
}
- if name != nameEx {
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`ALTER SCHEMA "%s" RENAME TO "%s"`,
- nameEx, name)); err != nil {
+ if mod.Name != nameEx {
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`ALTER SCHEMA "%s" RENAME TO "%s"`,
+ nameEx, mod.Name)); err != nil {
- return err
+ return mod.Id, err
}
- if err := pgFunction.RecreateAffectedBy_tx(tx, "module", id); err != nil {
- return fmt.Errorf("failed to recreate affected PG functions, %s", err)
+ if err := pgFunction.RecreateAffectedBy_tx(ctx, tx, "module", mod.Id); err != nil {
+ return mod.Id, fmt.Errorf("failed to recreate affected PG functions, %s", err)
}
}
} else {
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`CREATE SCHEMA "%s"`, name)); err != nil {
- return err
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`CREATE SCHEMA "%s"`, mod.Name)); err != nil {
+ return mod.Id, err
}
// insert module reference
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.module (
- id, parent_id, form_id, icon_id, name, color1, position,
- language_main, release_build, release_build_app, release_date
+ id, parent_id, form_id, icon_id, icon_id_pwa1, icon_id_pwa2,
+ js_function_id_on_login, pg_function_id_login_sync, name, name_pwa,
+ name_pwa_short, color1, position, language_main, release_build,
+ release_build_app, release_date
)
- VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11)
- `, id, parentId, formId, iconId, name, color1, position,
- languageMain, releaseBuild, releaseBuildApp, releaseDate); err != nil {
+ VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17)
+ `, mod.Id, mod.ParentId, mod.FormId, mod.IconId, mod.IconIdPwa1, mod.IconIdPwa2,
+ mod.JsFunctionIdOnLogin, mod.PgFunctionIdLoginSync, mod.Name, mod.NamePwa,
+ mod.NamePwaShort, mod.Color1, mod.Position, mod.LanguageMain,
+ mod.ReleaseBuild, mod.ReleaseBuildApp, mod.ReleaseDate); err != nil {
- return err
+ return mod.Id, err
}
if create {
- // insert default 'everyone' role for module
- // only relevant if module did not exist before
- // otherwise everyone role with ID (and possible assignments) already exists
+			// generate entities that must only be created if the module did not exist before;
+			// otherwise they are imported with their existing IDs (and foreign key references)
+
+ // generate default 'everyone' role for module
roleId, err := uuid.NewV4()
if err != nil {
- return err
+ return mod.Id, err
}
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.role (id, module_id, name, content, assignable)
VALUES ($1,$2,'everyone','everyone',false)
- `, roleId, id); err != nil {
- return err
+ `, roleId, mod.Id); err != nil {
+ return mod.Id, err
+ }
+
+ // generate first menu tab
+ menuTabId, err := uuid.NewV4()
+ if err != nil {
+ return mod.Id, err
+ }
+
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO app.menu_tab (id, module_id, position)
+ VALUES ($1,$2,0)
+ `, menuTabId, mod.Id); err != nil {
+ return mod.Id, err
}
}
- // insert module options for this instance
- if err := module_option.Set_tx(tx, id, false, create, position); err != nil {
- return err
+	// create module metadata record for this instance
+ if err := module_meta.Create_tx(ctx, tx, mod.Id, false, create, mod.Position); err != nil {
+ return mod.Id, err
}
}
// set dependencies to other modules
- dependsOnCurrent, err := getDependsOn_tx(tx, id)
+ dependsOnCurrent, err := getDependsOn_tx(ctx, tx, mod.Id)
if err != nil {
- return err
+ return mod.Id, err
}
for _, moduleIdOn := range dependsOnCurrent {
- if tools.UuidInSlice(moduleIdOn, dependsOn) {
+ if slices.Contains(mod.DependsOn, moduleIdOn) {
continue
}
// existing dependency has been removed
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.module_depends
WHERE module_id = $1
AND module_id_on = $2
- `, id, moduleIdOn); err != nil {
- return err
+ `, mod.Id, moduleIdOn); err != nil {
+ return mod.Id, err
}
}
- for _, moduleIdOn := range dependsOn {
+ for _, moduleIdOn := range mod.DependsOn {
- if tools.UuidInSlice(moduleIdOn, dependsOnCurrent) {
+ if slices.Contains(dependsOnCurrent, moduleIdOn) {
continue
}
// new dependency has been added
- if id == moduleIdOn {
- return errors.New("module dependency to itself is not allowed")
+ if mod.Id == moduleIdOn {
+ return mod.Id, errors.New("module dependency to itself is not allowed")
}
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.module_depends (module_id, module_id_on)
VALUES ($1,$2)
- `, id, moduleIdOn); err != nil {
- return err
+ `, mod.Id, moduleIdOn); err != nil {
+ return mod.Id, err
}
}
// set start forms
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.module_start_form
WHERE module_id = $1
- `, id); err != nil {
- return err
+ `, mod.Id); err != nil {
+ return mod.Id, err
}
- for i, sf := range startForms {
- if _, err := tx.Exec(db.Ctx, `
+ for i, sf := range mod.StartForms {
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.module_start_form (module_id, position, role_id, form_id)
VALUES ($1,$2,$3,$4)
- `, id, i, sf.RoleId, sf.FormId); err != nil {
- return err
+ `, mod.Id, i, sf.RoleId, sf.FormId); err != nil {
+ return mod.Id, err
}
}
// set languages
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.module_language
WHERE module_id = $1
- `, id); err != nil {
- return err
+ `, mod.Id); err != nil {
+ return mod.Id, err
}
- for _, code := range languages {
+ for _, code := range mod.Languages {
if len(code) != 5 {
- return errors.New("language code must have 5 characters")
+ return mod.Id, errors.New("language code must have 5 characters")
}
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.module_language (module_id, language_code)
VALUES ($1,$2)
- `, id, code); err != nil {
- return err
+ `, mod.Id, code); err != nil {
+ return mod.Id, err
}
}
// set help articles
- if err := article.Assign_tx(tx, "module", id, articleIdsHelp); err != nil {
- return err
+ if err := article.Assign_tx(ctx, tx, "module", mod.Id, mod.ArticleIdsHelp); err != nil {
+ return mod.Id, err
}
// set captions
// fix imports < 3.2: Migration from help captions to help articles
- captions, err = compatible.FixCaptions_tx(tx, "module", id, captions)
+ mod.Captions, err = compatible.FixCaptions_tx(ctx, tx, "module", mod.Id, mod.Captions)
if err != nil {
- return err
+ return mod.Id, err
}
- return caption.Set_tx(tx, id, captions)
+ return mod.Id, caption.Set_tx(ctx, tx, mod.Id, mod.Captions)
}
-func getStartForms(id uuid.UUID) ([]types.ModuleStartForm, error) {
+func getStartForms_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) ([]types.ModuleStartForm, error) {
startForms := make([]types.ModuleStartForm, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT role_id, form_id
FROM app.module_start_form
WHERE module_id = $1
@@ -366,15 +374,14 @@ func getStartForms(id uuid.UUID) ([]types.ModuleStartForm, error) {
return startForms, err
}
startForms = append(startForms, sf)
-
}
return startForms, nil
}
-func getDependsOn_tx(tx pgx.Tx, id uuid.UUID) ([]uuid.UUID, error) {
+func getDependsOn_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) ([]uuid.UUID, error) {
moduleIdsDependsOn := make([]uuid.UUID, 0)
- rows, err := tx.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT module_id_on
FROM app.module_depends
WHERE module_id = $1
@@ -390,7 +397,6 @@ func getDependsOn_tx(tx pgx.Tx, id uuid.UUID) ([]uuid.UUID, error) {
return moduleIdsDependsOn, err
}
moduleIdsDependsOn = append(moduleIdsDependsOn, moduleIdDependsOn)
-
}
return moduleIdsDependsOn, nil
}
diff --git a/schema/openForm/openForm.go b/schema/openForm/openForm.go
index 523dcf48..9a4c889e 100644
--- a/schema/openForm/openForm.go
+++ b/schema/openForm/openForm.go
@@ -1,49 +1,82 @@
package openForm
import (
+ "context"
"errors"
"fmt"
- "r3/db"
- "r3/tools"
+ "r3/schema/compatible"
"r3/types"
+ "slices"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
+ "github.com/jackc/pgx/v5/pgtype"
)
var entitiesAllowed = []string{"column", "collection_consumer", "field"}
-func Get(entity string, id uuid.UUID) (f types.OpenForm, err error) {
+func Get_tx(ctx context.Context, tx pgx.Tx, entity string, id uuid.UUID, formContext pgtype.Text) (f types.OpenForm, err error) {
- if !tools.StringInSlice(entity, entitiesAllowed) {
+ if !slices.Contains(entitiesAllowed, entity) {
return f, errors.New("invalid open form entity")
}
- err = db.Pool.QueryRow(db.Ctx, fmt.Sprintf(`
- SELECT form_id_open, attribute_id_apply, relation_index,
- pop_up, max_height, max_width
+ sqlArgs := make([]interface{}, 0)
+ sqlArgs = append(sqlArgs, id)
+
+ sqlWhere := "AND context IS NULL"
+ if formContext.Valid {
+ sqlArgs = append(sqlArgs, formContext.String)
+ sqlWhere = "AND context = $2"
+ }
+
+ err = tx.QueryRow(ctx, fmt.Sprintf(`
+ SELECT form_id_open, relation_index_open, attribute_id_apply,
+ relation_index_apply, pop_up_type, max_height, max_width
FROM app.open_form
WHERE %s_id = $1
- `, entity), id).Scan(&f.FormIdOpen, &f.AttributeIdApply,
- &f.RelationIndex, &f.PopUp, &f.MaxHeight, &f.MaxWidth)
+ %s
+ `, entity, sqlWhere), sqlArgs...).Scan(&f.FormIdOpen, &f.RelationIndexOpen,
+ &f.AttributeIdApply, &f.RelationIndexApply, &f.PopUpType, &f.MaxHeight,
+ &f.MaxWidth)
// open form is optional
if err == pgx.ErrNoRows {
return f, nil
}
+
+ // fix exports > 3.4: Set default value for legacy relation index
+ f = compatible.FixOpenFormRelationIndexApplyDefault(f)
+
return f, err
}
-func Set_tx(tx pgx.Tx, entity string, id uuid.UUID, f types.OpenForm) error {
+func Set_tx(ctx context.Context, tx pgx.Tx, entity string, id uuid.UUID, f types.OpenForm, context pgtype.Text) error {
- if !tools.StringInSlice(entity, entitiesAllowed) {
+ if !slices.Contains(entitiesAllowed, entity) {
return errors.New("invalid open form entity")
}
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ // fix imports < 3.4: Legacy pop-up option
+ f = compatible.FixOpenFormPopUpType(f)
+
+ // fix imports < 3.5: Relation index for applying record relationship value
+ f = compatible.FixOpenFormRelationIndexApply(f)
+
+ sqlArgs := make([]interface{}, 0)
+ sqlArgs = append(sqlArgs, id)
+
+ sqlWhere := "AND context IS NULL"
+ if context.Valid {
+ sqlArgs = append(sqlArgs, context.String)
+ sqlWhere = "AND context = $2"
+ }
+
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
DELETE FROM app.open_form
WHERE %s_id = $1
- `, entity), id); err != nil {
+ %s
+ `, entity, sqlWhere), sqlArgs...); err != nil {
return err
}
@@ -51,14 +84,14 @@ func Set_tx(tx pgx.Tx, entity string, id uuid.UUID, f types.OpenForm) error {
return nil
}
- _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ _, err := tx.Exec(ctx, fmt.Sprintf(`
INSERT INTO app.open_form (
- %s_id, form_id_open, attribute_id_apply,
- relation_index, pop_up, max_height, max_width
+ %s_id, context, form_id_open, relation_index_open, attribute_id_apply,
+ relation_index_apply, pop_up_type, max_height, max_width
)
- VALUES ($1,$2,$3,$4,$5,$6,$7)
- `, entity), id, f.FormIdOpen, f.AttributeIdApply,
- f.RelationIndex, f.PopUp, f.MaxHeight, f.MaxWidth)
+ VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9)
+ `, entity), id, context, f.FormIdOpen, f.RelationIndexOpen, f.AttributeIdApply,
+ f.RelationIndexApply, f.PopUpType, f.MaxHeight, f.MaxWidth)
return err
}
diff --git a/schema/pgFunction/pgFunction.go b/schema/pgFunction/pgFunction.go
index 8dca0a31..30a681bb 100644
--- a/schema/pgFunction/pgFunction.go
+++ b/schema/pgFunction/pgFunction.go
@@ -1,35 +1,36 @@
package pgFunction
import (
+ "context"
"errors"
"fmt"
- "r3/db"
"r3/db/check"
"r3/schema"
"r3/schema/caption"
- "r3/tools"
+ "r3/schema/compatible"
"r3/types"
"regexp"
+ "slices"
"strings"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
)
-func Del_tx(tx pgx.Tx, id uuid.UUID) error {
+func Del_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
- nameMod, nameEx, _, _, err := schema.GetPgFunctionDetailsById_tx(tx, id)
+ nameMod, nameEx, _, _, err := schema.GetPgFunctionDetailsById_tx(ctx, tx, id)
if err != nil {
return err
}
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
DROP FUNCTION "%s"."%s"
`, nameMod, nameEx)); err != nil {
return err
}
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.pg_function
WHERE id = $1
`, id); err != nil {
@@ -38,14 +39,14 @@ func Del_tx(tx pgx.Tx, id uuid.UUID) error {
return nil
}
-func Get(moduleId uuid.UUID) ([]types.PgFunction, error) {
+func Get_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID) ([]types.PgFunction, error) {
var err error
functions := make([]types.PgFunction, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT id, name, code_args, code_function, code_returns,
- is_frontend_exec, is_trigger
+ is_frontend_exec, is_login_sync, is_trigger, volatility
FROM app.pg_function
WHERE module_id = $1
ORDER BY name ASC
@@ -53,26 +54,26 @@ func Get(moduleId uuid.UUID) ([]types.PgFunction, error) {
if err != nil {
return functions, err
}
+ defer rows.Close()
for rows.Next() {
var f types.PgFunction
- if err := rows.Scan(&f.Id, &f.Name, &f.CodeArgs, &f.CodeFunction,
- &f.CodeReturns, &f.IsFrontendExec, &f.IsTrigger); err != nil {
+ if err := rows.Scan(&f.Id, &f.Name, &f.CodeArgs, &f.CodeFunction, &f.CodeReturns,
+ &f.IsFrontendExec, &f.IsLoginSync, &f.IsTrigger, &f.Volatility); err != nil {
return functions, err
}
functions = append(functions, f)
}
- rows.Close()
for i, f := range functions {
f.ModuleId = moduleId
- f.Schedules, err = getSchedules(f.Id)
+ f.Schedules, err = getSchedules_tx(ctx, tx, f.Id)
if err != nil {
return functions, err
}
- f.Captions, err = caption.Get("pg_function", f.Id, []string{"pgFunctionTitle", "pgFunctionDesc"})
+ f.Captions, err = caption.Get_tx(ctx, tx, "pg_function", f.Id, []string{"pgFunctionTitle", "pgFunctionDesc"})
if err != nil {
return functions, err
}
@@ -80,27 +81,10 @@ func Get(moduleId uuid.UUID) ([]types.PgFunction, error) {
}
return functions, nil
}
-func getSchedules(pgFunctionId uuid.UUID) ([]types.PgFunctionSchedule, error) {
+func getSchedules_tx(ctx context.Context, tx pgx.Tx, pgFunctionId uuid.UUID) ([]types.PgFunctionSchedule, error) {
schedules := make([]types.PgFunctionSchedule, 0)
- tx, err := db.Pool.Begin(db.Ctx)
- if err != nil {
- return schedules, err
- }
- defer tx.Rollback(db.Ctx)
-
- schedules, err = getSchedules_tx(tx, pgFunctionId)
- if err != nil {
- return schedules, err
- }
- tx.Commit(db.Ctx)
-
- return schedules, nil
-}
-func getSchedules_tx(tx pgx.Tx, pgFunctionId uuid.UUID) ([]types.PgFunctionSchedule, error) {
- schedules := make([]types.PgFunctionSchedule, 0)
-
- rows, err := tx.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT id, at_second, at_minute, at_hour, at_day, interval_type, interval_value
FROM app.pg_function_schedule
WHERE pg_function_id = $1
@@ -124,93 +108,92 @@ func getSchedules_tx(tx pgx.Tx, pgFunctionId uuid.UUID) ([]types.PgFunctionSched
return schedules, nil
}
-func Set_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, name string,
- codeArgs string, codeFunction string, codeReturns string,
- isFrontendExec bool, isTrigger bool, schedules []types.PgFunctionSchedule,
- captions types.CaptionMap) error {
+func Set_tx(ctx context.Context, tx pgx.Tx, fnc types.PgFunction) error {
- if err := check.DbIdentifier(name); err != nil {
+ if err := check.DbIdentifier(fnc.Name); err != nil {
return err
}
- nameMod, err := schema.GetModuleNameById_tx(tx, moduleId)
+ nameMod, err := schema.GetModuleNameById_tx(ctx, tx, fnc.ModuleId)
if err != nil {
return err
}
- // fix imports < 2.6: New "isTrigger" state
- if strings.ToUpper(codeReturns) == "TRIGGER" && !isTrigger {
- isTrigger = true
- }
+ // fix imports < 3.9: Missing volatility setting
+ fnc = compatible.FixMissingVolatility(fnc)
- // enforce trigger function
- if isTrigger {
- codeReturns = "TRIGGER"
- isFrontendExec = false
+ // enforce valid function configuration
+ if fnc.IsLoginSync {
+ fnc.CodeReturns = "INTEGER"
+ fnc.IsTrigger = false
+ fnc.IsFrontendExec = false
+ }
+ if fnc.IsTrigger {
+ fnc.CodeReturns = "TRIGGER"
+ fnc.IsFrontendExec = false
+ fnc.IsLoginSync = false
}
- if codeFunction == "" || codeReturns == "" {
+ if fnc.CodeFunction == "" || fnc.CodeReturns == "" {
return errors.New("empty function body or missing returns")
}
- known, err := schema.CheckCreateId_tx(tx, &id, "pg_function", "id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &fnc.Id, "pg_function", "id")
if err != nil {
return err
}
if known {
- _, nameEx, _, isTriggerEx, err := schema.GetPgFunctionDetailsById_tx(tx, id)
+ _, nameEx, _, isTriggerEx, err := schema.GetPgFunctionDetailsById_tx(ctx, tx, fnc.Id)
if err != nil {
return err
}
- if isTrigger != isTriggerEx {
+ if fnc.IsTrigger != isTriggerEx {
return errors.New("cannot convert between trigger and non-trigger function")
}
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.pg_function
SET name = $1, code_args = $2, code_function = $3,
- code_returns = $4, is_frontend_exec = $5
- WHERE id = $6
- `, name, codeArgs, codeFunction, codeReturns, isFrontendExec, id); err != nil {
+ code_returns = $4, is_frontend_exec = $5, volatility = $6
+ WHERE id = $7
+ `, fnc.Name, fnc.CodeArgs, fnc.CodeFunction, fnc.CodeReturns, fnc.IsFrontendExec, fnc.Volatility, fnc.Id); err != nil {
return err
}
- if name != nameEx {
- if err := RecreateAffectedBy_tx(tx, "pg_function", id); err != nil {
+ if fnc.Name != nameEx {
+ if err := RecreateAffectedBy_tx(ctx, tx, "pg_function", fnc.Id); err != nil {
return fmt.Errorf("failed to recreate affected PG functions, %s", err)
}
}
- if !isTrigger {
- // drop and recreate non-trigger function because function arguments can change
+ if !fnc.IsTrigger {
+ // drop non-trigger function because function arguments can change
// two functions with the same name but different interfaces can exist (overloading)
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
- DROP FUNCTION "%s"."%s"
- `, nameMod, nameEx)); err != nil {
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`DROP FUNCTION "%s"."%s"`, nameMod, nameEx)); err != nil {
return err
}
} else {
- if name != nameEx {
+ if fnc.Name != nameEx {
// rename instead of drop function if trigger
// we cannot drop trigger functions without recreating triggers
// renaming changes the function name in the trigger and allows us to replace it
// as triggers do not take arguments, overloading is not a problem
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
ALTER FUNCTION "%s"."%s" RENAME TO "%s"
- `, nameMod, nameEx, name)); err != nil {
+ `, nameMod, nameEx, fnc.Name)); err != nil {
return err
}
}
}
} else {
- if _, err := tx.Exec(db.Ctx, `
- INSERT INTO app.pg_function (id, module_id, name, code_args,
- code_function, code_returns, is_frontend_exec, is_trigger)
- VALUES ($1,$2,$3,$4,$5,$6,$7,$8)
- `, id, moduleId, name, codeArgs, codeFunction,
- codeReturns, isFrontendExec, isTrigger); err != nil {
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO app.pg_function (id, module_id, name, code_args, code_function,
+ code_returns, is_frontend_exec, is_login_sync, is_trigger, volatility)
+ VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10)
+ `, fnc.Id, fnc.ModuleId, fnc.Name, fnc.CodeArgs, fnc.CodeFunction,
+ fnc.CodeReturns, fnc.IsFrontendExec, fnc.IsLoginSync, fnc.IsTrigger, fnc.Volatility); err != nil {
return err
}
@@ -218,15 +201,18 @@ func Set_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, name string,
// set schedules
scheduleIds := make([]uuid.UUID, 0)
- for _, s := range schedules {
+ for _, s := range fnc.Schedules {
- known, err = schema.CheckCreateId_tx(tx, &s.Id, "pg_function_schedule", "id")
+ known, err = schema.CheckCreateId_tx(ctx, tx, &s.Id, "pg_function_schedule", "id")
if err != nil {
return err
}
+ // overwrite invalid inputs
+ s.AtDay = schema.GetValidAtDay(s.IntervalType, s.AtDay)
+
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.pg_function_schedule
SET at_second = $1, at_minute = $2, at_hour = $3, at_day = $4,
interval_type = $5, interval_value = $6
@@ -237,18 +223,18 @@ func Set_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, name string,
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.pg_function_schedule (
id, pg_function_id, at_second, at_minute, at_hour, at_day,
interval_type, interval_value
)
VALUES ($1,$2,$3,$4,$5,$6,$7,$8)
- `, s.Id, id, s.AtSecond, s.AtMinute, s.AtHour, s.AtDay,
+ `, s.Id, fnc.Id, s.AtSecond, s.AtMinute, s.AtHour, s.AtDay,
s.IntervalType, s.IntervalValue); err != nil {
return err
}
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO instance.schedule (
pg_function_schedule_id,date_attempt,date_success
)
@@ -260,52 +246,52 @@ func Set_tx(tx pgx.Tx, moduleId uuid.UUID, id uuid.UUID, name string,
scheduleIds = append(scheduleIds, s.Id)
}
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.pg_function_schedule
WHERE pg_function_id = $1
AND id <> ALL($2)
- `, id, scheduleIds); err != nil {
+ `, fnc.Id, scheduleIds); err != nil {
return err
}
// set captions
- if err := caption.Set_tx(tx, id, captions); err != nil {
+ if err := caption.Set_tx(ctx, tx, fnc.Id, fnc.Captions); err != nil {
return err
}
// apply function to database
- codeFunction, err = processDependentIds_tx(tx, id, codeFunction)
+ fnc.CodeFunction, err = processDependentIds_tx(ctx, tx, fnc.Id, fnc.CodeFunction)
if err != nil {
return fmt.Errorf("failed to process entity IDs, %s", err)
}
- _, err = tx.Exec(db.Ctx, fmt.Sprintf(`
+ _, err = tx.Exec(ctx, fmt.Sprintf(`
CREATE OR REPLACE FUNCTION "%s"."%s"(%s)
- RETURNS %s LANGUAGE plpgsql AS %s
- `, nameMod, name, codeArgs, codeReturns, codeFunction))
+ RETURNS %s LANGUAGE plpgsql %s AS %s
+ `, nameMod, fnc.Name, fnc.CodeArgs, fnc.CodeReturns, fnc.Volatility, fnc.CodeFunction))
return err
}
// recreate all PG functions, affected by a changed entity for which a dependency exists
// relevant entities: modules, relations, attributes, pg functions
-func RecreateAffectedBy_tx(tx pgx.Tx, entity string, entityId uuid.UUID) error {
+func RecreateAffectedBy_tx(ctx context.Context, tx pgx.Tx, entity string, entityId uuid.UUID) error {
pgFunctionIds := make([]uuid.UUID, 0)
- if !tools.StringInSlice(entity, []string{"module", "relation", "attribute", "pg_function"}) {
+ if !slices.Contains([]string{"module", "relation", "attribute", "pg_function"}, entity) {
return errors.New("unknown dependent on entity for pg function")
}
// stay in transaction to get altered states
- rows, err := tx.Query(db.Ctx, fmt.Sprintf(`
+ rows, err := tx.Query(ctx, fmt.Sprintf(`
SELECT pg_function_id
FROM app.pg_function_depends
WHERE %s_id_on = $1
`, entity), entityId)
if err != nil {
- return fmt.Errorf("failed to get PG function ID for %s ID %s: %w",
- entity, entityId, err)
+ return fmt.Errorf("failed to get PG function ID for %s ID %s: %w", entity, entityId, err)
}
+ defer rows.Close()
for rows.Next() {
var id uuid.UUID
@@ -314,35 +300,30 @@ func RecreateAffectedBy_tx(tx pgx.Tx, entity string, entityId uuid.UUID) error {
}
pgFunctionIds = append(pgFunctionIds, id)
}
- rows.Close()
for _, id := range pgFunctionIds {
var f types.PgFunction
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT id, module_id, name, code_args, code_function, code_returns,
- is_frontend_exec, is_trigger
+ is_frontend_exec, is_login_sync, is_trigger, volatility
FROM app.pg_function
WHERE id = $1
- `, id).Scan(&f.Id, &f.ModuleId, &f.Name, &f.CodeArgs, &f.CodeFunction,
- &f.CodeReturns, &f.IsFrontendExec, &f.IsTrigger); err != nil {
+ `, id).Scan(&f.Id, &f.ModuleId, &f.Name, &f.CodeArgs, &f.CodeFunction, &f.CodeReturns,
+ &f.IsFrontendExec, &f.IsLoginSync, &f.IsTrigger, &f.Volatility); err != nil {
return err
}
- f.Schedules, err = getSchedules_tx(tx, f.Id)
+ f.Schedules, err = getSchedules_tx(ctx, tx, f.Id)
if err != nil {
return err
}
- f.Captions, err = caption.Get("pg_function", f.Id, []string{"pgFunctionTitle", "pgFunctionDesc"})
+ f.Captions, err = caption.Get_tx(ctx, tx, "pg_function", f.Id, []string{"pgFunctionTitle", "pgFunctionDesc"})
if err != nil {
return err
}
-
- if err := Set_tx(tx, f.ModuleId, f.Id, f.Name, f.CodeArgs,
- f.CodeFunction, f.CodeReturns, f.IsFrontendExec, f.IsTrigger,
- f.Schedules, f.Captions); err != nil {
-
+ if err := Set_tx(ctx, tx, f); err != nil {
return err
}
}
@@ -353,10 +334,10 @@ func RecreateAffectedBy_tx(tx pgx.Tx, entity string, entityId uuid.UUID) error {
// as entity names can change any time, keeping IDs is safer
// to create a PG function, we need to replace these IDs with proper names
// we also store IDs of all entities so that we can create foreign keys and ensure consistency
-func processDependentIds_tx(tx pgx.Tx, id uuid.UUID, body string) (string, error) {
+func processDependentIds_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID, body string) (string, error) {
// rebuilt dependency records for this function
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.pg_function_depends
WHERE pg_function_id = $1
`, id); err != nil {
@@ -383,12 +364,12 @@ func processDependentIds_tx(tx pgx.Tx, id uuid.UUID, body string) (string, error
}
idMap[modId] = true
- modName, err := schema.GetModuleNameById_tx(tx, modId)
+ modName, err := schema.GetModuleNameById_tx(ctx, tx, modId)
if err != nil {
return "", err
}
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.pg_function_depends (pg_function_id, module_id_on)
VALUES ($1,$2)
`, id, modId); err != nil {
@@ -417,12 +398,12 @@ func processDependentIds_tx(tx pgx.Tx, id uuid.UUID, body string) (string, error
}
idMap[fncId] = true
- fncName, err := schema.GetPgFunctionNameById_tx(tx, fncId)
+ fncName, err := schema.GetPgFunctionNameById_tx(ctx, tx, fncId)
if err != nil {
return "", err
}
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.pg_function_depends (pg_function_id, pg_function_id_on)
VALUES ($1,$2)
`, id, fncId); err != nil {
@@ -451,12 +432,12 @@ func processDependentIds_tx(tx pgx.Tx, id uuid.UUID, body string) (string, error
}
idMap[relId] = true
- _, relName, err := schema.GetRelationNamesById_tx(tx, relId)
+ _, relName, err := schema.GetRelationNamesById_tx(ctx, tx, relId)
if err != nil {
return "", err
}
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.pg_function_depends (pg_function_id, relation_id_on)
VALUES ($1,$2)
`, id, relId); err != nil {
@@ -485,12 +466,12 @@ func processDependentIds_tx(tx pgx.Tx, id uuid.UUID, body string) (string, error
}
idMap[atrId] = true
- atrName, err := schema.GetAttributeNameById_tx(tx, atrId)
+ atrName, err := schema.GetAttributeNameById_tx(ctx, tx, atrId)
if err != nil {
return "", err
}
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.pg_function_depends (pg_function_id, attribute_id_on)
VALUES ($1,$2)
`, id, atrId); err != nil {
diff --git a/schema/pgIndex/pgIndex.go b/schema/pgIndex/pgIndex.go
index 99d779d6..7646e351 100644
--- a/schema/pgIndex/pgIndex.go
+++ b/schema/pgIndex/pgIndex.go
@@ -1,10 +1,11 @@
package pgIndex
import (
+ "context"
"errors"
"fmt"
- "r3/db"
"r3/schema"
+ "r3/schema/compatible"
"r3/types"
"strings"
@@ -12,12 +13,12 @@ import (
"github.com/jackc/pgx/v5"
)
-func DelAutoFkiForAttribute_tx(tx pgx.Tx, attributeId uuid.UUID) error {
+func DelAutoFkiForAttribute_tx(ctx context.Context, tx pgx.Tx, attributeId uuid.UUID) error {
// get ID of automatically created FK index for relationship attribute
var pgIndexId uuid.UUID
- err := tx.QueryRow(db.Ctx, `
+ err := tx.QueryRow(ctx, `
SELECT i.id
FROM app.pg_index AS i
INNER JOIN app.pg_index_attribute AS a ON a.pg_index_id = i.id
@@ -36,56 +37,58 @@ func DelAutoFkiForAttribute_tx(tx pgx.Tx, attributeId uuid.UUID) error {
}
// delete auto FK index for attribute
- return Del_tx(tx, pgIndexId)
+ return Del_tx(ctx, tx, pgIndexId)
}
-func Del_tx(tx pgx.Tx, id uuid.UUID) error {
+func Del_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
- moduleName, _, err := schema.GetPgIndexNamesById_tx(tx, id)
+ moduleName, _, err := schema.GetPgIndexNamesById_tx(ctx, tx, id)
if err != nil {
return err
}
// can also be deleted by cascaded entity (relation/attribute)
// drop if it still exists
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
DROP INDEX IF EXISTS "%s"."%s"
`, moduleName, schema.GetPgIndexName(id))); err != nil {
return err
}
- _, err = tx.Exec(db.Ctx, `DELETE FROM app.pg_index WHERE id = $1`, id)
+ _, err = tx.Exec(ctx, `DELETE FROM app.pg_index WHERE id = $1`, id)
return err
}
-func Get(relationId uuid.UUID) ([]types.PgIndex, error) {
+func Get_tx(ctx context.Context, tx pgx.Tx, relationId uuid.UUID) ([]types.PgIndex, error) {
pgIndexes := make([]types.PgIndex, 0)
- rows, err := db.Pool.Query(db.Ctx, `
- SELECT id, no_duplicates, auto_fki, primary_key
+ rows, err := tx.Query(ctx, `
+ SELECT id, attribute_id_dict, method, no_duplicates, auto_fki, primary_key
FROM app.pg_index
WHERE relation_id = $1
- -- an order is required for hash comparisson (module changes)
- ORDER BY auto_fki DESC, id ASC
+ -- an order is required for hash comparison (module changes)
+ ORDER BY primary_key DESC, auto_fki DESC, id ASC
`, relationId)
if err != nil {
return pgIndexes, err
}
+ defer rows.Close()
for rows.Next() {
var pgi types.PgIndex
- if err := rows.Scan(&pgi.Id, &pgi.NoDuplicates, &pgi.AutoFki, &pgi.PrimaryKey); err != nil {
+ if err := rows.Scan(&pgi.Id, &pgi.AttributeIdDict, &pgi.Method,
+ &pgi.NoDuplicates, &pgi.AutoFki, &pgi.PrimaryKey); err != nil {
+
return pgIndexes, err
}
pgi.RelationId = relationId
pgIndexes = append(pgIndexes, pgi)
}
- rows.Close()
// get index attributes
for i, pgi := range pgIndexes {
- pgi.Attributes, err = GetAttributes(pgi.Id)
+ pgi.Attributes, err = getAttributes_tx(ctx, tx, pgi.Id)
if err != nil {
return pgIndexes, err
}
@@ -94,10 +97,10 @@ func Get(relationId uuid.UUID) ([]types.PgIndex, error) {
return pgIndexes, nil
}
-func GetAttributes(pgIndexId uuid.UUID) ([]types.PgIndexAttribute, error) {
+func getAttributes_tx(ctx context.Context, tx pgx.Tx, pgIndexId uuid.UUID) ([]types.PgIndexAttribute, error) {
attributes := make([]types.PgIndexAttribute, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT attribute_id, order_asc
FROM app.pg_index_attribute
WHERE pg_index_id = $1
@@ -120,15 +123,16 @@ func GetAttributes(pgIndexId uuid.UUID) ([]types.PgIndexAttribute, error) {
return attributes, nil
}
-func SetAutoFkiForAttribute_tx(tx pgx.Tx, relationId uuid.UUID, attributeId uuid.UUID, noDuplicates bool) error {
- return Set_tx(tx, types.PgIndex{
+func SetAutoFkiForAttribute_tx(ctx context.Context, tx pgx.Tx, relationId uuid.UUID, attributeId uuid.UUID, noDuplicates bool) error {
+ return Set_tx(ctx, tx, types.PgIndex{
Id: uuid.Nil,
RelationId: relationId,
AutoFki: true,
+ Method: "BTREE",
NoDuplicates: noDuplicates,
PrimaryKey: false,
Attributes: []types.PgIndexAttribute{
- types.PgIndexAttribute{
+ {
AttributeId: attributeId,
Position: 0,
OrderAsc: true,
@@ -136,15 +140,16 @@ func SetAutoFkiForAttribute_tx(tx pgx.Tx, relationId uuid.UUID, attributeId uuid
},
})
}
-func SetPrimaryKeyForAttribute_tx(tx pgx.Tx, relationId uuid.UUID, attributeId uuid.UUID) error {
- return Set_tx(tx, types.PgIndex{
+func SetPrimaryKeyForAttribute_tx(ctx context.Context, tx pgx.Tx, relationId uuid.UUID, attributeId uuid.UUID) error {
+ return Set_tx(ctx, tx, types.PgIndex{
Id: uuid.Nil,
RelationId: relationId,
AutoFki: false,
+ Method: "BTREE",
NoDuplicates: true,
PrimaryKey: true,
Attributes: []types.PgIndexAttribute{
- types.PgIndexAttribute{
+ {
AttributeId: attributeId,
Position: 0,
OrderAsc: true,
@@ -152,13 +157,14 @@ func SetPrimaryKeyForAttribute_tx(tx pgx.Tx, relationId uuid.UUID, attributeId u
},
})
}
-func Set_tx(tx pgx.Tx, pgi types.PgIndex) error {
+func Set_tx(ctx context.Context, tx pgx.Tx, pgi types.PgIndex) error {
if len(pgi.Attributes) == 0 {
return errors.New("cannot create index without attributes")
}
- known, err := schema.CheckCreateId_tx(tx, &pgi.Id, "pg_index", "id")
+ var err error
+ known, err := schema.CheckCreateId_tx(ctx, tx, &pgi.Id, "pg_index", "id")
if err != nil {
return err
}
@@ -168,32 +174,38 @@ func Set_tx(tx pgx.Tx, pgi types.PgIndex) error {
return nil
}
- // insert pg index reference
- if _, err := tx.Exec(db.Ctx, `
- INSERT INTO app.pg_index (
- id, relation_id, no_duplicates, auto_fki, primary_key)
- VALUES ($1,$2,$3,$4,$5)
- `, pgi.Id, pgi.RelationId, pgi.NoDuplicates, pgi.AutoFki, pgi.PrimaryKey); err != nil {
- return err
- }
+ pgi.Method = compatible.FixPgIndexMethod(pgi.Method)
- // work out PG index columns
- indexCols := make([]string, 0)
- for position, atr := range pgi.Attributes {
+ isGin := pgi.Method == "GIN"
+ isBtree := pgi.Method == "BTREE"
- name, err := schema.GetAttributeNameById_tx(tx, atr.AttributeId)
- if err != nil {
- return err
- }
+ if !isGin && !isBtree {
+ return fmt.Errorf("unsupported index type '%s'", pgi.Method)
+ }
- order := "ASC"
- if !atr.OrderAsc {
- order = "DESC"
- }
- indexCols = append(indexCols, fmt.Sprintf(`"%s" %s`, name, order))
+ if isGin && len(pgi.Attributes) != 1 {
+ // we currently use GIN exclusively with to_tsvector on a single column
+ // reason: doing any regular lookup (such as quick filters) checks attributes individually
+ // the same with complex filters where each line is a single attribute
+ return fmt.Errorf("text index must have a single attribute")
+ }
- // insert index attribute references
- if _, err := tx.Exec(db.Ctx, `
+ if isGin {
+ // no unique constraints on GIN
+ pgi.NoDuplicates = false
+ }
+
+ // insert pg index references
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO app.pg_index (id, relation_id, attribute_id_dict,
+ method, no_duplicates, auto_fki, primary_key)
+ VALUES ($1,$2,$3,$4,$5,$6,$7)
+ `, pgi.Id, pgi.RelationId, pgi.AttributeIdDict, pgi.Method,
+ pgi.NoDuplicates, pgi.AutoFki, pgi.PrimaryKey); err != nil {
+ return err
+ }
+ for position, atr := range pgi.Attributes {
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.pg_index_attribute (
pg_index_id, attribute_id, position, order_asc)
VALUES ($1,$2,$3,$4)
@@ -208,20 +220,59 @@ func Set_tx(tx pgx.Tx, pgi types.PgIndex) error {
}
// create index in module
- moduleName, relationName, err := schema.GetRelationNamesById_tx(tx, pgi.RelationId)
+ indexDef := ""
+ if isBtree {
+ indexCols := make([]string, 0)
+ for _, atr := range pgi.Attributes {
+ name, err := schema.GetAttributeNameById_tx(ctx, tx, atr.AttributeId)
+ if err != nil {
+ return err
+ }
+ order := "ASC"
+ if !atr.OrderAsc {
+ order = "DESC"
+ }
+ indexCols = append(indexCols, fmt.Sprintf(`"%s" %s`, name, order))
+ }
+ indexDef = fmt.Sprintf("BTREE (%s)", strings.Join(indexCols, ","))
+ }
+
+ if isGin {
+ nameDict := ""
+ if pgi.AttributeIdDict.Valid {
+ nameDict, err = schema.GetAttributeNameById_tx(ctx, tx, pgi.AttributeIdDict.Bytes)
+ if err != nil {
+ return err
+ }
+ }
+
+ name, err := schema.GetAttributeNameById_tx(ctx, tx, pgi.Attributes[0].AttributeId)
+ if err != nil {
+ return err
+ }
+
+ if nameDict == "" {
+ indexDef = fmt.Sprintf("GIN (TO_TSVECTOR('simple'::REGCONFIG,%s))", name)
+ } else {
+ indexDef = fmt.Sprintf("GIN (TO_TSVECTOR(CASE WHEN %s IS NULL THEN 'simple'::REGCONFIG ELSE %s END,%s))",
+ nameDict, nameDict, name)
+ }
+
+ }
+
+ modName, relName, err := schema.GetRelationNamesById_tx(ctx, tx, pgi.RelationId)
if err != nil {
return err
}
- options := "INDEX"
+ indexType := "INDEX"
if pgi.NoDuplicates {
- options = "UNIQUE INDEX"
+ indexType = "UNIQUE INDEX"
}
- _, err = tx.Exec(db.Ctx, fmt.Sprintf(`
- CREATE %s "%s" ON "%s"."%s" (%s)
- `, options, schema.GetPgIndexName(pgi.Id), moduleName, relationName,
- strings.Join(indexCols, ",")))
+ _, err = tx.Exec(ctx, fmt.Sprintf(`
+ CREATE %s "%s" ON "%s"."%s" USING %s
+ `, indexType, schema.GetPgIndexName(pgi.Id), modName, relName, indexDef))
return err
}
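To make the reworked index handling easier to follow, here is a rough sketch of the DDL shape that Set_tx now emits for the two supported methods; all identifiers below are placeholders, not actual REI3 names:

```go
package example

import (
	"fmt"
	"strings"
)

// buildIndexDDL mirrors the statement shape assembled by Set_tx above:
// BTREE indexes list their columns with sort order, GIN indexes wrap a single
// text column in TO_TSVECTOR for full text search. Identifiers are placeholders.
func buildIndexDDL(unique bool, method string, cols []string) string {
	indexType := "INDEX"
	if unique {
		indexType = "UNIQUE INDEX"
	}
	def := ""
	switch method {
	case "BTREE":
		def = fmt.Sprintf("BTREE (%s)", strings.Join(cols, ","))
	case "GIN":
		// GIN is restricted to one column, as explained in the comments above
		def = fmt.Sprintf("GIN (TO_TSVECTOR('simple'::REGCONFIG,%s))", cols[0])
	}
	return fmt.Sprintf(`CREATE %s "idx_example" ON "my_module"."my_relation" USING %s`, indexType, def)
}
```

For example, buildIndexDDL(false, "GIN", []string{"description"}) produces a statement of the form CREATE INDEX "idx_example" ON "my_module"."my_relation" USING GIN (TO_TSVECTOR('simple'::REGCONFIG,description)).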
diff --git a/schema/pgTrigger/pgTrigger.go b/schema/pgTrigger/pgTrigger.go
index 77e00f64..19adcd24 100644
--- a/schema/pgTrigger/pgTrigger.go
+++ b/schema/pgTrigger/pgTrigger.go
@@ -1,32 +1,32 @@
package pgTrigger
import (
+ "context"
"errors"
"fmt"
- "r3/db"
"r3/schema"
- "r3/tools"
"r3/types"
+ "slices"
"strings"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
)
-func Del_tx(tx pgx.Tx, id uuid.UUID) error {
+func Del_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
- nameMod, nameRel, err := schema.GetPgTriggerNamesById_tx(tx, id)
+ nameMod, nameRel, err := schema.GetPgTriggerNamesById_tx(ctx, tx, id)
if err != nil {
return err
}
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
DROP TRIGGER "%s" ON "%s"."%s"
`, getName(id), nameMod, nameRel)); err != nil {
return err
}
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.pg_trigger
WHERE id = $1
`, id); err != nil {
@@ -35,18 +35,17 @@ func Del_tx(tx pgx.Tx, id uuid.UUID) error {
return nil
}
-func Get(relationId uuid.UUID) ([]types.PgTrigger, error) {
-
+func Get_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID) ([]types.PgTrigger, error) {
triggers := make([]types.PgTrigger, 0)
- rows, err := db.Pool.Query(db.Ctx, `
- SELECT id, pg_function_id, on_insert, on_update, on_delete,
+ rows, err := tx.Query(ctx, `
+ SELECT id, relation_id, pg_function_id, on_insert, on_update, on_delete,
is_constraint, is_deferrable, is_deferred, per_row, fires,
code_condition
FROM app.pg_trigger
- WHERE relation_id = $1
- ORDER BY id ASC -- an order is required for hash comparisson (module changes)
- `, relationId)
+ WHERE module_id = $1
+ ORDER BY id ASC -- an order is required for hash comparison (module changes)
+ `, moduleId)
if err != nil {
return triggers, err
}
@@ -55,76 +54,75 @@ func Get(relationId uuid.UUID) ([]types.PgTrigger, error) {
for rows.Next() {
var t types.PgTrigger
- if err := rows.Scan(&t.Id, &t.PgFunctionId, &t.OnInsert, &t.OnUpdate,
- &t.OnDelete, &t.IsConstraint, &t.IsDeferrable, &t.IsDeferred,
- &t.PerRow, &t.Fires, &t.CodeCondition); err != nil {
+ if err := rows.Scan(&t.Id, &t.RelationId, &t.PgFunctionId, &t.OnInsert,
+ &t.OnUpdate, &t.OnDelete, &t.IsConstraint, &t.IsDeferrable,
+ &t.IsDeferred, &t.PerRow, &t.Fires, &t.CodeCondition); err != nil {
return triggers, err
}
- t.RelationId = relationId
+ t.ModuleId = moduleId
triggers = append(triggers, t)
}
return triggers, nil
}
-func Set_tx(tx pgx.Tx, pgFunctionId uuid.UUID, id uuid.UUID,
- relationId uuid.UUID, onInsert bool, onUpdate bool, onDelete bool,
- isConstraint bool, isDeferrable bool, isDeferred bool, perRow bool,
- fires string, codeCondition string) error {
+func Set_tx(ctx context.Context, tx pgx.Tx, trg types.PgTrigger) error {
- nameMod, nameRel, err := schema.GetRelationNamesById_tx(tx, relationId)
+ nameMod, nameRel, err := schema.GetRelationNamesById_tx(ctx, tx, trg.RelationId)
if err != nil {
return err
}
- known, err := schema.CheckCreateId_tx(tx, &id, "pg_trigger", "id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &trg.Id, "pg_trigger", "id")
if err != nil {
return err
}
// overwrite invalid options
- if !tools.StringInSlice(fires, []string{"BEFORE", "AFTER"}) {
+ if !slices.Contains([]string{"BEFORE", "AFTER"}, trg.Fires) {
return errors.New("invalid trigger start")
}
- if !perRow || fires != "AFTER" { // constraint trigger must be AFTER EACH ROW
- isConstraint = false
- isDeferrable = false
- isDeferred = false
- } else if !isConstraint { // deferrable only available for constraint triggers
- isDeferrable = false
- isDeferred = false
- } else if !isDeferrable { // cannot defer, non-deferrable trigger<
- isDeferred = false
+ if !trg.PerRow || trg.Fires != "AFTER" { // constraint trigger must be AFTER EACH ROW
+ trg.IsConstraint = false
+ trg.IsDeferrable = false
+ trg.IsDeferred = false
+ } else if !trg.IsConstraint { // deferrable only available for constraint triggers
+ trg.IsDeferrable = false
+ trg.IsDeferred = false
+ } else if !trg.IsDeferrable { // cannot defer a non-deferrable trigger
+ trg.IsDeferred = false
}
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.pg_trigger
SET pg_function_id = $1, on_insert = $2, on_update = $3,
on_delete = $4, is_constraint = $5, is_deferrable = $6,
is_deferred = $7, per_row = $8, fires = $9, code_condition = $10
WHERE id = $11
- `, pgFunctionId, onInsert, onUpdate, onDelete, isConstraint, isDeferrable,
- isDeferred, perRow, fires, codeCondition, id); err != nil {
+ `, trg.PgFunctionId, trg.OnInsert, trg.OnUpdate, trg.OnDelete,
+ trg.IsConstraint, trg.IsDeferrable, trg.IsDeferred, trg.PerRow,
+ trg.Fires, trg.CodeCondition, trg.Id); err != nil {
return err
}
// remove existing trigger
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
DROP TRIGGER "%s" ON "%s"."%s"
- `, getName(id), nameMod, nameRel)); err != nil {
+ `, getName(trg.Id), nameMod, nameRel)); err != nil {
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
- INSERT INTO app.pg_trigger (id, pg_function_id, relation_id,
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO app.pg_trigger (id, module_id, pg_function_id, relation_id,
on_insert, on_update, on_delete, is_constraint, is_deferrable,
is_deferred, per_row, fires, code_condition)
- VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12)
- `, id, pgFunctionId, relationId, onInsert, onUpdate, onDelete, isConstraint,
- isDeferrable, isDeferred, perRow, fires, codeCondition); err != nil {
+ VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13)
+ `, trg.Id, trg.ModuleId, trg.PgFunctionId, trg.RelationId, trg.OnInsert,
+ trg.OnUpdate, trg.OnDelete, trg.IsConstraint, trg.IsDeferrable,
+ trg.IsDeferred, trg.PerRow, trg.Fires, trg.CodeCondition); err != nil {
return err
}
@@ -132,13 +130,13 @@ func Set_tx(tx pgx.Tx, pgFunctionId uuid.UUID, id uuid.UUID,
// process options
events := make([]string, 0)
- if onInsert {
+ if trg.OnInsert {
events = append(events, "INSERT")
}
- if onUpdate {
+ if trg.OnUpdate {
events = append(events, "UPDATE")
}
- if onDelete {
+ if trg.OnDelete {
events = append(events, "DELETE")
}
if len(events) == 0 {
@@ -146,23 +144,23 @@ func Set_tx(tx pgx.Tx, pgFunctionId uuid.UUID, id uuid.UUID,
}
forEach := "STATEMENT"
- if perRow {
+ if trg.PerRow {
forEach = "ROW"
}
condition := ""
- if codeCondition != "" {
- condition = fmt.Sprintf("WHEN (%s)", codeCondition)
+ if trg.CodeCondition != "" {
+ condition = fmt.Sprintf("WHEN (%s)", trg.CodeCondition)
}
// constraint trigger options
triggerType := "TRIGGER"
constraint := ""
- if isConstraint {
+ if trg.IsConstraint {
triggerType = "CONSTRAINT TRIGGER"
- if isDeferrable {
- if !isDeferred {
+ if trg.IsDeferrable {
+ if !trg.IsDeferred {
constraint = "DEFERRABLE"
} else {
constraint = "DEFERRABLE INITIALLY DEFERRED"
@@ -171,12 +169,12 @@ func Set_tx(tx pgx.Tx, pgFunctionId uuid.UUID, id uuid.UUID,
}
// create trigger
- _, nameFnc, argsFnc, _, err := schema.GetPgFunctionDetailsById_tx(tx, pgFunctionId)
+ nameModFnc, nameFnc, argsFnc, _, err := schema.GetPgFunctionDetailsById_tx(ctx, tx, trg.PgFunctionId)
if err != nil {
return err
}
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
CREATE %s "%s"
%s %s
ON "%s"."%s"
@@ -184,13 +182,13 @@ func Set_tx(tx pgx.Tx, pgFunctionId uuid.UUID, id uuid.UUID,
FOR EACH %s
%s
EXECUTE FUNCTION "%s"."%s"(%s)
- `, triggerType, getName(id),
- fires, strings.Join(events, " OR "),
+ `, triggerType, getName(trg.Id),
+ trg.Fires, strings.Join(events, " OR "),
nameMod, nameRel,
constraint,
forEach,
condition,
- nameMod, nameFnc, argsFnc)); err != nil {
+ nameModFnc, nameFnc, argsFnc)); err != nil {
return err
}
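A rough sketch of the trigger DDL that the reworked Set_tx assembles, using placeholder identifiers (deferral handling is simplified here to the fully deferred case):

```go
package example

import (
	"fmt"
	"strings"
)

// buildTriggerDDL approximates the CREATE TRIGGER statement built above.
// All identifiers are placeholders; constraint triggers are always rendered
// as DEFERRABLE INITIALLY DEFERRED for brevity.
func buildTriggerDDL(isConstraint bool, fires string, events []string, perRow bool, condition string) string {
	triggerType, constraint := "TRIGGER", ""
	if isConstraint {
		triggerType = "CONSTRAINT TRIGGER"
		constraint = "DEFERRABLE INITIALLY DEFERRED"
	}
	forEach := "STATEMENT"
	if perRow {
		forEach = "ROW"
	}
	when := ""
	if condition != "" {
		when = fmt.Sprintf("WHEN (%s)", condition)
	}
	return fmt.Sprintf(`CREATE %s "trg_example" %s %s ON "my_module"."my_relation" %s FOR EACH %s %s EXECUTE FUNCTION "my_module"."my_function"()`,
		triggerType, fires, strings.Join(events, " OR "), constraint, forEach, when)
}
```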
diff --git a/schema/preset/preset.go b/schema/preset/preset.go
index 411b0e48..f402bb8d 100644
--- a/schema/preset/preset.go
+++ b/schema/preset/preset.go
@@ -1,10 +1,11 @@
package preset
import (
+ "context"
"errors"
"fmt"
- "r3/db"
"r3/schema"
+ "r3/schema/compatible"
"r3/types"
"strings"
@@ -12,18 +13,18 @@ import (
"github.com/jackc/pgx/v5"
)
-func Del_tx(tx pgx.Tx, id uuid.UUID) error {
+func Del_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
var recordId int64
var modName, relName string
var protected bool
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT pr.record_id_wofk, r.name, m.name, p.protected
FROM app.preset AS p
INNER JOIN instance.preset_record AS pr ON pr.preset_id = p.id
- INNER JOIN app.relation AS r ON r.id = p.relation_id
- INNER JOIN app.module AS m ON m.id = r.module_id
+ INNER JOIN app.relation AS r ON r.id = p.relation_id
+ INNER JOIN app.module AS m ON m.id = r.module_id
WHERE p.id = $1
`, id).Scan(&recordId, &relName, &modName, &protected); err != nil && err != pgx.ErrNoRows {
return err
@@ -33,7 +34,7 @@ func Del_tx(tx pgx.Tx, id uuid.UUID) error {
// protected records are system-relevant and are controlled by the module author, who decides when they are deleted
// non-protected records are optional and can be controlled by the instance users, they might want to keep them
if protected && recordId != 0 {
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
DELETE FROM "%s"."%s"
WHERE id = $1
`, modName, relName), recordId); err != nil {
@@ -41,7 +42,7 @@ func Del_tx(tx pgx.Tx, id uuid.UUID) error {
}
}
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.preset
WHERE id = $1
`, id); err != nil {
@@ -50,11 +51,11 @@ func Del_tx(tx pgx.Tx, id uuid.UUID) error {
return nil
}
-func Get(relationId uuid.UUID) ([]types.Preset, error) {
+func Get_tx(ctx context.Context, tx pgx.Tx, relationId uuid.UUID) ([]types.Preset, error) {
presets := make([]types.Preset, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT id, name, protected
FROM app.preset
WHERE relation_id = $1
@@ -63,22 +64,20 @@ func Get(relationId uuid.UUID) ([]types.Preset, error) {
if err != nil {
return presets, err
}
+ defer rows.Close()
for rows.Next() {
var p types.Preset
if err := rows.Scan(&p.Id, &p.Name, &p.Protected); err != nil {
- rows.Close()
return presets, err
}
p.RelationId = relationId
presets = append(presets, p)
}
- rows.Close()
// get preset values
for i, p := range presets {
-
- presets[i].Values, err = getValues(p.Id)
+ presets[i].Values, err = getValues_tx(ctx, tx, p.Id)
if err != nil {
return presets, err
}
@@ -89,26 +88,25 @@ func Get(relationId uuid.UUID) ([]types.Preset, error) {
// set preset
// included setting of preset values and creation/update of preset record
// returns whether preset record was created/updated
-func Set_tx(tx pgx.Tx, relationId uuid.UUID, id uuid.UUID, name string,
+func Set_tx(ctx context.Context, tx pgx.Tx, relationId uuid.UUID, id uuid.UUID, name string,
protected bool, values []types.PresetValue) error {
if len(values) == 0 {
return errors.New("cannot set preset with zero values")
}
- // resolve dependencies
- modName, relName, err := schema.GetRelationNamesById_tx(tx, relationId)
+ modName, relName, err := schema.GetRelationNamesById_tx(ctx, tx, relationId)
if err != nil {
return err
}
- known, err := schema.CheckCreateId_tx(tx, &id, "preset", "id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &id, "preset", "id")
if err != nil {
return err
}
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.preset
SET name = $1, protected = $2
WHERE id = $3
@@ -116,7 +114,7 @@ func Set_tx(tx pgx.Tx, relationId uuid.UUID, id uuid.UUID, name string,
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.preset (id, relation_id, name, protected)
VALUES ($1,$2,$3,$4)
`, id, relationId, name, protected); err != nil {
@@ -125,7 +123,7 @@ func Set_tx(tx pgx.Tx, relationId uuid.UUID, id uuid.UUID, name string,
// instance data reference
// connects preset from schema to record from instance
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO instance.preset_record (preset_id, record_id_wofk)
VALUES ($1,0)
`, id); err != nil {
@@ -134,7 +132,7 @@ func Set_tx(tx pgx.Tx, relationId uuid.UUID, id uuid.UUID, name string,
}
// set new preset values
- if err := setValues_tx(tx, id, values); err != nil {
+ if err := setValues_tx(ctx, tx, relationId, id, values); err != nil {
return err
}
@@ -145,7 +143,7 @@ func Set_tx(tx pgx.Tx, relationId uuid.UUID, id uuid.UUID, name string,
var recordExists bool = false
var fullRelName = fmt.Sprintf(`"%s"."%s"`, modName, relName)
- if err := tx.QueryRow(db.Ctx, fmt.Sprintf(`
+ if err := tx.QueryRow(ctx, fmt.Sprintf(`
SELECT record_id_wofk, EXISTS(
SELECT FROM %s
WHERE "%s" = record_id_wofk
@@ -159,7 +157,7 @@ func Set_tx(tx pgx.Tx, relationId uuid.UUID, id uuid.UUID, name string,
if recordExists {
// update preset record if available
- if err := setRecord_tx(tx, relationId, id, recordId, values, fullRelName); err != nil {
+ if err := setRecord_tx(ctx, tx, id, recordId, values, fullRelName); err != nil {
return err
}
@@ -168,7 +166,7 @@ func Set_tx(tx pgx.Tx, relationId uuid.UUID, id uuid.UUID, name string,
// * it did not exist before or
// * it did exist, but not anymore and is currently a protected preset
// (preset record was deleted before it was protected)
- if err := setRecord_tx(tx, relationId, id, 0, values, fullRelName); err != nil {
+ if err := setRecord_tx(ctx, tx, id, 0, values, fullRelName); err != nil {
return err
}
}
@@ -176,14 +174,14 @@ func Set_tx(tx pgx.Tx, relationId uuid.UUID, id uuid.UUID, name string,
}
// preset values
-func getValues(presetId uuid.UUID) ([]types.PresetValue, error) {
+func getValues_tx(ctx context.Context, tx pgx.Tx, presetId uuid.UUID) ([]types.PresetValue, error) {
values := make([]types.PresetValue, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT id, preset_id, preset_id_refer, attribute_id, protected, value
FROM app.preset_value
WHERE preset_id = $1
- ORDER BY attribute_id ASC -- an order is required for hash comparisson (module changes)
+ ORDER BY attribute_id ASC -- an order is required for hash comparison (module changes)
-- we use attribute ID for better value preview in builder UI
`, presetId)
if err != nil {
@@ -193,9 +191,7 @@ func getValues(presetId uuid.UUID) ([]types.PresetValue, error) {
for rows.Next() {
var v types.PresetValue
- if err := rows.Scan(&v.Id, &v.PresetId, &v.PresetIdRefer, &v.AttributeId,
- &v.Protected, &v.Value); err != nil {
-
+ if err := rows.Scan(&v.Id, &v.PresetId, &v.PresetIdRefer, &v.AttributeId, &v.Protected, &v.Value); err != nil {
return values, err
}
values = append(values, v)
@@ -203,10 +199,10 @@ func getValues(presetId uuid.UUID) ([]types.PresetValue, error) {
return values, nil
}
-func setValues_tx(tx pgx.Tx, presetId uuid.UUID, values []types.PresetValue) error {
+func setValues_tx(ctx context.Context, tx pgx.Tx, relationId uuid.UUID, presetId uuid.UUID, values []types.PresetValue) error {
// delete old preset values
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.preset_value
WHERE preset_id = $1
`, presetId); err != nil {
@@ -224,7 +220,21 @@ func setValues_tx(tx pgx.Tx, presetId uuid.UUID, values []types.PresetValue) err
}
}
- if _, err := tx.Exec(db.Ctx, `
+ // make sure that preset values belong to the correct relation
+ var relationIdAtr uuid.UUID
+ if err := tx.QueryRow(ctx, `
+ SELECT relation_id
+ FROM app.attribute
+ WHERE id = $1
+ `, value.AttributeId).Scan(&relationIdAtr); err != nil {
+ return err
+ }
+
+ if relationIdAtr.String() != relationId.String() {
+ return fmt.Errorf("cannot save preset values, at least 1 attribute value is from a different relation")
+ }
+
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.preset_value (id, preset_id,
preset_id_refer, attribute_id, protected, value)
VALUES ($1,$2,$3,$4,$5,$6)
@@ -239,8 +249,7 @@ func setValues_tx(tx pgx.Tx, presetId uuid.UUID, values []types.PresetValue) err
// set preset record
// returns whether record could be created/updated
-func setRecord_tx(tx pgx.Tx, relationId uuid.UUID, presetId uuid.UUID, recordId int64,
- values []types.PresetValue, fullRelName string) error {
+func setRecord_tx(ctx context.Context, tx pgx.Tx, presetId uuid.UUID, recordId int64, values []types.PresetValue, fullRelName string) error {
sqlRefs := make([]string, 0)
sqlNames := make([]string, 0)
@@ -251,43 +260,44 @@ func setRecord_tx(tx pgx.Tx, relationId uuid.UUID, presetId uuid.UUID, recordId
for _, value := range values {
// only update existing values if they are protected
- // unprotected values can be overwritten by customer
- // in effect, unprotected values work as one-time-only values (pre-filled data)
+ // unprotected values can be overwritten by the customer (one-time-only values, such as pre-filled or example data)
if !isNew && !value.Protected {
continue
}
- atrName, err := schema.GetAttributeNameById_tx(tx, value.AttributeId)
+ _, _, atrName, atrContent, err := schema.GetAttributeDetailsById_tx(ctx, tx, value.AttributeId)
if err != nil {
return err
}
- // check for fixed value
- if !value.PresetIdRefer.Valid {
-
- if value.Value == "" {
- // no value set, ignore
- continue
- }
- sqlNames = append(sqlNames, fmt.Sprintf(`"%s"`, atrName))
- sqlValues = append(sqlValues, value.Value)
+ if schema.IsContentFiles(atrContent) {
+ // files cannot be applied via presets
continue
}
- // use refered preset record ID as value
- recordId, exists, err := getRecordIdByReferal_tx(tx, value.PresetIdRefer.Bytes)
- if err != nil {
- return err
- }
+ sqlNames = append(sqlNames, fmt.Sprintf(`"%s"`, atrName))
- // if refered record does not exist, do not set record
- // otherwise potential NOT NULL constraint would be breached
- if !exists {
- return fmt.Errorf("referenced preset '%s' does not exist",
- uuid.FromBytesOrNil(value.PresetIdRefer.Bytes[:]))
+ if schema.IsContentRelationship(atrContent) {
+ if value.PresetIdRefer.Valid {
+ // use referred preset record ID as value
+ recordIdRefer, exists, err := getRecordIdByReferal_tx(ctx, tx, value.PresetIdRefer.Bytes)
+ if err != nil {
+ return err
+ }
+
+ // if referred record does not exist, do not set record
+ // otherwise potential NOT NULL constraint would be breached
+ if !exists {
+ return fmt.Errorf("referenced preset '%s' does not exist", value.PresetIdRefer.String())
+ }
+
+ sqlValues = append(sqlValues, recordIdRefer)
+ } else {
+ sqlValues = append(sqlValues, nil)
+ }
+ } else {
+ sqlValues = append(sqlValues, compatible.FixPresetNull(value.Value))
}
- sqlNames = append(sqlNames, fmt.Sprintf(`"%s"`, atrName))
- sqlValues = append(sqlValues, recordId)
}
if isNew {
@@ -295,7 +305,7 @@ func setRecord_tx(tx pgx.Tx, relationId uuid.UUID, presetId uuid.UUID, recordId
sqlRefs = append(sqlRefs, fmt.Sprintf(`$%d`, i+1))
}
- if err := tx.QueryRow(db.Ctx, fmt.Sprintf(`
+ if err := tx.QueryRow(ctx, fmt.Sprintf(`
INSERT INTO %s (%s)
VALUES (%s)
RETURNING "%s"
@@ -309,7 +319,7 @@ func setRecord_tx(tx pgx.Tx, relationId uuid.UUID, presetId uuid.UUID, recordId
}
// connect instance record ID to preset
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE instance.preset_record
SET record_id_wofk = $1
WHERE preset_id = $2
@@ -325,7 +335,7 @@ func setRecord_tx(tx pgx.Tx, relationId uuid.UUID, presetId uuid.UUID, recordId
refId := fmt.Sprintf("$%d", len(sqlRefs)+1)
sqlValues = append(sqlValues, recordId)
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
UPDATE %s
SET %s
WHERE "%s" = %s
@@ -342,21 +352,21 @@ func setRecord_tx(tx pgx.Tx, relationId uuid.UUID, presetId uuid.UUID, recordId
return nil
}
-// get ID of refered preset record
-// returns record ID and whether refered record actually exists
+// get ID of referred preset record
+// returns record ID and whether referred record actually exists
// (unprotected preset record can get deleted)
-func getRecordIdByReferal_tx(tx pgx.Tx, presetId uuid.UUID) (int64, bool, error) {
+func getRecordIdByReferal_tx(ctx context.Context, tx pgx.Tx, presetId uuid.UUID) (int64, bool, error) {
var recordId int64
var relName string
var modName string
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT pr.record_id_wofk, r.name, m.name
FROM instance.preset_record AS pr
- INNER JOIN app.preset AS p ON p.id = pr.preset_id
+ INNER JOIN app.preset AS p ON p.id = pr.preset_id
INNER JOIN app.relation AS r ON r.id = p.relation_id
- INNER JOIN app.module AS m ON m.id = r.module_id
+ INNER JOIN app.module AS m ON m.id = r.module_id
WHERE pr.preset_id = $1
`, presetId).Scan(&recordId, &relName, &modName); err != nil && err != pgx.ErrNoRows {
return 0, false, err
@@ -369,7 +379,7 @@ func getRecordIdByReferal_tx(tx pgx.Tx, presetId uuid.UUID) (int64, bool, error)
// check whether preset record actually exist (might have been deleted)
exists := false
- if err := tx.QueryRow(db.Ctx, fmt.Sprintf(`
+ if err := tx.QueryRow(ctx, fmt.Sprintf(`
SELECT EXISTS (
SELECT FROM "%s"."%s"
WHERE id = $1
diff --git a/schema/query/query.go b/schema/query/query.go
index 8e108e93..36f3798a 100644
--- a/schema/query/query.go
+++ b/schema/query/query.go
@@ -1,13 +1,12 @@
package query
import (
+ "context"
"errors"
"fmt"
- "r3/db"
- "r3/schema"
"r3/schema/caption"
- "r3/tools"
"r3/types"
+ "slices"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
@@ -16,7 +15,7 @@ import (
var allowedEntities = []string{"api", "form", "field", "collection", "column", "query_filter_query"}
-func Get(entity string, id uuid.UUID, filterPosition int, filterSide int) (types.Query, error) {
+func Get_tx(ctx context.Context, tx pgx.Tx, entity string, id uuid.UUID, filterIndex int, filterPosition int, filterSide int) (types.Query, error) {
var q types.Query
q.Joins = make([]types.QueryJoin, 0)
@@ -25,7 +24,7 @@ func Get(entity string, id uuid.UUID, filterPosition int, filterSide int) (types
q.Lookups = make([]types.QueryLookup, 0)
q.Choices = make([]types.QueryChoice, 0)
- if !tools.StringInSlice(entity, allowedEntities) {
+ if !slices.Contains(allowedEntities, entity) {
return q, errors.New("bad entity")
}
@@ -33,12 +32,13 @@ func Get(entity string, id uuid.UUID, filterPosition int, filterSide int) (types
filterClause := ""
if entity == "query_filter_query" {
filterClause = fmt.Sprintf(`
+ AND query_filter_index = %d
AND query_filter_position = %d
- AND query_filter_side = %d
- `, filterPosition, filterSide)
+ AND query_filter_side = %d
+ `, filterIndex, filterPosition, filterSide)
}
- err := db.Pool.QueryRow(db.Ctx, fmt.Sprintf(`
+ err := tx.QueryRow(ctx, fmt.Sprintf(`
SELECT id, relation_id, fixed_limit
FROM app.query
WHERE %s_id = $1
@@ -55,7 +55,7 @@ func Get(entity string, id uuid.UUID, filterPosition int, filterSide int) (types
}
// retrieve joins
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT relation_id, attribute_id, index_from, index, connector,
apply_create, apply_update, apply_delete
FROM app.query_join
@@ -65,29 +65,27 @@ func Get(entity string, id uuid.UUID, filterPosition int, filterSide int) (types
if err != nil {
return q, err
}
+ defer rows.Close()
for rows.Next() {
var j types.QueryJoin
- if err := rows.Scan(&j.RelationId, &j.AttributeId, &j.IndexFrom,
- &j.Index, &j.Connector, &j.ApplyCreate, &j.ApplyUpdate,
- &j.ApplyDelete); err != nil {
+ if err := rows.Scan(&j.RelationId, &j.AttributeId, &j.IndexFrom, &j.Index,
+ &j.Connector, &j.ApplyCreate, &j.ApplyUpdate, &j.ApplyDelete); err != nil {
- rows.Close()
return q, err
}
q.Joins = append(q.Joins, j)
}
- rows.Close()
// retrieve filters
- q.Filters, err = getFilters(q.Id, pgtype.UUID{})
+ q.Filters, err = getFilters_tx(ctx, tx, q.Id, pgtype.UUID{})
if err != nil {
return q, err
}
// retrieve orderings
- rows, err = db.Pool.Query(db.Ctx, `
+ rows, err = tx.Query(ctx, `
SELECT attribute_id, index, ascending
FROM app.query_order
WHERE query_id = $1
@@ -96,20 +94,19 @@ func Get(entity string, id uuid.UUID, filterPosition int, filterSide int) (types
if err != nil {
return q, err
}
+ defer rows.Close()
for rows.Next() {
var o types.QueryOrder
if err := rows.Scan(&o.AttributeId, &o.Index, &o.Ascending); err != nil {
- rows.Close()
return q, err
}
q.Orders = append(q.Orders, o)
}
- rows.Close()
// retrieve lookups
- rows, err = db.Pool.Query(db.Ctx, `
+ rows, err = tx.Query(ctx, `
SELECT pg_index_id, index
FROM app.query_lookup
WHERE query_id = $1
@@ -118,20 +115,19 @@ func Get(entity string, id uuid.UUID, filterPosition int, filterSide int) (types
if err != nil {
return q, err
}
+ defer rows.Close()
for rows.Next() {
var l types.QueryLookup
if err := rows.Scan(&l.PgIndexId, &l.Index); err != nil {
- rows.Close()
return q, err
}
q.Lookups = append(q.Lookups, l)
}
- rows.Close()
// retrieve choices
- rows, err = db.Pool.Query(db.Ctx, `
+ rows, err = tx.Query(ctx, `
SELECT id, name
FROM app.query_choice
WHERE query_id = $1
@@ -140,28 +136,25 @@ func Get(entity string, id uuid.UUID, filterPosition int, filterSide int) (types
if err != nil {
return q, err
}
+ defer rows.Close()
for rows.Next() {
var c types.QueryChoice
if err := rows.Scan(&c.Id, &c.Name); err != nil {
- rows.Close()
return q, err
}
q.Choices = append(q.Choices, c)
}
- rows.Close()
for i, c := range q.Choices {
- c.Filters, err = getFilters(q.Id, pgtype.UUID{Bytes: c.Id, Valid: true})
+ c.Filters, err = getFilters_tx(ctx, tx, q.Id, pgtype.UUID{Bytes: c.Id, Valid: true})
if err != nil {
- rows.Close()
return q, err
}
- c.Captions, err = caption.Get("query_choice", c.Id, []string{"queryChoiceTitle"})
+ c.Captions, err = caption.Get_tx(ctx, tx, "query_choice", c.Id, []string{"queryChoiceTitle"})
if err != nil {
- rows.Close()
return q, err
}
q.Choices[i] = c
@@ -169,79 +162,94 @@ func Get(entity string, id uuid.UUID, filterPosition int, filterSide int) (types
return q, nil
}
-func Set_tx(tx pgx.Tx, entity string, entityId uuid.UUID, filterPosition int,
- filterSide int, query types.Query) error {
+func Set_tx(ctx context.Context, tx pgx.Tx, entity string, entityId uuid.UUID, filterIndex int,
+ filterPosition int, filterSide int, query types.Query) error {
- if !tools.StringInSlice(entity, allowedEntities) {
+ if !slices.Contains(allowedEntities, entity) {
return fmt.Errorf("unknown query parent entity '%s'", entity)
}
- // sub query (via query filter) requires second element for key
var err error
- known := false
+ createNew := false
+ noBaseRelation := !query.RelationId.Valid
subQuery := entity == "query_filter_query"
- if !subQuery {
- known, err = schema.CheckCreateId_tx(tx, &entityId, "query", fmt.Sprintf("%s_id", entity))
+ // check if it's a new query; an existing query (for the same entity) still needs to be checked as it could have been remade
+ if query.Id == uuid.Nil {
+ query.Id, err = uuid.NewV4()
if err != nil {
return err
}
+ createNew = true
+ }
+
+ // check whether a query for the parent entity already exists
+ var queryIdExisting pgtype.UUID
+
+ if !subQuery {
+ if err := tx.QueryRow(ctx, fmt.Sprintf(`
+ SELECT id
+ FROM app.query
+ WHERE %s_id = $1
+ `, entity), entityId).Scan(&queryIdExisting); err != nil && err != pgx.ErrNoRows {
+ return err
+ }
} else {
- if err := tx.QueryRow(db.Ctx, `
- SELECT EXISTS(
- SELECT id
- FROM app.query
- WHERE query_filter_query_id = $1
- AND query_filter_position = $2
- AND query_filter_side = $3
- )
- `, entityId, filterPosition, filterSide).Scan(&known); err != nil {
+ if err := tx.QueryRow(ctx, `
+ SELECT id
+ FROM app.query
+ WHERE query_filter_query_id = $1
+ AND query_filter_index = $2
+ AND query_filter_position = $3
+ AND query_filter_side = $4
+ `, entityId, filterIndex, filterPosition, filterSide).Scan(&queryIdExisting); err != nil && err != pgx.ErrNoRows {
return err
}
}
- // query without a base relation is not used and therefore not needed
- if !query.RelationId.Valid {
- if known {
- if _, err := tx.Exec(db.Ctx, `
+ if !queryIdExisting.Valid {
+ // query does not exist, create
+ createNew = true
+ } else {
+ // query exists - delete if it was remade (different ID) or is not required anymore (query without a base relation)
+ if query.Id.String() != queryIdExisting.String() || noBaseRelation {
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.query
WHERE id = $1
- `, query.Id); err != nil {
+ `, queryIdExisting); err != nil {
return err
}
+ createNew = true
}
- return nil
}
- if !known {
- if query.Id == uuid.Nil {
- query.Id, err = uuid.NewV4()
- if err != nil {
- return err
- }
- }
+ if noBaseRelation {
+ // no query needed
+ return nil
+ }
+ // create or update query
+ if createNew {
if !subQuery {
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
INSERT INTO app.query (id, relation_id, fixed_limit, %s_id)
VALUES ($1,$2,$3,$4)
`, entity), query.Id, query.RelationId, query.FixedLimit, entityId); err != nil {
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
- INSERT INTO app.query (id, relation_id, fixed_limit,
- query_filter_query_id, query_filter_position,
- query_filter_side)
- VALUES ($1,$2,$3,$4,$5,$6)
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO app.query (id, relation_id, fixed_limit, query_filter_query_id,
+ query_filter_index, query_filter_position, query_filter_side)
+ VALUES ($1,$2,$3,$4,$5,$6,$7)
`, query.Id, query.RelationId, query.FixedLimit, entityId,
- filterPosition, filterSide); err != nil {
+ filterIndex, filterPosition, filterSide); err != nil {
return err
}
}
} else {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.query
SET relation_id = $1, fixed_limit = $2
WHERE id = $3
@@ -251,7 +259,7 @@ func Set_tx(tx pgx.Tx, entity string, entityId uuid.UUID, filterPosition int,
}
// reset joins
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.query_join
WHERE query_id = $1
`, query.Id); err != nil {
@@ -260,11 +268,11 @@ func Set_tx(tx pgx.Tx, entity string, entityId uuid.UUID, filterPosition int,
for position, j := range query.Joins {
- if !tools.StringInSlice(j.Connector, types.QueryJoinConnectors) {
+ if !slices.Contains(types.QueryJoinConnectors, j.Connector) {
return errors.New("invalid join connector")
}
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.query_join (
query_id, relation_id, attribute_id, position, index_from,
index, connector, apply_create, apply_update, apply_delete
@@ -279,18 +287,18 @@ func Set_tx(tx pgx.Tx, entity string, entityId uuid.UUID, filterPosition int,
}
// reset filters
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.query_filter
WHERE query_id = $1
`, query.Id); err != nil {
return err
}
- if err := setFilters_tx(tx, query.Id, pgtype.UUID{}, query.Filters, 0); err != nil {
+ if err := setFilters_tx(ctx, tx, query.Id, pgtype.UUID{}, query.Filters, 0); err != nil {
return err
}
// reset ordering
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.query_order
WHERE query_id = $1
`, query.Id); err != nil {
@@ -299,7 +307,7 @@ func Set_tx(tx pgx.Tx, entity string, entityId uuid.UUID, filterPosition int,
for position, o := range query.Orders {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.query_order (
query_id, attribute_id, position, index, ascending
)
@@ -310,7 +318,7 @@ func Set_tx(tx pgx.Tx, entity string, entityId uuid.UUID, filterPosition int,
}
// reset lookups
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.query_lookup
WHERE query_id = $1
`, query.Id); err != nil {
@@ -319,7 +327,7 @@ func Set_tx(tx pgx.Tx, entity string, entityId uuid.UUID, filterPosition int,
for _, l := range query.Lookups {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.query_lookup (query_id, pg_index_id, index)
VALUES ($1,$2,$3)
`, query.Id, l.PgIndexId, l.Index); err != nil {
@@ -328,7 +336,7 @@ func Set_tx(tx pgx.Tx, entity string, entityId uuid.UUID, filterPosition int,
}
// reset choices
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.query_choice
WHERE query_id = $1
`, query.Id); err != nil {
@@ -344,7 +352,7 @@ func Set_tx(tx pgx.Tx, entity string, entityId uuid.UUID, filterPosition int,
}
}
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.query_choice (id, query_id, name, position)
VALUES ($1,$2,$3,$4)
`, c.Id, query.Id, c.Name, position); err != nil {
@@ -356,19 +364,19 @@ func Set_tx(tx pgx.Tx, entity string, entityId uuid.UUID, filterPosition int,
// (necessary as query ID + position is used as PK
positionOffset := (position + 1) * 100
- if err := setFilters_tx(tx, query.Id, pgtype.UUID{Bytes: c.Id, Valid: true},
+ if err := setFilters_tx(ctx, tx, query.Id, pgtype.UUID{Bytes: c.Id, Valid: true},
c.Filters, positionOffset); err != nil {
return err
}
- if err := caption.Set_tx(tx, c.Id, c.Captions); err != nil {
+ if err := caption.Set_tx(ctx, tx, c.Id, c.Captions); err != nil {
return err
}
}
return nil
}
-func getFilters(queryId uuid.UUID, queryChoiceId pgtype.UUID) ([]types.QueryFilter, error) {
+func getFilters_tx(ctx context.Context, tx pgx.Tx, queryId uuid.UUID, queryChoiceId pgtype.UUID) ([]types.QueryFilter, error) {
var filters = make([]types.QueryFilter, 0)
params := make([]interface{}, 0)
@@ -381,8 +389,14 @@ func getFilters(queryId uuid.UUID, queryChoiceId pgtype.UUID) ([]types.QueryFilt
}
// get filters
- rows, err := db.Pool.Query(db.Ctx, fmt.Sprintf(`
- SELECT connector, operator, position
+ type typeFilterPos struct {
+ filter types.QueryFilter
+ position int
+ }
+ filterPos := make([]typeFilterPos, 0)
+
+ rows, err := tx.Query(ctx, fmt.Sprintf(`
+ SELECT connector, operator, index, position
FROM app.query_filter
WHERE query_id = $1
%s
@@ -391,30 +405,24 @@ func getFilters(queryId uuid.UUID, queryChoiceId pgtype.UUID) ([]types.QueryFilt
if err != nil {
return filters, err
}
-
- type typeFilterPos struct {
- filter types.QueryFilter
- position int
- }
- filterPos := make([]typeFilterPos, 0)
+ defer rows.Close()
for rows.Next() {
var fp typeFilterPos
- if err := rows.Scan(&fp.filter.Connector, &fp.filter.Operator, &fp.position); err != nil {
+ if err := rows.Scan(&fp.filter.Connector, &fp.filter.Operator, &fp.filter.Index, &fp.position); err != nil {
return filters, err
}
filterPos = append(filterPos, fp)
}
- rows.Close()
for _, fp := range filterPos {
- fp.filter.Side0, err = getFilterSide(queryId, fp.position, 0)
+ fp.filter.Side0, err = getFilterSide_tx(ctx, tx, queryId, fp.filter.Index, fp.position, 0)
if err != nil {
return filters, err
}
- fp.filter.Side1, err = getFilterSide(queryId, fp.position, 1)
+ fp.filter.Side1, err = getFilterSide_tx(ctx, tx, queryId, fp.filter.Index, fp.position, 1)
if err != nil {
return filters, err
}
@@ -422,28 +430,29 @@ func getFilters(queryId uuid.UUID, queryChoiceId pgtype.UUID) ([]types.QueryFilt
}
return filters, nil
}
-func getFilterSide(queryId uuid.UUID, filterPosition int, side int) (types.QueryFilterSide, error) {
+func getFilterSide_tx(ctx context.Context, tx pgx.Tx, queryId uuid.UUID, filterIndex int, filterPosition int, side int) (types.QueryFilterSide, error) {
var s types.QueryFilterSide
var err error
- if err := db.Pool.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT attribute_id, attribute_index, attribute_nested, brackets,
collection_id, column_id, content, field_id, now_offset, preset_id,
- role_id, query_aggregator, value
+ role_id, variable_id, query_aggregator, value
FROM app.query_filter_side
- WHERE query_id = $1
- AND query_filter_position = $2
- AND side = $3
- `, queryId, filterPosition, side).Scan(&s.AttributeId, &s.AttributeIndex,
- &s.AttributeNested, &s.Brackets, &s.CollectionId, &s.ColumnId,
- &s.Content, &s.FieldId, &s.NowOffset, &s.PresetId, &s.RoleId,
+ WHERE query_id = $1
+ AND query_filter_index = $2
+ AND query_filter_position = $3
+ AND side = $4
+ `, queryId, filterIndex, filterPosition, side).Scan(&s.AttributeId, &s.AttributeIndex,
+ &s.AttributeNested, &s.Brackets, &s.CollectionId, &s.ColumnId, &s.Content,
+ &s.FieldId, &s.NowOffset, &s.PresetId, &s.RoleId, &s.VariableId,
&s.QueryAggregator, &s.Value); err != nil {
return s, err
}
if s.Content == "subQuery" {
- s.Query, err = Get("query_filter_query", queryId, filterPosition, side)
+ s.Query, err = Get_tx(ctx, tx, "query_filter_query", queryId, filterIndex, filterPosition, side)
if err != nil {
return s, err
}
@@ -453,59 +462,59 @@ func getFilterSide(queryId uuid.UUID, filterPosition int, side int) (types.Query
return s, nil
}
-func setFilters_tx(tx pgx.Tx, queryId uuid.UUID, queryChoiceId pgtype.UUID,
+func setFilters_tx(ctx context.Context, tx pgx.Tx, queryId uuid.UUID, queryChoiceId pgtype.UUID,
filters []types.QueryFilter, positionOffset int) error {
for position, f := range filters {
- if !tools.StringInSlice(f.Connector, types.QueryFilterConnectors) {
+ if !slices.Contains(types.QueryFilterConnectors, f.Connector) {
return errors.New("invalid filter connector")
}
- if !tools.StringInSlice(f.Operator, types.QueryFilterOperators) {
+ if !slices.Contains(types.QueryFilterOperators, f.Operator) {
return errors.New("invalid filter operator")
}
position += positionOffset
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.query_filter (query_id, query_choice_id,
- position, connector, operator)
- VALUES ($1,$2,$3,$4,$5)
- `, queryId, queryChoiceId, position, f.Connector, f.Operator); err != nil {
+ index, position, connector, operator)
+ VALUES ($1,$2,$3,$4,$5,$6)
+ `, queryId, queryChoiceId, f.Index, position, f.Connector, f.Operator); err != nil {
return err
}
- if err := SetFilterSide_tx(tx, queryId, position, 0, f.Side0); err != nil {
+ if err := SetFilterSide_tx(ctx, tx, queryId, f.Index, position, 0, f.Side0); err != nil {
return err
}
- if err := SetFilterSide_tx(tx, queryId, position, 1, f.Side1); err != nil {
+ if err := SetFilterSide_tx(ctx, tx, queryId, f.Index, position, 1, f.Side1); err != nil {
return err
}
}
return nil
}
-func SetFilterSide_tx(tx pgx.Tx, queryId uuid.UUID, filterPosition int,
- side int, s types.QueryFilterSide) error {
+func SetFilterSide_tx(ctx context.Context, tx pgx.Tx, queryId uuid.UUID, filterIndex int,
+ filterPosition int, side int, s types.QueryFilterSide) error {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.query_filter_side (
- query_id, query_filter_position, side, attribute_id,
- attribute_index, attribute_nested, brackets, collection_id,
- column_id, content, field_id, now_offset, preset_id, role_id,
+ query_id, query_filter_index, query_filter_position, side, attribute_id,
+ attribute_index, attribute_nested, brackets, collection_id, column_id,
+ content, field_id, now_offset, preset_id, role_id, variable_id,
query_aggregator, value
)
- VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16)
- `, queryId, filterPosition, side, s.AttributeId, s.AttributeIndex,
+ VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18)
+ `, queryId, filterIndex, filterPosition, side, s.AttributeId, s.AttributeIndex,
s.AttributeNested, s.Brackets, s.CollectionId, s.ColumnId, s.Content,
- s.FieldId, s.NowOffset, s.PresetId, s.RoleId, s.QueryAggregator,
- s.Value); err != nil {
+ s.FieldId, s.NowOffset, s.PresetId, s.RoleId, s.VariableId,
+ s.QueryAggregator, s.Value); err != nil {
return err
}
if s.Content == "subQuery" {
- if err := Set_tx(tx, "query_filter_query", queryId,
+ if err := Set_tx(ctx, tx, "query_filter_query", queryId, filterIndex,
filterPosition, side, s.Query); err != nil {
return err
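One detail worth calling out: filters and their sides are now keyed by the additional query_filter_index column, so a sub-query on a filter side is located by (query id, filter index, filter position, side) instead of only (query id, position, side). A small illustrative sketch of that composite key; the Go field names are illustrative, only the column names come from the schema above:

```go
package example

import "github.com/gofrs/uuid"

// filterKey identifies one filter row of a query after this change.
type filterKey struct {
	QueryId  uuid.UUID
	Index    int // new column: app.query_filter.index
	Position int
}

// filterSideKey identifies one of the two sides of that filter; a side with
// content "subQuery" owns a child query whose query_filter_index/position/side
// columns point back at exactly this key.
type filterSideKey struct {
	filterKey
	Side int // 0 = left side, 1 = right side
}
```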
diff --git a/schema/relation/relation.go b/schema/relation/relation.go
index 0ee5d3c1..2d6f975e 100644
--- a/schema/relation/relation.go
+++ b/schema/relation/relation.go
@@ -1,8 +1,8 @@
package relation
import (
+ "context"
"fmt"
- "r3/db"
"r3/db/check"
"r3/schema"
"r3/schema/attribute"
@@ -14,15 +14,15 @@ import (
"github.com/jackc/pgx/v5/pgtype"
)
-func Del_tx(tx pgx.Tx, id uuid.UUID) error {
+func Del_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
- modName, relName, err := schema.GetRelationNamesById_tx(tx, id)
+ modName, relName, err := schema.GetRelationNamesById_tx(ctx, tx, id)
if err != nil {
return err
}
// drop e2e encryption relation if its there
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
DROP TABLE IF EXISTS instance_e2ee."%s"
`, schema.GetEncKeyTableName(id))); err != nil {
return err
@@ -30,7 +30,7 @@ func Del_tx(tx pgx.Tx, id uuid.UUID) error {
// delete file relations for file attributes
atrIdsFile := make([]uuid.UUID, 0)
- if err := db.Pool.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT ARRAY_AGG(id)
FROM app.attribute
WHERE relation_id = $1
@@ -40,13 +40,17 @@ func Del_tx(tx pgx.Tx, id uuid.UUID) error {
}
for _, atrId := range atrIdsFile {
- if err := attribute.FileRelationsDelete_tx(tx, atrId); err != nil {
+ if err := attribute.FileRelationsDelete_tx(ctx, tx, atrId); err != nil {
return err
}
}
// drop relation
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`DROP TABLE "%s"."%s"`,
+ // CASCADE is relevant if the relation is deleted together with other elements during a transfer (import)
+ // an issue can occur if the deletion order is wrong (relation deleted before a referencing relationship attribute)
+ // CASCADE removes the foreign key from the affected attribute; the attribute or its relation is then deleted later during the transfer
+ // an invalid CASCADE is blocked by the system, as referenced relations cannot be deleted in the first place
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`DROP TABLE "%s"."%s" CASCADE`,
modName, relName)); err != nil {
return err
@@ -54,25 +58,25 @@ func Del_tx(tx pgx.Tx, id uuid.UUID) error {
// delete primary key sequence
// (is not removed automatically)
- if err := delPkSeq_tx(tx, modName, id); err != nil {
+ if err := delPkSeq_tx(ctx, tx, modName, id); err != nil {
return err
}
// delete relation reference
- _, err = tx.Exec(db.Ctx, `DELETE FROM app.relation WHERE id = $1`, id)
+ _, err = tx.Exec(ctx, `DELETE FROM app.relation WHERE id = $1`, id)
return err
}
-func delPkSeq_tx(tx pgx.Tx, modName string, id uuid.UUID) error {
- _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+func delPkSeq_tx(ctx context.Context, tx pgx.Tx, modName string, id uuid.UUID) error {
+ _, err := tx.Exec(ctx, fmt.Sprintf(`
DROP SEQUENCE "%s"."%s"
`, modName, schema.GetSequenceName(id)))
return err
}
-func Get(moduleId uuid.UUID) ([]types.Relation, error) {
+func Get_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID) ([]types.Relation, error) {
relations := make([]types.Relation, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT id, name, comment, encryption, retention_count, retention_days, (
SELECT id
FROM app.attribute
@@ -97,42 +101,44 @@ func Get(moduleId uuid.UUID) ([]types.Relation, error) {
}
r.ModuleId = moduleId
r.Attributes = make([]types.Attribute, 0)
+ r.Triggers = make([]types.PgTrigger, 0)
+ relations = append(relations, r)
+ }
- r.Policies, err = getPolicies(r.Id)
+ for i, r := range relations {
+ relations[i].Policies, err = getPolicies_tx(ctx, tx, r.Id)
if err != nil {
return relations, err
}
-
- relations = append(relations, r)
}
return relations, nil
}
-func Set_tx(tx pgx.Tx, rel types.Relation) error {
+func Set_tx(ctx context.Context, tx pgx.Tx, rel types.Relation) error {
if err := check.DbIdentifier(rel.Name); err != nil {
return err
}
- moduleName, err := schema.GetModuleNameById_tx(tx, rel.ModuleId)
+ moduleName, err := schema.GetModuleNameById_tx(ctx, tx, rel.ModuleId)
if err != nil {
return err
}
isNew := rel.Id == uuid.Nil
- known, err := schema.CheckCreateId_tx(tx, &rel.Id, "relation", "id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &rel.Id, "relation", "id")
if err != nil {
return err
}
if known {
- _, nameEx, err := schema.GetRelationNamesById_tx(tx, rel.Id)
+ _, nameEx, err := schema.GetRelationNamesById_tx(ctx, tx, rel.Id)
if err != nil {
return err
}
// update relation reference
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.relation
SET name = $1, comment = $2, retention_count = $3, retention_days = $4
WHERE id = $5
@@ -142,26 +148,26 @@ func Set_tx(tx pgx.Tx, rel types.Relation) error {
// if name changed, update relation and all affected entities
if nameEx != rel.Name {
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
ALTER TABLE "%s"."%s"
RENAME TO "%s"
`, moduleName, nameEx, rel.Name)); err != nil {
return err
}
- if err := pgFunction.RecreateAffectedBy_tx(tx, "relation", rel.Id); err != nil {
+ if err := pgFunction.RecreateAffectedBy_tx(ctx, tx, "relation", rel.Id); err != nil {
return fmt.Errorf("failed to recreate affected PG functions, %s", err)
}
}
} else {
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
CREATE TABLE "%s"."%s" ()
`, moduleName, rel.Name)); err != nil {
return err
}
// insert relation reference
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.relation (id, module_id, name, comment,
encryption, retention_count, retention_days)
VALUES ($1,$2,$3,$4,$5,$6,$7)
@@ -173,15 +179,27 @@ func Set_tx(tx pgx.Tx, rel types.Relation) error {
// create primary key attribute if relation is new (e. g. not imported or updated)
if isNew {
- if err := attribute.Set_tx(tx, rel.Id, uuid.Nil,
- pgtype.UUID{}, pgtype.UUID{}, schema.PkName, "integer", "default",
- 0, false, false, "", "", "", types.CaptionMap{}); err != nil {
-
+ if err := attribute.Set_tx(ctx, tx, types.Attribute{
+ Id: uuid.Nil,
+ RelationId: rel.Id,
+ RelationshipId: pgtype.UUID{},
+ IconId: pgtype.UUID{},
+ Name: schema.PkName,
+ Content: "integer",
+ ContentUse: "default",
+ Length: 0,
+ Nullable: false,
+ Encrypted: false,
+ Def: "",
+ OnUpdate: "",
+ OnDelete: "",
+ Captions: types.CaptionMap{},
+ }); err != nil {
return err
}
}
}
// set policies
- return setPolicies_tx(tx, rel.Id, rel.Policies)
+ return setPolicies_tx(ctx, tx, rel.Id, rel.Policies)
}
diff --git a/schema/relation/relationPolicy.go b/schema/relation/relationPolicy.go
index c7cbdbcd..1061714c 100644
--- a/schema/relation/relationPolicy.go
+++ b/schema/relation/relationPolicy.go
@@ -1,26 +1,25 @@
package relation
import (
- "r3/db"
+ "context"
"r3/types"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
)
-func delPolicies_tx(tx pgx.Tx, relationId uuid.UUID) error {
- _, err := tx.Exec(db.Ctx, `
+func delPolicies_tx(ctx context.Context, tx pgx.Tx, relationId uuid.UUID) error {
+ _, err := tx.Exec(ctx, `
DELETE FROM app.relation_policy
WHERE relation_id = $1
`, relationId)
return err
}
-func getPolicies(relationId uuid.UUID) ([]types.RelationPolicy, error) {
-
+func getPolicies_tx(ctx context.Context, tx pgx.Tx, relationId uuid.UUID) ([]types.RelationPolicy, error) {
policies := make([]types.RelationPolicy, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT role_id, pg_function_id_excl, pg_function_id_incl,
action_delete, action_select, action_update
FROM app.relation_policy
@@ -35,9 +34,8 @@ func getPolicies(relationId uuid.UUID) ([]types.RelationPolicy, error) {
for rows.Next() {
var p types.RelationPolicy
- if err := rows.Scan(&p.RoleId, &p.PgFunctionIdExcl,
- &p.PgFunctionIdIncl, &p.ActionDelete, &p.ActionSelect,
- &p.ActionUpdate); err != nil {
+ if err := rows.Scan(&p.RoleId, &p.PgFunctionIdExcl, &p.PgFunctionIdIncl,
+ &p.ActionDelete, &p.ActionSelect, &p.ActionUpdate); err != nil {
return policies, err
}
@@ -46,14 +44,14 @@ func getPolicies(relationId uuid.UUID) ([]types.RelationPolicy, error) {
return policies, nil
}
-func setPolicies_tx(tx pgx.Tx, relationId uuid.UUID, policies []types.RelationPolicy) error {
+func setPolicies_tx(ctx context.Context, tx pgx.Tx, relationId uuid.UUID, policies []types.RelationPolicy) error {
- if err := delPolicies_tx(tx, relationId); err != nil {
+ if err := delPolicies_tx(ctx, tx, relationId); err != nil {
return err
}
for i, p := range policies {
- _, err := tx.Exec(db.Ctx, `
+ _, err := tx.Exec(ctx, `
INSERT INTO app.relation_policy (
relation_id, position, role_id,
pg_function_id_excl, pg_function_id_incl,
diff --git a/schema/relation/relationPreview.go b/schema/relation/relationPreview.go
index c8621bb8..d36d6017 100644
--- a/schema/relation/relationPreview.go
+++ b/schema/relation/relationPreview.go
@@ -1,15 +1,16 @@
package relation
import (
+ "context"
"fmt"
- "r3/db"
"r3/schema"
"strings"
"github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
)
-func GetPreview(id uuid.UUID, limit int, offset int) (interface{}, error) {
+func GetPreview(ctx context.Context, tx pgx.Tx, id uuid.UUID, limit int, offset int) (interface{}, error) {
var modName, relName string
atrNames := make([]string, 0)
@@ -23,7 +24,7 @@ func GetPreview(id uuid.UUID, limit int, offset int) (interface{}, error) {
}
// get relation/attribute/module details
- if err := db.Pool.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT r.name, m.name, ARRAY(
SELECT name
FROM app.attribute
@@ -38,8 +39,8 @@ func GetPreview(id uuid.UUID, limit int, offset int) (interface{}, error) {
return nil, err
}
- // get total count of tupels from relation
- if err := db.Pool.QueryRow(db.Ctx, fmt.Sprintf(`
+ // get total count of tuples from relation
+ if err := tx.QueryRow(ctx, fmt.Sprintf(`
SELECT COUNT(*)
FROM "%s"."%s"
`, modName, relName)).Scan(&res.RowCount); err != nil {
@@ -47,7 +48,7 @@ func GetPreview(id uuid.UUID, limit int, offset int) (interface{}, error) {
}
// get records from relation
- rows, err := db.Pool.Query(db.Ctx, fmt.Sprintf(`
+ rows, err := tx.Query(ctx, fmt.Sprintf(`
SELECT "%s"
FROM "%s"."%s"
ORDER BY "%s" ASC
@@ -60,13 +61,8 @@ func GetPreview(id uuid.UUID, limit int, offset int) (interface{}, error) {
defer rows.Close()
for rows.Next() {
- valuePointers := make([]interface{}, len(atrNames))
- valuesAll := make([]interface{}, len(atrNames))
- for i := 0; i < len(atrNames); i++ {
- valuePointers[i] = &valuesAll[i]
- }
-
- if err := rows.Scan(valuePointers...); err != nil {
+ valuesAll, err := rows.Values()
+ if err != nil {
return nil, err
}
res.Rows = append(res.Rows, valuesAll)
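
GetPreview now reads whole result rows via pgx's rows.Values() instead of building a pointer slice per column. A minimal standalone sketch of that pattern, assuming pgx v5 and a placeholder connection string:

package main

import (
	"context"
	"fmt"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, "postgres://user:pass@localhost:5432/db") // placeholder DSN
	if err != nil {
		panic(err)
	}
	defer conn.Close(ctx)

	rows, err := conn.Query(ctx, `SELECT id, name FROM app.module`)
	if err != nil {
		panic(err)
	}
	defer rows.Close()

	for rows.Next() {
		// Values() returns every column of the current row as []any,
		// replacing the manual valuePointers/valuesAll juggling
		values, err := rows.Values()
		if err != nil {
			panic(err)
		}
		fmt.Println(values)
	}
}
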
diff --git a/schema/role/role.go b/schema/role/role.go
index 47859f11..1ed25f54 100644
--- a/schema/role/role.go
+++ b/schema/role/role.go
@@ -1,20 +1,21 @@
package role
import (
+ "context"
"errors"
"fmt"
- "r3/db"
"r3/schema"
"r3/schema/caption"
+ "r3/schema/compatible"
"r3/types"
- "strings"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
+ "github.com/jackc/pgx/v5/pgtype"
)
-func Del_tx(tx pgx.Tx, id uuid.UUID) error {
- _, err := tx.Exec(db.Ctx, `
+func Del_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
+ _, err := tx.Exec(ctx, `
DELETE FROM app.role
WHERE id = $1
AND content <> 'everyone' -- cannot delete default role
@@ -22,10 +23,10 @@ func Del_tx(tx pgx.Tx, id uuid.UUID) error {
return err
}
-func Get(moduleId uuid.UUID) ([]types.Role, error) {
+func Get_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID) ([]types.Role, error) {
roles := make([]types.Role, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := tx.Query(ctx, `
SELECT r.id, r.name, r.content, r.assignable, ARRAY(
SELECT role_id_child
FROM app.role_child
@@ -38,6 +39,7 @@ func Get(moduleId uuid.UUID) ([]types.Role, error) {
if err != nil {
return roles, err
}
+ defer rows.Close()
for rows.Next() {
var r types.Role
@@ -47,17 +49,16 @@ func Get(moduleId uuid.UUID) ([]types.Role, error) {
r.ModuleId = moduleId
roles = append(roles, r)
}
- rows.Close()
// get access & captions
for i, r := range roles {
- r, err = getAccess(r)
+ r, err = getAccess_tx(ctx, tx, r)
if err != nil {
return roles, err
}
- r.Captions, err = caption.Get("role", r.Id, []string{"roleTitle", "roleDesc"})
+ r.Captions, err = caption.Get_tx(ctx, tx, "role", r.Id, []string{"roleTitle", "roleDesc"})
if err != nil {
return roles, err
}
@@ -66,16 +67,19 @@ func Get(moduleId uuid.UUID) ([]types.Role, error) {
return roles, nil
}
-func getAccess(role types.Role) (types.Role, error) {
+func getAccess_tx(ctx context.Context, tx pgx.Tx, role types.Role) (types.Role, error) {
role.AccessApis = make(map[uuid.UUID]int)
role.AccessAttributes = make(map[uuid.UUID]int)
+ role.AccessClientEvents = make(map[uuid.UUID]int)
role.AccessCollections = make(map[uuid.UUID]int)
role.AccessRelations = make(map[uuid.UUID]int)
role.AccessMenus = make(map[uuid.UUID]int)
+ role.AccessWidgets = make(map[uuid.UUID]int)
- rows, err := db.Pool.Query(db.Ctx, `
- SELECT api_id, attribute_id, collection_id, menu_id, relation_id, access
+ rows, err := tx.Query(ctx, `
+ SELECT api_id, attribute_id, client_event_id, collection_id,
+ menu_id, relation_id, widget_id, access
FROM app.role_access
WHERE role_id = $1
`, role.Id)
@@ -85,61 +89,55 @@ func getAccess(role types.Role) (types.Role, error) {
defer rows.Close()
for rows.Next() {
- var apiId, attributeId, collectionId, menuId, relationId uuid.NullUUID
+ var apiId, attributeId, clientEventId, collectionId, menuId, relationId, widgetId pgtype.UUID
var access int
- if err := rows.Scan(&apiId, &attributeId, &collectionId,
- &menuId, &relationId, &access); err != nil {
+ if err := rows.Scan(&apiId, &attributeId, &clientEventId, &collectionId,
+ &menuId, &relationId, &widgetId, &access); err != nil {
return role, err
}
if apiId.Valid {
- role.AccessApis[apiId.UUID] = access
+ role.AccessApis[apiId.Bytes] = access
}
if attributeId.Valid {
- role.AccessAttributes[attributeId.UUID] = access
+ role.AccessAttributes[attributeId.Bytes] = access
+ }
+ if clientEventId.Valid {
+ role.AccessClientEvents[clientEventId.Bytes] = access
}
if collectionId.Valid {
- role.AccessCollections[collectionId.UUID] = access
+ role.AccessCollections[collectionId.Bytes] = access
}
if menuId.Valid {
- role.AccessMenus[menuId.UUID] = access
+ role.AccessMenus[menuId.Bytes] = access
}
if relationId.Valid {
- role.AccessRelations[relationId.UUID] = access
+ role.AccessRelations[relationId.Bytes] = access
+ }
+ if widgetId.Valid {
+ role.AccessWidgets[widgetId.Bytes] = access
}
}
return role, nil
}
-func Set_tx(tx pgx.Tx, role types.Role) error {
+func Set_tx(ctx context.Context, tx pgx.Tx, role types.Role) error {
if role.Name == "" {
return errors.New("missing name")
}
// compatibility fix: missing role content <3.0
- if role.Content == "" {
- if role.Name == "everyone" {
- role.Content = "everyone"
- } else if strings.Contains(strings.ToLower(role.Name), "admin") {
- role.Content = "admin"
- } else if strings.Contains(strings.ToLower(role.Name), "data") {
- role.Content = "other"
- } else if strings.Contains(strings.ToLower(role.Name), "csv") {
- role.Content = "other"
- } else {
- role.Content = "user"
- }
- }
+ role = compatible.FixMissingRoleContent(role)
- known, err := schema.CheckCreateId_tx(tx, &role.Id, "role", "id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &role.Id, "role", "id")
if err != nil {
return err
}
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.role
SET name = $1, content = $2, assignable = $3
WHERE id = $4
@@ -148,7 +146,7 @@ func Set_tx(tx pgx.Tx, role types.Role) error {
return err
}
} else {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.role (id, module_id, name, content, assignable)
VALUES ($1,$2,$3,$4,$5)
`, role.Id, role.ModuleId, role.Name, role.Content, role.Assignable); err != nil {
@@ -157,14 +155,14 @@ func Set_tx(tx pgx.Tx, role types.Role) error {
}
// set children
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.role_child
WHERE role_id = $1
`, role.Id); err != nil {
return err
}
for _, childId := range role.ChildrenIds {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
INSERT INTO app.role_child (role_id, role_id_child)
VALUES ($1,$2)
`, role.Id, childId); err != nil {
@@ -173,7 +171,7 @@ func Set_tx(tx pgx.Tx, role types.Role) error {
}
// set access
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM app.role_access
WHERE role_id = $1
`, role.Id); err != nil {
@@ -181,36 +179,46 @@ func Set_tx(tx pgx.Tx, role types.Role) error {
}
for trgId, access := range role.AccessApis {
- if err := setAccess_tx(tx, role.Id, trgId, "api", access); err != nil {
+ if err := setAccess_tx(ctx, tx, role.Id, trgId, "api", access); err != nil {
return err
}
}
for trgId, access := range role.AccessAttributes {
- if err := setAccess_tx(tx, role.Id, trgId, "attribute", access); err != nil {
+ if err := setAccess_tx(ctx, tx, role.Id, trgId, "attribute", access); err != nil {
+ return err
+ }
+ }
+ for trgId, access := range role.AccessClientEvents {
+ if err := setAccess_tx(ctx, tx, role.Id, trgId, "client_event", access); err != nil {
return err
}
}
for trgId, access := range role.AccessCollections {
- if err := setAccess_tx(tx, role.Id, trgId, "collection", access); err != nil {
+ if err := setAccess_tx(ctx, tx, role.Id, trgId, "collection", access); err != nil {
return err
}
}
for trgId, access := range role.AccessMenus {
- if err := setAccess_tx(tx, role.Id, trgId, "menu", access); err != nil {
+ if err := setAccess_tx(ctx, tx, role.Id, trgId, "menu", access); err != nil {
return err
}
}
for trgId, access := range role.AccessRelations {
- if err := setAccess_tx(tx, role.Id, trgId, "relation", access); err != nil {
+ if err := setAccess_tx(ctx, tx, role.Id, trgId, "relation", access); err != nil {
+ return err
+ }
+ }
+ for trgId, access := range role.AccessWidgets {
+ if err := setAccess_tx(ctx, tx, role.Id, trgId, "widget", access); err != nil {
return err
}
}
// set captions
- return caption.Set_tx(tx, role.Id, role.Captions)
+ return caption.Set_tx(ctx, tx, role.Id, role.Captions)
}
-func setAccess_tx(tx pgx.Tx, roleId uuid.UUID, id uuid.UUID, entity string, access int) error {
+func setAccess_tx(ctx context.Context, tx pgx.Tx, roleId uuid.UUID, id uuid.UUID, entity string, access int) error {
// check valid access levels
switch entity {
@@ -222,6 +230,10 @@ func setAccess_tx(tx pgx.Tx, roleId uuid.UUID, id uuid.UUID, entity string, acce
if access < -1 || access > 2 {
return errors.New("invalid access level")
}
+ case "client_event": // 1 access client event
+ if access < -1 || access > 1 {
+ return errors.New("invalid access level")
+ }
case "collection": // 1 read collection
if access < -1 || access > 1 {
return errors.New("invalid access level")
@@ -234,6 +246,10 @@ func setAccess_tx(tx pgx.Tx, roleId uuid.UUID, id uuid.UUID, entity string, acce
if access < -1 || access > 3 {
return errors.New("invalid access level")
}
+ case "widget": // 1 access widget
+ if access < -1 || access > 1 {
+ return errors.New("invalid access level")
+ }
default:
return errors.New("invalid entity")
}
@@ -243,7 +259,7 @@ func setAccess_tx(tx pgx.Tx, roleId uuid.UUID, id uuid.UUID, entity string, acce
return nil
}
- _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ _, err := tx.Exec(ctx, fmt.Sprintf(`
INSERT INTO app.role_access (role_id, %s_id, access)
VALUES ($1,$2,$3)
`, entity), roleId, id, access)
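
The new client_event and widget entities only accept access levels -1 to 1, while relations keep -1 to 3. A short sketch of filling the access maps before calling Set_tx, assuming the r3 module path and an already opened pgx transaction:

package example

import (
	"context"

	"r3/schema/role"
	"r3/types"

	"github.com/gofrs/uuid"
	"github.com/jackc/pgx/v5"
)

// grantExampleAccess is a hypothetical helper; values must stay inside the
// ranges checked by setAccess_tx (-1..3 for relations, -1..1 for widgets)
func grantExampleAccess(ctx context.Context, tx pgx.Tx, r types.Role, relationId, widgetId uuid.UUID) error {
	r.AccessRelations = map[uuid.UUID]int{relationId: 3}
	r.AccessWidgets = map[uuid.UUID]int{widgetId: 1}
	return role.Set_tx(ctx, tx, r)
}
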
diff --git a/schema/schema.go b/schema/schema.go
index a9d6bb77..2791fdee 100644
--- a/schema/schema.go
+++ b/schema/schema.go
@@ -1,9 +1,9 @@
package schema
import (
+ "context"
"database/sql"
"fmt"
- "r3/db"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
@@ -13,7 +13,7 @@ import (
// if nil, it is overwritten with a new one
// if not nil, it is checked whether the ID is known already
// returns whether the ID was already known
-func CheckCreateId_tx(tx pgx.Tx, id *uuid.UUID, relName string, pkName string) (bool, error) {
+func CheckCreateId_tx(ctx context.Context, tx pgx.Tx, id *uuid.UUID, relName string, pkName string) (bool, error) {
var err error
if *id == uuid.Nil {
@@ -22,7 +22,7 @@ func CheckCreateId_tx(tx pgx.Tx, id *uuid.UUID, relName string, pkName string) (
}
var known bool
- err = tx.QueryRow(db.Ctx, fmt.Sprintf(`
+ err = tx.QueryRow(ctx, fmt.Sprintf(`
SELECT EXISTS(SELECT * FROM app.%s WHERE "%s" = $1)
`, relName, pkName), id).Scan(&known)
@@ -46,14 +46,34 @@ func IsContentText(content string) bool {
return content == "varchar" || content == "text"
}
-// fully validates module dependencies
-func ValidateDependency_tx(tx pgx.Tx, moduleId uuid.UUID) error {
+// scheduler checks
+func GetValidAtDay(intervalType string, atDay int) int {
+ switch intervalType {
+ case "months":
+ // day < 1 would go to previous month, which is undesirable on a monthly interval
+ if atDay < 1 || atDay > 31 {
+ atDay = 1
+ }
+ case "weeks":
+ // 0 = Sunday, 6 = Saturday
+ if atDay < 0 || atDay > 6 {
+ atDay = 1
+ }
+ case "years":
+ if atDay > 365 {
+ atDay = 1
+ }
+ }
+ return atDay
+}
+// fully validates module dependencies
+func ValidateDependency_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID) error {
var cnt int
var name1, name2 sql.NullString
// check parent module without dependency
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT COUNT(*), STRING_AGG(name, ', ')
FROM app.module
WHERE id = (
@@ -76,7 +96,7 @@ func ValidateDependency_tx(tx pgx.Tx, moduleId uuid.UUID) error {
}
// check attribute relationships with external relations
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT COUNT(*), STRING_AGG(CONCAT(r.name, '.', a.name), ', ')
FROM app.attribute AS a
INNER JOIN app.relation AS r
@@ -106,7 +126,7 @@ func ValidateDependency_tx(tx pgx.Tx, moduleId uuid.UUID) error {
}
// check query relation access
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT COUNT(*), STRING_AGG(COALESCE(f.name,lf.name), ', ')
FROM app.query AS q
LEFT JOIN app.form AS f ON f.id = q.form_id -- query for form
@@ -143,7 +163,7 @@ func ValidateDependency_tx(tx pgx.Tx, moduleId uuid.UUID) error {
}
// check field access to external forms
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT COUNT(*), STRING_AGG(f3.name, ', '), STRING_AGG(f1.name, ', ')
FROM app.open_form AS of
INNER JOIN app.form AS f1 ON f1.id = of.form_id_open -- opened form
@@ -174,7 +194,7 @@ func ValidateDependency_tx(tx pgx.Tx, moduleId uuid.UUID) error {
}
// check collection access to external forms
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT COUNT(*), STRING_AGG(f.name, ', ')
FROM app.open_form AS of
INNER JOIN app.form AS f ON f.id = of.form_id_open
@@ -205,7 +225,7 @@ func ValidateDependency_tx(tx pgx.Tx, moduleId uuid.UUID) error {
}
// check relation policy access to external roles
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT COUNT(*), STRING_AGG(r.name, ', ')
FROM app.relation_policy AS rp
INNER JOIN app.role AS r ON r.id = rp.role_id
@@ -233,10 +253,37 @@ func ValidateDependency_tx(tx pgx.Tx, moduleId uuid.UUID) error {
name1.String)
}
- // check menu access to external forms
- if err := tx.QueryRow(db.Ctx, `
+ // check trigger access to external relations
+ if err := tx.QueryRow(ctx, `
+ SELECT COUNT(*), STRING_AGG(r.name, ', ')
+ FROM app.pg_trigger AS t
+ INNER JOIN app.relation AS r ON r.id = t.relation_id
+ INNER JOIN app.module AS m ON m.id = t.module_id AND m.id = $1
+
+ -- dependency
+ WHERE r.id NOT IN (
+ SELECT id
+ FROM app.relation
+ WHERE module_id = m.id
+ OR module_id IN (
+ SELECT module_id_on
+ FROM app.module_depends
+ WHERE module_id = m.id
+ )
+ )
+ `, moduleId).Scan(&cnt, &name1); err != nil {
+ return err
+ }
+
+ if cnt != 0 {
+ return fmt.Errorf("dependency check failed, trigger functions accessing relation(s) '%s' from independent module(s)",
+ name1.String)
+ }
+
+ // check widget access to external forms
+ if err := tx.QueryRow(ctx, `
SELECT COUNT(*), STRING_AGG(f.name, ', ')
- FROM app.menu AS h
+ FROM app.widget AS h
INNER JOIN app.form AS f ON f.id = h.form_id
INNER JOIN app.module AS m
ON m.id = h.module_id
@@ -257,13 +304,43 @@ func ValidateDependency_tx(tx pgx.Tx, moduleId uuid.UUID) error {
return err
}
+ if cnt != 0 {
+ return fmt.Errorf("dependency check failed, widget(s) accessing form(s) '%s' from independent module(s)",
+ name1.String)
+ }
+
+ // check menu access to external forms
+ if err := tx.QueryRow(ctx, `
+ SELECT COUNT(*), STRING_AGG(f.name, ', ')
+ FROM app.menu AS h
+ INNER JOIN app.form AS f ON f.id = h.form_id
+ INNER JOIN app.menu_tab AS mt ON mt.id = h.menu_tab_id
+ INNER JOIN app.module AS m
+ ON m.id = mt.module_id
+ AND m.id = $1
+
+ -- dependency
+ WHERE f.id NOT IN (
+ SELECT id
+ FROM app.form
+ WHERE module_id = m.id
+ OR module_id IN (
+ SELECT module_id_on
+ FROM app.module_depends
+ WHERE module_id = m.id
+ )
+ )
+ `, moduleId).Scan(&cnt, &name1); err != nil {
+ return err
+ }
+
if cnt != 0 {
return fmt.Errorf("dependency check failed, menu(s) accessing form(s) '%s' from independent module(s)",
name1.String)
}
// check access to external icons
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT COUNT(*)
FROM app.icon
WHERE id IN (
@@ -318,7 +395,7 @@ func ValidateDependency_tx(tx pgx.Tx, moduleId uuid.UUID) error {
}
// check PG function access to external pgFunctions/modules/relations/attributes
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT COUNT(*)
FROM app.module
WHERE id IN (
@@ -384,7 +461,7 @@ func ValidateDependency_tx(tx pgx.Tx, moduleId uuid.UUID) error {
}
// check JS function access to external pgFunctions/jsFunctions/forms/fields/roles
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT COUNT(*)
FROM app.module
WHERE id IN (
@@ -462,7 +539,7 @@ func ValidateDependency_tx(tx pgx.Tx, moduleId uuid.UUID) error {
}
// check field (button/data) & form function access to external JS functions
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT COUNT(*)
FROM app.module
WHERE id IN (
@@ -489,6 +566,17 @@ func ValidateDependency_tx(tx pgx.Tx, moduleId uuid.UUID) error {
FROM app.form
WHERE module_id = $2
)
+
+ UNION
+
+ SELECT fv.js_function_id
+ FROM app.field_variable AS fv
+ JOIN app.field AS f ON f.id = fv.field_id
+ WHERE f.form_id IN (
+ SELECT id
+ FROM app.form
+ WHERE module_id = $2
+ )
UNION
@@ -518,7 +606,7 @@ func ValidateDependency_tx(tx pgx.Tx, moduleId uuid.UUID) error {
}
// check role membership inside external parent roles
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT COUNT(*), STRING_AGG(r.name, ', ')
FROM app.role AS r
INNER JOIN app.module AS m
@@ -549,7 +637,7 @@ func ValidateDependency_tx(tx pgx.Tx, moduleId uuid.UUID) error {
}
// check data presets without dependency
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
SELECT COUNT(*)
FROM app.preset
WHERE id IN (
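
The new GetValidAtDay helper clamps invalid "at day" values per scheduler interval type. A minimal sketch, assuming the r3 module path:

package main

import (
	"fmt"

	"r3/schema"
)

func main() {
	fmt.Println(schema.GetValidAtDay("months", 0))  // 1, day 0 would land in the previous month
	fmt.Println(schema.GetValidAtDay("weeks", 8))   // 1, weekdays run from 0 (Sunday) to 6 (Saturday)
	fmt.Println(schema.GetValidAtDay("years", 400)) // 1, clamped because it exceeds 365
	fmt.Println(schema.GetValidAtDay("days", 99))   // 99, other interval types pass through unchanged
}
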
diff --git a/schema/tab/tab.go b/schema/tab/tab.go
index 8f2ec558..dd26bb33 100644
--- a/schema/tab/tab.go
+++ b/schema/tab/tab.go
@@ -1,13 +1,13 @@
package tab
import (
+ "context"
"errors"
"fmt"
- "r3/db"
"r3/schema"
"r3/schema/caption"
- "r3/tools"
"r3/types"
+ "slices"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
@@ -15,19 +15,19 @@ import (
var allowedEntities = []string{"field"}
-func Del_tx(tx pgx.Tx, id uuid.UUID) error {
- _, err := tx.Exec(db.Ctx, `DELETE FROM app.tab WHERE id = $1`, id)
+func Del_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
+ _, err := tx.Exec(ctx, `DELETE FROM app.tab WHERE id = $1`, id)
return err
}
-func Get(entity string, entityId uuid.UUID) ([]types.Tab, error) {
+func Get_tx(ctx context.Context, tx pgx.Tx, entity string, entityId uuid.UUID) ([]types.Tab, error) {
tabs := make([]types.Tab, 0)
- if !tools.StringInSlice(entity, allowedEntities) {
+ if !slices.Contains(allowedEntities, entity) {
return tabs, errors.New("bad entity")
}
- rows, err := db.Pool.Query(db.Ctx, fmt.Sprintf(`
+ rows, err := tx.Query(ctx, fmt.Sprintf(`
SELECT id, content_counter, state
FROM app.tab
WHERE %s_id = $1
@@ -47,27 +47,26 @@ func Get(entity string, entityId uuid.UUID) ([]types.Tab, error) {
}
for i, tab := range tabs {
- tab.Captions, err = caption.Get("tab", tab.Id, []string{"tabTitle"})
+ tabs[i].Captions, err = caption.Get_tx(ctx, tx, "tab", tab.Id, []string{"tabTitle"})
if err != nil {
return tabs, err
}
- tabs[i] = tab
}
return tabs, nil
}
-func Set_tx(tx pgx.Tx, entity string, entityId uuid.UUID, position int, tab types.Tab) (uuid.UUID, error) {
- if !tools.StringInSlice(entity, allowedEntities) {
+func Set_tx(ctx context.Context, tx pgx.Tx, entity string, entityId uuid.UUID, position int, tab types.Tab) (uuid.UUID, error) {
+ if !slices.Contains(allowedEntities, entity) {
return tab.Id, errors.New("bad entity")
}
- known, err := schema.CheckCreateId_tx(tx, &tab.Id, "tab", "id")
+ known, err := schema.CheckCreateId_tx(ctx, tx, &tab.Id, "tab", "id")
if err != nil {
return tab.Id, err
}
if known {
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
UPDATE app.tab
SET position = $1, content_counter = $2, state = $3
WHERE id = $4
@@ -75,12 +74,12 @@ func Set_tx(tx pgx.Tx, entity string, entityId uuid.UUID, position int, tab type
return tab.Id, err
}
} else {
- if _, err := tx.Exec(db.Ctx, fmt.Sprintf(`
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`
INSERT INTO app.tab (id, %s_id, position, content_counter, state)
VALUES ($1,$2,$3,$4,$5)
`, entity), tab.Id, entityId, position, tab.ContentCounter, tab.State); err != nil {
return tab.Id, err
}
}
- return tab.Id, caption.Set_tx(tx, tab.Id, tab.Captions)
+ return tab.Id, caption.Set_tx(ctx, tx, tab.Id, tab.Captions)
}
diff --git a/schema/variable/variable.go b/schema/variable/variable.go
new file mode 100644
index 00000000..41c5984b
--- /dev/null
+++ b/schema/variable/variable.go
@@ -0,0 +1,69 @@
+package variable
+
+import (
+ "context"
+ "r3/schema"
+ "r3/types"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+)
+
+func Del_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
+ _, err := tx.Exec(ctx, `DELETE FROM app.variable WHERE id = $1`, id)
+ return err
+}
+
+func Get_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID) ([]types.Variable, error) {
+
+ variables := make([]types.Variable, 0)
+ rows, err := tx.Query(ctx, `
+ SELECT v.id, v.form_id, v.name, v.comment, v.content, v.content_use, v.def
+ FROM app.variable AS v
+ LEFT JOIN app.form AS f ON f.id = v.form_id
+ WHERE v.module_id = $1
+ ORDER BY
+ f.name ASC NULLS FIRST,
+ v.name ASC
+ `, moduleId)
+ if err != nil {
+ return variables, err
+ }
+ defer rows.Close()
+
+ for rows.Next() {
+ var v types.Variable
+ v.ModuleId = moduleId
+ if err := rows.Scan(&v.Id, &v.FormId, &v.Name, &v.Comment, &v.Content, &v.ContentUse, &v.Def); err != nil {
+ return variables, err
+ }
+ variables = append(variables, v)
+ }
+ return variables, nil
+}
+
+func Set_tx(ctx context.Context, tx pgx.Tx, v types.Variable) error {
+
+ known, err := schema.CheckCreateId_tx(ctx, tx, &v.Id, "variable", "id")
+ if err != nil {
+ return err
+ }
+
+ if known {
+ if _, err := tx.Exec(ctx, `
+ UPDATE app.variable
+ SET name = $1, comment = $2, content = $3, content_use = $4, def = $5
+ WHERE id = $6
+ `, v.Name, v.Comment, v.Content, v.ContentUse, v.Def, v.Id); err != nil {
+ return err
+ }
+ } else {
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO app.variable (id, module_id, form_id, name, comment, content, content_use, def)
+ VALUES ($1,$2,$3,$4,$5,$6,$7,$8)
+ `, v.Id, v.ModuleId, v.FormId, v.Name, v.Comment, v.Content, v.ContentUse, v.Def); err != nil {
+ return err
+ }
+ }
+ return nil
+}
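
Like the other schema entities, the new variable package expects a context and an open transaction. A minimal sketch of storing a module-level variable and reading it back, assuming the r3 module path and hypothetical field values:

package example

import (
	"context"

	"r3/schema/variable"
	"r3/types"

	"github.com/gofrs/uuid"
	"github.com/jackc/pgx/v5"
)

func setAndListVariables(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID) ([]types.Variable, error) {
	v := types.Variable{
		ModuleId:   moduleId,
		Name:       "exampleCounter", // hypothetical name
		Content:    "integer",        // hypothetical content type
		ContentUse: "default",
	}
	// a zero Id lets CheckCreateId_tx generate a new UUID
	if err := variable.Set_tx(ctx, tx, v); err != nil {
		return nil, err
	}
	return variable.Get_tx(ctx, tx, moduleId)
}
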
diff --git a/schema/widget/widget.go b/schema/widget/widget.go
new file mode 100644
index 00000000..864dd06f
--- /dev/null
+++ b/schema/widget/widget.go
@@ -0,0 +1,89 @@
+package widget
+
+import (
+ "context"
+ "r3/schema"
+ "r3/schema/caption"
+ "r3/schema/collection/consumer"
+ "r3/types"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5"
+)
+
+func Del_tx(ctx context.Context, tx pgx.Tx, id uuid.UUID) error {
+ _, err := tx.Exec(ctx, `DELETE FROM app.widget WHERE id = $1`, id)
+ return err
+}
+
+func Get_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID) ([]types.Widget, error) {
+
+ widgets := make([]types.Widget, 0)
+ rows, err := tx.Query(ctx, `
+ SELECT id, form_id, name, size
+ FROM app.widget
+ WHERE module_id = $1
+ ORDER BY name ASC
+ `, moduleId)
+ if err != nil {
+ return widgets, err
+ }
+ defer rows.Close()
+
+ for rows.Next() {
+ var w types.Widget
+ if err := rows.Scan(&w.Id, &w.FormId, &w.Name, &w.Size); err != nil {
+ return widgets, err
+ }
+ w.ModuleId = moduleId
+ widgets = append(widgets, w)
+ }
+
+ // get collections & captions
+ for i, w := range widgets {
+ widgets[i].Captions, err = caption.Get_tx(ctx, tx, "widget", w.Id, []string{"widgetTitle"})
+ if err != nil {
+ return widgets, err
+ }
+ widgets[i].Collection, err = consumer.GetOne_tx(ctx, tx, "widget", w.Id, "widgetDisplay")
+ if err != nil {
+ return widgets, err
+ }
+ }
+ return widgets, nil
+}
+
+func Set_tx(ctx context.Context, tx pgx.Tx, widget types.Widget) error {
+
+ known, err := schema.CheckCreateId_tx(ctx, tx, &widget.Id, "widget", "id")
+ if err != nil {
+ return err
+ }
+
+ if known {
+ if _, err := tx.Exec(ctx, `
+ UPDATE app.widget
+ SET form_id = $1, name = $2, size = $3
+ WHERE id = $4
+ `, widget.FormId, widget.Name, widget.Size, widget.Id); err != nil {
+ return err
+ }
+ } else {
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO app.widget (id,module_id,form_id,name,size)
+ VALUES ($1,$2,$3,$4,$5)
+ `, widget.Id, widget.ModuleId, widget.FormId, widget.Name, widget.Size); err != nil {
+ return err
+ }
+ }
+
+ // set collection
+ if err := consumer.Set_tx(ctx, tx, "widget", widget.Id, "widgetDisplay",
+ []types.CollectionConsumer{widget.Collection}); err != nil {
+
+ return err
+ }
+
+ // set captions
+ return caption.Set_tx(ctx, tx, widget.Id, widget.Captions)
+}
diff --git a/setting/setting.go b/setting/setting.go
deleted file mode 100644
index dbe49e14..00000000
--- a/setting/setting.go
+++ /dev/null
@@ -1,98 +0,0 @@
-package setting
-
-import (
- "errors"
- "fmt"
- "r3/db"
- "r3/types"
-
- "github.com/jackc/pgx/v5"
- "github.com/jackc/pgx/v5/pgtype"
-)
-
-func Get(loginId pgtype.Int8, loginTemplateId pgtype.Int8) (types.Settings, error) {
-
- var s types.Settings
- if (loginId.Valid && loginTemplateId.Valid) || (!loginId.Valid && !loginTemplateId.Valid) {
- return s, errors.New("settings can only be retrieved for either login or login template")
- }
-
- entryId := loginId.Int64
- entryName := "login_id"
-
- if loginTemplateId.Valid {
- entryId = loginTemplateId.Int64
- entryName = "login_template_id"
- }
-
- err := db.Pool.QueryRow(db.Ctx, fmt.Sprintf(`
- SELECT language_code, date_format, sunday_first_dow, font_size, borders_all,
- borders_corner, page_limit, header_captions, spacing, dark, compact,
- hint_update_version, mobile_scroll_form, warn_unsaved, menu_colored,
- pattern, font_family, tab_remember, field_clean
- FROM instance.login_setting
- WHERE %s = $1
- `, entryName), entryId).Scan(&s.LanguageCode, &s.DateFormat, &s.SundayFirstDow,
- &s.FontSize, &s.BordersAll, &s.BordersCorner, &s.PageLimit,
- &s.HeaderCaptions, &s.Spacing, &s.Dark, &s.Compact, &s.HintUpdateVersion,
- &s.MobileScrollForm, &s.WarnUnsaved, &s.MenuColored, &s.Pattern,
- &s.FontFamily, &s.TabRemember, &s.FieldClean)
-
- return s, err
-}
-
-func Set_tx(tx pgx.Tx, loginId pgtype.Int8, loginTemplateId pgtype.Int8, s types.Settings, isNew bool) error {
-
- if (loginId.Valid && loginTemplateId.Valid) || (!loginId.Valid && !loginTemplateId.Valid) {
- return errors.New("settings can only be applied for either login or login template")
- }
-
- var err error
- entryId := loginId.Int64
- entryName := "login_id"
-
- if loginTemplateId.Valid {
- entryId = loginTemplateId.Int64
- entryName = "login_template_id"
- }
-
- if isNew {
- _, err = tx.Exec(db.Ctx, fmt.Sprintf(`
- INSERT INTO instance.login_setting (%s, language_code, date_format,
- sunday_first_dow, font_size, borders_all, borders_corner, page_limit,
- header_captions, spacing, dark, compact, hint_update_version,
- mobile_scroll_form, warn_unsaved, menu_colored, pattern, font_family,
- tab_remember, field_clean)
- VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20)
- `, entryName), entryId, s.LanguageCode, s.DateFormat, s.SundayFirstDow,
- s.FontSize, s.BordersAll, s.BordersCorner, s.PageLimit,
- s.HeaderCaptions, s.Spacing, s.Dark, s.Compact, s.HintUpdateVersion,
- s.MobileScrollForm, s.WarnUnsaved, s.MenuColored, s.Pattern,
- s.FontFamily, s.TabRemember, s.FieldClean)
- } else {
- _, err = tx.Exec(db.Ctx, fmt.Sprintf(`
- UPDATE instance.login_setting
- SET language_code = $1, date_format = $2, sunday_first_dow = $3,
- font_size = $4, borders_all = $5, borders_corner = $6,
- page_limit = $7, header_captions = $8, spacing = $9, dark = $10,
- compact = $11, hint_update_version = $12, mobile_scroll_form = $13,
- warn_unsaved = $14, menu_colored = $15, pattern = $16,
- font_family = $17, tab_remember = $18, field_clean = $19
- WHERE %s = $20
- `, entryName), s.LanguageCode, s.DateFormat, s.SundayFirstDow, s.FontSize, s.BordersAll,
- s.BordersCorner, s.PageLimit, s.HeaderCaptions, s.Spacing, s.Dark,
- s.Compact, s.HintUpdateVersion, s.MobileScrollForm, s.WarnUnsaved,
- s.MenuColored, s.Pattern, s.FontFamily, s.TabRemember, s.FieldClean,
- entryId)
- }
- return err
-}
-
-func SetLanguageCode_tx(tx pgx.Tx, id int64, languageCode string) error {
- _, err := tx.Exec(db.Ctx, `
- UPDATE instance.login_setting
- SET language_code = $1
- WHERE login_id = $2
- `, languageCode, id)
- return err
-}
diff --git a/mail/attach/attach.go b/spooler/mail_attach/mail_attach.go
similarity index 81%
rename from mail/attach/attach.go
rename to spooler/mail_attach/mail_attach.go
index 539fd063..d1722309 100644
--- a/mail/attach/attach.go
+++ b/spooler/mail_attach/mail_attach.go
@@ -1,7 +1,8 @@
-package attach
+package mail_attach
import (
"bytes"
+ "context"
"fmt"
"io"
"os"
@@ -18,7 +19,7 @@ import (
func DoAll() error {
mails := make([]types.Mail, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := db.Pool.Query(context.Background(), `
SELECT id, record_id_wofk, attribute_id
FROM instance.mail_spool
WHERE outgoing = FALSE
@@ -28,6 +29,7 @@ func DoAll() error {
if err != nil {
return err
}
+ defer rows.Close()
for rows.Next() {
var m types.Mail
@@ -37,7 +39,6 @@ func DoAll() error {
}
mails = append(mails, m)
}
- rows.Close()
for _, m := range mails {
if err := do(m); err != nil {
@@ -54,20 +55,18 @@ func do(mail types.Mail) error {
// check validity of record and attributes to attach files to
atr, exists := cache.AttributeIdMap[mail.AttributeId.Bytes]
if !exists {
- return fmt.Errorf("cannot attach file(s) to unknown attribute %s",
- mail.AttributeId.Bytes)
+ return fmt.Errorf("cannot attach file(s) to unknown attribute %s", mail.AttributeId.String())
}
if !schema.IsContentFiles(atr.Content) {
- return fmt.Errorf("cannot attach file(s) to non-file attribute %s",
- mail.AttributeId.Bytes)
+ return fmt.Errorf("cannot attach file(s) to non-file attribute %s", mail.AttributeId.String())
}
// get files from spooler
fileIds := make([]uuid.UUID, 0)
filesMail := make([]types.MailFile, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := db.Pool.Query(context.Background(), `
SELECT file, file_name, file_size
FROM instance.mail_spool_file
WHERE mail_id = $1
@@ -75,6 +74,7 @@ func do(mail types.Mail) error {
if err != nil {
return err
}
+ defer rows.Close()
for rows.Next() {
var f types.MailFile
@@ -90,11 +90,10 @@ func do(mail types.Mail) error {
fileIds = append(fileIds, f.Id)
filesMail = append(filesMail, f)
}
- rows.Close()
// no attachments to process, just delete mail
if len(filesMail) == 0 {
- _, err = db.Pool.Exec(db.Ctx, `
+ _, err = db.Pool.Exec(context.Background(), `
DELETE FROM instance.mail_spool
WHERE id = $1
`, mail.Id)
@@ -126,16 +125,19 @@ func do(mail types.Mail) error {
// store file changes
// update the database only after all files have physically been saved
- tx, err := db.Pool.Begin(db.Ctx)
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutSysTask)
+ defer ctxCanc()
+
+ tx, err := db.Pool.Begin(ctx)
if err != nil {
return err
}
- defer tx.Rollback(db.Ctx)
+ defer tx.Rollback(ctx)
fileIdMapChange := make(map[uuid.UUID]types.DataSetFileChange)
rel, _ := cache.RelationIdMap[atr.RelationId]
for _, f := range filesMail {
- if err := data.FileApplyVersion_tx(db.Ctx, tx, true, atr.Id, rel.Id,
+ if err := data.FileApplyVersion_tx(ctx, tx, true, atr.Id, rel.Id,
f.Id, f.Hash, f.Name, f.Size, 0, []int64{mail.RecordId.Int64}, -1); err != nil {
return err
@@ -147,17 +149,17 @@ func do(mail types.Mail) error {
Version: -1,
}
}
- if err := data.FilesApplyAttributChanges_tx(db.Ctx, tx, mail.RecordId.Int64,
+ if err := data.FilesApplyAttributChanges_tx(ctx, tx, mail.RecordId.Int64,
atr.Id, fileIdMapChange); err != nil {
return err
}
// all done, delete mail
- if _, err := tx.Exec(db.Ctx, `
+ if _, err := tx.Exec(ctx, `
DELETE FROM instance.mail_spool
WHERE id = $1
`, mail.Id); err != nil {
return err
}
- return tx.Commit(db.Ctx)
+ return tx.Commit(ctx)
}
diff --git a/mail/receive/receive.go b/spooler/mail_receive/mail_receive.go
similarity index 72%
rename from mail/receive/receive.go
rename to spooler/mail_receive/mail_receive.go
index a9008b6b..a83ba0a5 100644
--- a/mail/receive/receive.go
+++ b/spooler/mail_receive/mail_receive.go
@@ -1,14 +1,17 @@
-package receive
+package mail_receive
import (
+ "context"
"crypto/tls"
"encoding/base64"
"errors"
"fmt"
"io"
"r3/cache"
+ "r3/config"
"r3/db"
"r3/log"
+ "r3/tools"
"r3/types"
"regexp"
"strings"
@@ -52,6 +55,23 @@ func DoAll() error {
func do(ma types.MailAccount) error {
+ // get OAuth client token if used
+ usesXoauth2 := ma.OauthClientId.Valid
+ if usesXoauth2 {
+ if !config.GetLicenseActive() {
+ return errors.New("no valid license (required for OAuth clients)")
+ }
+ c, err := cache.GetOauthClient(ma.OauthClientId.Int32)
+ if err != nil {
+ return err
+ }
+ ma.Password, err = tools.GetOAuthToken(c.ClientId, c.ClientSecret, c.Tenant, c.TokenUrl, c.Scopes)
+ if err != nil {
+ return err
+ }
+ }
+
+ // start IMAP client
var c *client.Client
var err error
@@ -74,8 +94,14 @@ func do(ma types.MailAccount) error {
}
}
- if err := c.Login(ma.Username, ma.Password); err != nil {
- return err
+ if usesXoauth2 {
+ if err := c.Authenticate(newXoauth2Client(ma.Username, ma.Password)); err != nil {
+ return err
+ }
+ } else {
+ if err := c.Login(ma.Username, ma.Password); err != nil {
+ return err
+ }
}
mbox, err := c.Select(imapFolder, false)
@@ -146,8 +172,7 @@ func do(ma types.MailAccount) error {
return nil
}
-func processMessage(mailAccountId int32, msg *imap.Message,
- section *imap.BodySectionName) error {
+func processMessage(mailAccountId int32, msg *imap.Message, section *imap.BodySectionName) error {
if msg == nil {
return errors.New("server did not return message")
@@ -219,8 +244,7 @@ func processMessage(mailAccountId int32, msg *imap.Message,
if strings.Contains(headerType, "text") {
- // some senders include both HTML and plain text
- // in these cases, we only want the HTML version
+ // some senders include both HTML and plain text - in these cases, we only want the HTML version
if gotHtmlText {
continue
}
@@ -229,7 +253,15 @@ func processMessage(mailAccountId int32, msg *imap.Message,
if err != nil {
return err
}
- body = string(b)
+
+ if headerType == "text/plain" {
+ // replace 2 new lines with a paragraph, 1 new line with a line break
+ body = regexp.MustCompile(`(.*)(\r\n){2,}`).ReplaceAllString(string(b), "$1<p></p>")
+ body = regexp.MustCompile(`(.*)(\n){2,}`).ReplaceAllString(body, "$1<p></p>")
+ body = regexp.MustCompile(`[\r\n]+`).ReplaceAllString(body, "<br />")
+ } else {
+ body = string(b)
+ }
if headerType == "text/html" {
gotHtmlText = true
@@ -255,8 +287,6 @@ func processMessage(mailAccountId int32, msg *imap.Message,
}
case *mail.AttachmentHeader:
-
- // attachment
name, err := h.Filename()
if err != nil {
return err
@@ -267,6 +297,14 @@ func processMessage(mailAccountId int32, msg *imap.Message,
return err
}
+ if name == "" {
+ // a file name is not always given - a common case is Outlook forwarding a message without one
+ contentType, _, err := h.ContentType()
+ if err == nil && contentType == "message/rfc822" {
+ name = "ForwardedMessage.eml"
+ }
+ }
+
files = append(files, types.MailFile{
File: b,
Name: name,
@@ -294,40 +332,53 @@ func processMessage(mailAccountId int32, msg *imap.Message,
}
}
- // store message in spooler
- tx, err := db.Pool.Begin(db.Ctx)
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutSysTask)
+ defer ctxCanc()
+
+ tx, err := db.Pool.Begin(ctx)
if err != nil {
return err
}
+ defer tx.Rollback(ctx)
+ // log to mail traffic log
+ fileList := make([]string, 0)
+ for _, file := range files {
+ fileList = append(fileList, fmt.Sprintf("%s (%dkb)", file.Name, file.Size))
+ }
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO instance.mail_traffic (from_list, to_list, cc_list,
+ subject, date, files, mail_account_id, outgoing)
+ VALUES ($1,$2,$3,$4,$5,$6,$7,FALSE)
+ `, getStringListFromAddress(from), getStringListFromAddress(to), getStringListFromAddress(cc),
+ subject, date.Unix(), fileList, mailAccountId); err != nil {
+
+ return fmt.Errorf("%w, %s", errors.New("failed to store message in traffic log"), err)
+ }
+
+ // store message in spooler
var mailId int64
- if err := tx.QueryRow(db.Ctx, `
+ if err := tx.QueryRow(ctx, `
INSERT INTO instance.mail_spool (from_list, to_list, cc_list,
subject, body, date, mail_account_id, outgoing)
VALUES ($1,$2,$3,$4,$5,$6,$7,FALSE)
RETURNING id
- `, getStringListFromAddress(from),
- getStringListFromAddress(to),
- getStringListFromAddress(cc),
+ `, getStringListFromAddress(from), getStringListFromAddress(to), getStringListFromAddress(cc),
subject, body, date.Unix(), mailAccountId).Scan(&mailId); err != nil {
- tx.Rollback(db.Ctx)
return fmt.Errorf("%w, %s", errors.New("failed to store message in spooler"), err)
}
// add attachments to spooler
for i, file := range files {
- if _, err := tx.Exec(db.Ctx, `
- INSERT INTO instance.mail_spool_file (
- mail_id, position, file, file_name, file_size)
+ if _, err := tx.Exec(ctx, `
+ INSERT INTO instance.mail_spool_file (mail_id, position, file, file_name, file_size)
VALUES ($1,$2,$3,$4,$5)
`, mailId, i, file.File, file.Name, file.Size); err != nil {
-
- tx.Rollback(db.Ctx)
return fmt.Errorf("%w, %s", errors.New("failed to store message attachment in spooler"), err)
}
}
- return tx.Commit(db.Ctx)
+ return tx.Commit(ctx)
}
// helpers
diff --git a/spooler/mail_receive/mail_receive_xoauth2.go b/spooler/mail_receive/mail_receive_xoauth2.go
new file mode 100644
index 00000000..5f28afae
--- /dev/null
+++ b/spooler/mail_receive/mail_receive_xoauth2.go
@@ -0,0 +1,39 @@
+package mail_receive
+
+import (
+ "encoding/json"
+ "fmt"
+
+ "github.com/emersion/go-sasl"
+)
+
+// implement XOAUTH2 separately as go-imap removed support for it
+type xoauth2Client struct {
+ Username string
+ Token string
+}
+type Xoauth2Error struct {
+ Status string `json:"status"`
+ Schemes string `json:"schemes"`
+ Scope string `json:"scope"`
+}
+
+func (a *xoauth2Client) Start() (mech string, ir []byte, err error) {
+ mech = "XOAUTH2"
+ ir = []byte("user=" + a.Username + "\x01auth=Bearer " + a.Token + "\x01\x01")
+ return
+}
+func (a *xoauth2Client) Next(challenge []byte) ([]byte, error) {
+ xoauth2Err := &Xoauth2Error{}
+ if err := json.Unmarshal(challenge, xoauth2Err); err != nil {
+ return nil, err
+ } else {
+ return nil, xoauth2Err
+ }
+}
+func (err *Xoauth2Error) Error() string {
+ return fmt.Sprintf("XOAUTH2 authentication error (%v)", err.Status)
+}
+func newXoauth2Client(username, token string) sasl.Client {
+ return &xoauth2Client{username, token}
+}
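
The XOAUTH2 SASL mechanism was dropped from go-imap, so the initial client response is rebuilt by hand here. A minimal sketch of the wire format produced by Start(), with placeholder credentials:

package main

import "fmt"

func main() {
	username := "user@example.com" // placeholder
	token := "eyJ0eXAi..."         // placeholder access token
	// key/value pairs are separated by the control character \x01, terminated by two of them
	ir := []byte("user=" + username + "\x01auth=Bearer " + token + "\x01\x01")
	fmt.Printf("%q\n", ir)
}
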
diff --git a/mail/send/send.go b/spooler/mail_send/mail_send.go
similarity index 50%
rename from mail/send/send.go
rename to spooler/mail_send/mail_send.go
index 96718e92..e747dbf2 100644
--- a/mail/send/send.go
+++ b/spooler/mail_send/mail_send.go
@@ -1,10 +1,13 @@
-package send
+package mail_send
import (
+ "context"
"crypto/tls"
+ "errors"
"fmt"
- "net/smtp"
+ "os"
"r3/cache"
+ "r3/config"
"r3/data"
"r3/db"
"r3/log"
@@ -13,7 +16,7 @@ import (
"r3/types"
"strings"
- "github.com/jordan-wright/email"
+ "github.com/wneessen/go-mail"
)
var (
@@ -31,7 +34,7 @@ func DoAll() error {
now := tools.GetTimeUnix()
mails := make([]types.Mail, 0)
- rows, err := db.Pool.Query(db.Ctx, `
+ rows, err := db.Pool.Query(context.Background(), `
SELECT id, to_list, cc_list, bcc_list, subject, body, attempt_count,
mail_account_id, record_id_wofk, attribute_id
FROM instance.mail_spool
@@ -42,6 +45,7 @@ func DoAll() error {
if err != nil {
return err
}
+ defer rows.Close()
for rows.Next() {
var m types.Mail
@@ -54,7 +58,6 @@ func DoAll() error {
}
mails = append(mails, m)
}
- rows.Close()
log.Info("mail", fmt.Sprintf("found %d messages to be sent", len(mails)))
@@ -66,7 +69,7 @@ func DoAll() error {
log.Error("mail", fmt.Sprintf("is unable to send (attempt %d)",
m.AttemptCount+1), err)
- if _, err := db.Pool.Exec(db.Ctx, `
+ if _, err := db.Pool.Exec(context.Background(), `
UPDATE instance.mail_spool
SET attempt_count = $1, attempt_date = $2
WHERE id = $3
@@ -79,7 +82,7 @@ func DoAll() error {
// everything went well, delete spool entry
log.Info("mail", "successfully sent message")
- if _, err := db.Pool.Exec(db.Ctx, `
+ if _, err := db.Pool.Exec(context.Background(), `
DELETE FROM instance.mail_spool
WHERE id = $1
`, m.Id); err != nil {
@@ -106,44 +109,67 @@ func do(m types.Mail) error {
return err
}
+ // get OAuth client token if used
+ if ma.OauthClientId.Valid {
+ if !config.GetLicenseActive() {
+ return errors.New("no valid license (required for OAuth clients)")
+ }
+ c, err := cache.GetOauthClient(ma.OauthClientId.Int32)
+ if err != nil {
+ return err
+ }
+ ma.Password, err = tools.GetOAuthToken(c.ClientId, c.ClientSecret, c.Tenant, c.TokenUrl, c.Scopes)
+ if err != nil {
+ return err
+ }
+ }
+
// build mail
- e := email.NewEmail()
- e.From = ma.SendAs
+ msg := mail.NewMsg()
+ msg.Subject(m.Subject)
+
+ if err := msg.From(ma.SendAs); err != nil {
+ return err
+ }
if m.ToList != "" {
- e.To = strings.Split(m.ToList, ",")
+ if err := msg.To(strings.Split(m.ToList, ",")...); err != nil {
+ return err
+ }
}
if m.CcList != "" {
- e.Cc = strings.Split(m.CcList, ",")
+ if err := msg.Cc(strings.Split(m.CcList, ",")...); err != nil {
+ return err
+ }
}
if m.BccList != "" {
- e.Bcc = strings.Split(m.BccList, ",")
+ if err := msg.Bcc(strings.Split(m.BccList, ",")...); err != nil {
+ return err
+ }
}
- e.Subject = m.Subject
-
// dirty trick to assume body content by looking for beginning of HTML tag
// we should find a way to store our preference when sending mails
if strings.Contains(m.Body, "<") {
- e.HTML = []byte(m.Body)
+ msg.SetBodyString(mail.TypeTextHTML, m.Body)
} else {
- e.Text = []byte(m.Body)
+ msg.SetBodyString(mail.TypeTextPlain, m.Body)
}
// parse attachments from file attribute, if set
+ fileList := make([]string, 0)
if m.RecordId.Valid && m.AttributeId.Valid {
atr, exists := cache.AttributeIdMap[m.AttributeId.Bytes]
if !exists {
- return fmt.Errorf("cannot attach file(s) from unknown attribute %s",
- m.AttributeId.Bytes)
+ return fmt.Errorf("cannot attach file(s) from unknown attribute %s", m.AttributeId.String())
}
if !schema.IsContentFiles(atr.Content) {
- return fmt.Errorf("cannot attach file(s) from non-file attribute %s",
- m.AttributeId.Bytes)
+ return fmt.Errorf("cannot attach file(s) from non-file attribute %s", m.AttributeId.String())
}
+ files := make([]types.DataGetValueFile, 0)
- rows, err := db.Pool.Query(db.Ctx, fmt.Sprintf(`
+ rows, err := db.Pool.Query(context.Background(), fmt.Sprintf(`
SELECT r.file_id, r.name, (
SELECT MAX(v.version)
FROM instance.file_version AS v
@@ -151,12 +177,13 @@ func do(m types.Mail) error {
)
FROM instance_file."%s" AS r
WHERE r.record_id = $1
+ AND r.date_delete IS NULL
`, schema.GetFilesTableName(atr.Id)), m.RecordId.Int64)
if err != nil {
return err
}
- files := make([]types.DataGetValueFile, 0)
+ defer rows.Close()
for rows.Next() {
var f types.DataGetValueFile
@@ -165,40 +192,81 @@ func do(m types.Mail) error {
}
files = append(files, f)
}
- rows.Close()
for _, f := range files {
filePath := data.GetFilePathVersion(f.Id, f.Version)
-
- exists, err = tools.Exists(filePath)
+ fileInfo, err := os.Stat(filePath)
if err != nil {
+ if os.IsNotExist(err) {
+ log.Error("mail", "could not attach file to message",
+ fmt.Errorf("'%s' does not exist, ignoring it", filePath))
+
+ continue
+ }
return err
}
- if !exists {
- log.Warning("mail", "could not attach file to message",
- fmt.Errorf("'%s' does not exist, ignoring it", filePath))
- continue
- }
+ fileList = append(fileList, fmt.Sprintf("%s (%dkb)", f.Name, fileInfo.Size()/1024))
- att, err := e.AttachFile(filePath)
- if err != nil {
- return err
- }
- att.Filename = f.Name
+ msg.AttachFile(filePath, getAttachedFileWithName(f.Name))
}
}
+ // send mail
log.Info("mail", fmt.Sprintf("sending message (%d attachments)",
- len(e.Attachments)))
+ len(msg.GetAttachments())))
+
+ client, err := mail.NewClient(ma.HostName, mail.WithPort(int(ma.HostPort)),
+ mail.WithUsername(ma.Username),
+ mail.WithPassword(ma.Password),
+ mail.WithTLSConfig(&tls.Config{ServerName: ma.HostName}))
+
+ if err != nil {
+ return err
+ }
- // send mail with SMTP
- auth := smtp.PlainAuth("", ma.Username, ma.Password, ma.HostName)
+ // use SSL if STARTTLS is disabled - otherwise STARTTLS is attempted
+ client.SetSSL(!ma.StartTls)
+
+ // apply authentication method
+ switch ma.AuthMethod {
+ case "login":
+ client.SetSMTPAuth(mail.SMTPAuthLogin)
+ case "plain":
+ client.SetSMTPAuth(mail.SMTPAuthPlain)
+ case "xoauth2":
+ client.SetSMTPAuth(mail.SMTPAuthXOAUTH2)
+ default:
+ return fmt.Errorf("unsupported authentication method '%s'", ma.AuthMethod)
+ }
- if ma.StartTls {
- return e.Send(fmt.Sprintf("%s:%d", ma.HostName, ma.HostPort), auth)
+ // send message
+ if err := client.DialWithContext(context.Background()); err != nil {
+ return err
+ }
+ if err := client.Send(msg); err != nil {
+ return err
}
+ if err := client.Close(); err != nil {
+ // some mail services do not cleanly close their connections
+ // as the message was already sent successfully this is not critical - still warn, as it is not correct behavior
+ log.Warning("mail", "failed to disconnect from SMTP server", err)
+ }
+
+ // add to mail traffic log
+ _, err = db.Pool.Exec(context.Background(), `
+ INSERT INTO instance.mail_traffic (from_list, to_list, cc_list,
+ subject, date, files, mail_account_id, outgoing)
+ VALUES ($1,$2,$3,$4,$5,$6,$7,TRUE)
+ `, m.FromList, m.ToList, m.CcList, m.Subject,
+ tools.GetTimeUnix(), fileList, m.AccountId)
- return e.SendWithTLS(fmt.Sprintf("%s:%d", ma.HostName, ma.HostPort), auth,
- &tls.Config{ServerName: ma.HostName})
+ return err
+}
+
+// helper
+func getAttachedFileWithName(n string) mail.FileOption {
+ return func(f *mail.File) {
+ f.Name = n
+ }
}
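
go-mail attaches files by path, so the original user-facing file name has to be restored through a mail.FileOption. A minimal sketch of that helper outside the spooler, assuming github.com/wneessen/go-mail:

package main

import "github.com/wneessen/go-mail"

// attachWithName mirrors getAttachedFileWithName: it keeps the stored file path
// but presents the attachment under its original name
func attachWithName(msg *mail.Msg, path, displayName string) {
	msg.AttachFile(path, func(f *mail.File) {
		f.Name = displayName
	})
}

func main() {
	msg := mail.NewMsg()
	attachWithName(msg, "/tmp/0f3c9a7e.bin", "invoice.pdf") // placeholder path and name
	_ = msg
}
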
diff --git a/spooler/mail_send/mail_send_o365login.go b/spooler/mail_send/mail_send_o365login.go
new file mode 100644
index 00000000..d85b477c
--- /dev/null
+++ b/spooler/mail_send/mail_send_o365login.go
@@ -0,0 +1,32 @@
+package mail_send
+
+import (
+ "errors"
+ "net/smtp"
+)
+
+// legacy O365 SMTP login auth
+type loginAuthSimple struct {
+ username string
+ password string
+}
+
+func o365LoginAuth(username, password string) smtp.Auth {
+ return &loginAuthSimple{username, password}
+}
+func (a *loginAuthSimple) Start(server *smtp.ServerInfo) (string, []byte, error) {
+ return "LOGIN", []byte{}, nil
+}
+func (a *loginAuthSimple) Next(fromServer []byte, more bool) ([]byte, error) {
+ if more {
+ switch string(fromServer) {
+ case "Username:":
+ return []byte(a.username), nil
+ case "Password:":
+ return []byte(a.password), nil
+ default:
+ return nil, errors.New("Unknown fromServer")
+ }
+ }
+ return nil, nil
+}
diff --git a/spooler/rest_send/rest_send.go b/spooler/rest_send/rest_send.go
new file mode 100644
index 00000000..16aa44af
--- /dev/null
+++ b/spooler/rest_send/rest_send.go
@@ -0,0 +1,157 @@
+// for executing REST calls from instance spooler
+
+package rest_send
+
+import (
+ "context"
+ "fmt"
+ "io"
+ "net/http"
+ "r3/cache"
+ "r3/config"
+ "r3/db"
+ "r3/log"
+ "strings"
+
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5/pgtype"
+)
+
+var (
+ attemptsAllow = 5 // how many attempts for each REST call before quitting
+ callLimit = 100 // how many REST calls to execute per loop
+)
+
+type restCall struct {
+ id uuid.UUID
+ pgFunctionIdCallback pgtype.UUID
+ method string
+ headers map[string]string
+ url string
+ body pgtype.Text
+ callbackValue pgtype.Text
+ skipVerify bool
+}
+
+func DoAll() error {
+ for {
+ anySuccess := false
+
+ // collect spooled REST calls
+ rows, err := db.Pool.Query(context.Background(), `
+ SELECT id, pg_function_id_callback, method, headers,
+ url, body, callback_value, skip_verify
+ FROM instance.rest_spool
+ WHERE attempt_count < $1
+ ORDER BY date_added ASC
+ LIMIT $2
+ `, attemptsAllow, callLimit)
+ if err != nil {
+ return err
+ }
+ defer rows.Close()
+
+ calls := make([]restCall, 0)
+ for rows.Next() {
+ var c restCall
+ if err := rows.Scan(&c.id, &c.pgFunctionIdCallback, &c.method, &c.headers,
+ &c.url, &c.body, &c.callbackValue, &c.skipVerify); err != nil {
+
+ return err
+ }
+ calls = append(calls, c)
+ }
+ rows.Close()
+
+ for _, c := range calls {
+ if err := callExecute(c); err != nil {
+ log.Error("api", fmt.Sprintf("failed to execute REST call %s '%s'", c.method, c.url), err)
+
+ _, err := db.Pool.Exec(context.Background(), `
+ UPDATE instance.rest_spool
+ SET attempt_count = attempt_count + 1
+ WHERE id = $1
+ `, c.id)
+
+ if err != nil {
+ log.Error("api", "failed to update call attempt count", err)
+ }
+ continue
+ }
+ anySuccess = true
+ }
+
+ // exit if limit is not reached or no call was successful
+ if len(calls) < callLimit || !anySuccess {
+ break
+ }
+ }
+ return nil
+}
+
+func callExecute(c restCall) error {
+ log.Info("api", fmt.Sprintf("is calling %s '%s'", c.method, c.url))
+
+ httpReq, err := http.NewRequest(c.method, c.url, strings.NewReader(c.body.String))
+ if err != nil {
+ return fmt.Errorf("could not prepare request, %s", err)
+ }
+
+ httpReq.Header.Set("User-Agent", "r3-application")
+ for k, v := range c.headers {
+ httpReq.Header.Set(k, v)
+ }
+
+ httpClient, err := config.GetHttpClient(c.skipVerify, 30)
+ if err != nil {
+ return err
+ }
+
+ httpRes, err := httpClient.Do(httpReq)
+ if err != nil {
+ return err
+ }
+ defer httpRes.Body.Close()
+
+ // successfully executed
+ // execute callback if enabled
+ ctx, ctxCanc := context.WithTimeout(context.Background(), db.CtxDefTimeoutPgFunc)
+ defer ctxCanc()
+
+ tx, err := db.Pool.Begin(ctx)
+ if err != nil {
+ return err
+ }
+ defer tx.Rollback(ctx)
+
+ if c.pgFunctionIdCallback.Valid {
+ bodyRaw, err := io.ReadAll(httpRes.Body)
+ if err != nil {
+ return fmt.Errorf("could not read response body, %s", err)
+ }
+
+ fnc, exists := cache.PgFunctionIdMap[c.pgFunctionIdCallback.Bytes]
+ if !exists {
+ return fmt.Errorf("unknown function '%s'", c.pgFunctionIdCallback.String())
+ }
+ mod, exists := cache.ModuleIdMap[fnc.ModuleId]
+ if !exists {
+ return fmt.Errorf("unknown module '%s'", fnc.ModuleId)
+ }
+
+ if _, err := tx.Exec(ctx, fmt.Sprintf(`SELECT "%s"."%s"($1,$2,$3)`,
+ mod.Name, fnc.Name), httpRes.StatusCode, bodyRaw, c.callbackValue); err != nil {
+
+ return err
+ }
+ }
+
+ // delete REST call from spooler
+ if _, err := tx.Exec(ctx, `
+ DELETE FROM instance.rest_spool
+ WHERE id = $1
+ `, c.id); err != nil {
+ return err
+ }
+ return tx.Commit(ctx)
+}
diff --git a/task/task.go b/task/task.go
deleted file mode 100644
index 3e98fc45..00000000
--- a/task/task.go
+++ /dev/null
@@ -1,31 +0,0 @@
-package task
-
-import (
- "fmt"
- "r3/db"
-
- "github.com/jackc/pgx/v5"
-)
-
-func Set_tx(tx pgx.Tx, name string, interval int64, active bool) error {
- var activeOnly bool
-
- if err := tx.QueryRow(db.Ctx, `
- SELECT active_only
- FROM instance.task
- WHERE name = $1
- `, name).Scan(&activeOnly); err != nil {
- return err
- }
-
- if activeOnly && !active {
- return fmt.Errorf("cannot disable active-only task")
- }
-
- _, err := tx.Exec(db.Ctx, `
- UPDATE instance.task
- SET interval_seconds = $1, active = $2
- WHERE name = $3
- `, interval, active, name)
- return err
-}
diff --git a/tools/compress/compress.go b/tools/compress/compress.go
new file mode 100644
index 00000000..487fb389
--- /dev/null
+++ b/tools/compress/compress.go
@@ -0,0 +1,54 @@
+package compress
+
+import (
+ "archive/zip"
+ "io"
+ "os"
+ "path/filepath"
+ "strings"
+)
+
+func Path(zipPath string, sourcePath string) error {
+
+ zipFile, err := os.Create(zipPath)
+ if err != nil {
+ return err
+ }
+ defer zipFile.Close()
+
+ zipWriter := zip.NewWriter(zipFile)
+ defer zipWriter.Close()
+
+ return filepath.Walk(sourcePath, func(pathWalked string, info os.FileInfo, err error) error {
+ if err != nil {
+ return err
+ }
+
+ // directories do not need to be created in zip files
+ if info.IsDir() {
+ return nil
+ }
+
+ // ignore non-regular files (symbolic links, devices, named pipes, sockets, ...)
+ if !info.Mode().IsRegular() {
+ return nil
+ }
+
+ fileWalked, err := os.Open(pathWalked)
+ if err != nil {
+ return err
+ }
+ defer fileWalked.Close()
+
+ // trim source directory from file path
+ pathWalkedRel := strings.TrimPrefix(pathWalked, filepath.Dir(sourcePath)+string(os.PathSeparator))
+
+ zipFileWriter, err := zipWriter.Create(pathWalkedRel)
+ if err != nil {
+ return err
+ }
+
+ _, err = io.Copy(zipFileWriter, fileWalked)
+ return err
+ })
+}
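
compress.Path zips a whole directory tree into a single archive while skipping directories and non-regular files. A minimal usage sketch, assuming the r3 module path and placeholder paths:

package main

import (
	"log"

	"r3/tools/compress"
)

func main() {
	// writes /tmp/export.zip containing every regular file below /tmp/transfer/module
	if err := compress.Path("/tmp/export.zip", "/tmp/transfer/module"); err != nil {
		log.Fatal(err)
	}
}
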
diff --git a/tools/files.go b/tools/files.go
index c1423043..c0b81333 100644
--- a/tools/files.go
+++ b/tools/files.go
@@ -14,6 +14,18 @@ import (
"github.com/h2non/filetype"
)
+func GetFileContents(filePath string, removeUtf8Bom bool) ([]byte, error) {
+
+ output, err := os.ReadFile(filePath)
+ if err != nil {
+ return []byte("{}"), err
+ }
+ if removeUtf8Bom {
+ output = RemoveUtf8Bom(output)
+ }
+ return output, nil
+}
+
func GetFileHash(filePath string) (string, error) {
file, err := os.Open(filePath)
if err != nil {
@@ -87,7 +99,7 @@ func FileMove(src string, dst string, copyModTime bool) error {
if err := os.Remove(src); err != nil {
// source file could not be deleted, delete copied file
- // this is done to not keep an unconsistent state
+ // this is done to not keep an inconsistent state
if err := os.Remove(dst); err != nil {
return err
}
@@ -140,20 +152,6 @@ func FileCopy(src string, dst string, copyModTime bool) error {
return nil
}
-// set file to read only in file system
-func FileSetRead(filePath string) error {
-
- // set read permissions for owner only
- // windows only supports owner bit, for linux owner is fine as files are only supposed to be accessed by owner
- return os.Chmod(filePath, 0400)
-}
-func FileSetWrite(filePath string) error {
-
- // set write permissions for owner only
- // windows only supports owner bit, for linux owner is fine as files are only supposed to be accessed by owner
- return os.Chmod(filePath, 0600)
-}
-
func Exists(path string) (bool, error) {
_, err := os.Stat(path)
if err == nil {
@@ -175,29 +173,3 @@ func PathCreateIfNotExists(path string, perm fs.FileMode) error {
}
return os.Mkdir(path, perm)
}
-
-func IsEmpty(path string) (bool, error) {
- f, err := os.Open(path)
- if err != nil {
- return false, err
- }
- defer f.Close()
-
- _, err = f.Readdirnames(1)
- if err == io.EOF {
- return true, nil
- }
- return false, err
-}
-
-func RemoveIfExists(path string) error {
- exists, err := Exists(path)
- if err != nil {
- return err
- }
-
- if !exists {
- return nil
- }
- return os.Remove(path)
-}
diff --git a/tools/oauth.go b/tools/oauth.go
new file mode 100644
index 00000000..1bbb42da
--- /dev/null
+++ b/tools/oauth.go
@@ -0,0 +1,21 @@
+package tools
+
+import (
+ "context"
+
+ "golang.org/x/oauth2/clientcredentials"
+)
+
+func GetOAuthToken(clientId string, clientSecret string, tenant string, tokenUrl string, scopes []string) (string, error) {
+ conf := clientcredentials.Config{
+ ClientID: clientId,
+ ClientSecret: clientSecret,
+ TokenURL: tokenUrl,
+ Scopes: scopes,
+ }
+ token, err := conf.Token(context.TODO())
+ if err != nil {
+ return "", err
+ }
+ return token.AccessToken, nil
+}
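
GetOAuthToken wraps the OAuth2 client-credentials flow used by the mail spoolers. A minimal sketch with placeholder values for a Microsoft-365-style token endpoint; note the tenant parameter is accepted but not referenced by the clientcredentials config above:

package main

import (
	"fmt"
	"log"

	"r3/tools"
)

func main() {
	// all values are placeholders
	token, err := tools.GetOAuthToken(
		"client-id",
		"client-secret",
		"tenant-id",
		"https://login.microsoftonline.com/tenant-id/oauth2/v2.0/token",
		[]string{"https://outlook.office365.com/.default"},
	)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("token length:", len(token)) // avoid logging the raw token
}
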
diff --git a/tools/queryBuilder.go b/tools/queryBuilder.go
index 63c33c50..3c86e539 100644
--- a/tools/queryBuilder.go
+++ b/tools/queryBuilder.go
@@ -55,15 +55,14 @@ func (qb *QueryBuilder) AddPara(name string, value interface{}) {
qb.cParas[name] = value
}
-func (qb *QueryBuilder) Set(component string, value interface{}) {
- switch component {
- case "FROM":
- qb.cFrom = value.(string)
- case "LIMIT":
- qb.cLimit = value.(int)
- case "OFFSET":
- qb.cOffset = value.(int)
- }
+func (qb *QueryBuilder) SetFrom(value string) {
+ qb.cFrom = value
+}
+func (qb *QueryBuilder) SetOffset(value int) {
+ qb.cOffset = value
+}
+func (qb *QueryBuilder) SetLimit(value int) {
+ qb.cLimit = value
}
func (qb *QueryBuilder) Reset(component string) {
switch component {
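
The interface{}-based Set(component, value) call and its runtime type assertions give way to typed setters. A minimal sketch of the new calls, assuming the r3 module path:

package example

import "r3/tools"

// applyPaging shows the typed setters that replace qb.Set("FROM"/"LIMIT"/"OFFSET", ...)
func applyPaging(qb *tools.QueryBuilder, limit, offset int) {
	qb.SetFrom(`app.relation`) // placeholder FROM target
	qb.SetLimit(limit)
	qb.SetOffset(offset)
}
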
diff --git a/tools/time.go b/tools/time.go
new file mode 100644
index 00000000..16c206b5
--- /dev/null
+++ b/tools/time.go
@@ -0,0 +1,21 @@
+package tools
+
+import "time"
+
+func GetTimeUnix() int64 {
+ return time.Now().UTC().Unix()
+}
+func GetTimeUnixMilli() int64 {
+ return time.Now().UTC().UnixNano() / int64(time.Millisecond)
+}
+func GetTimeSql() string {
+ // Go's reference time 2006-01-02 15:04:05 must be used to define the format
+ return time.Now().UTC().Format("2006-01-02 15:04:05")
+}
+func GetTimeFromSql(sqlTime string) (time.Time, error) {
+ t, err := time.Parse("2006-01-02 15:04:05", sqlTime)
+ if err != nil {
+ return t, err
+ }
+ return t, nil
+}
diff --git a/tools/tools.go b/tools/tools.go
index e52e6d2b..aa7787b2 100644
--- a/tools/tools.go
+++ b/tools/tools.go
@@ -2,15 +2,10 @@ package tools
import (
"bytes"
- "io/ioutil"
"math/rand"
- "os"
"strconv"
"strings"
"time"
-
- "github.com/gofrs/uuid"
- "github.com/jackc/pgx/v5/pgtype"
)
func init() {
@@ -18,63 +13,18 @@ func init() {
rand.Seed(time.Now().UnixNano())
}
-func GetTimeUnix() int64 {
- return time.Now().UTC().Unix()
-}
-func GetTimeUnixMilli() int64 {
- return time.Now().UTC().UnixNano() / int64(time.Millisecond)
-}
-func GetTimeSql() string {
- // 2006-01-02 15:04:05 has to be used to recognize format!
- return time.Now().UTC().Format("2006-01-02 15:04:05")
-}
-func GetTimeFromSql(sqlTime string) (time.Time, error) {
- t, err := time.Parse("2006-01-02 15:04:05", sqlTime)
- if err != nil {
- return t, err
- }
- return t, nil
-}
-
-func CheckCreateDir(dir string) error {
- exists, err := Exists(dir)
-
- if err != nil {
- return err
- }
-
- if !exists {
- if err := os.MkdirAll(dir, os.FileMode(0770)); err != nil {
- return err
+func Substring(s string, start, end int) string {
+ ctr, index0 := 0, 0
+ for index1 := range s {
+ if ctr == start {
+ index0 = index1
}
- }
- return nil
-}
-func CheckCreateFile(file string, templateFile string) error {
- exists, err := Exists(file)
-
- if err != nil {
- return err
- }
-
- if !exists {
- if err := FileCopy(templateFile, file, false); err != nil {
- return err
+ if ctr == end {
+ return s[index0:index1]
}
+ ctr++
}
- return nil
-}
-
-func GetFileContents(filePath string, removeUtf8Bom bool) ([]byte, error) {
-
- output, err := ioutil.ReadFile(filePath)
- if err != nil {
- return []byte("{}"), err
- }
- if removeUtf8Bom {
- output = RemoveUtf8Bom(output)
- }
- return output, nil
+ return s[index0:]
}
func RemoveUtf8Bom(input []byte) []byte {
@@ -100,59 +50,6 @@ func StringListToUInt64Array(input string) ([]uint64, error) {
return output, nil
}
-func StringInSlice(needle string, haystack []string) bool {
- for _, value := range haystack {
- if value == needle {
- return true
- }
- }
- return false
-}
-
-func IntInSlice(needle int, haystack []int) bool {
- for _, value := range haystack {
- if value == needle {
- return true
- }
- }
- return false
-}
-
-func Int64InSlice(needle int64, haystack []int64) bool {
- for _, value := range haystack {
- if value == needle {
- return true
- }
- }
- return false
-}
-
-func Uint64InSlice(needle uint64, haystack []uint64) bool {
- for _, value := range haystack {
- if value == needle {
- return true
- }
- }
- return false
-}
-
-func UuidInSlice(needle uuid.UUID, haystack []uuid.UUID) bool {
- for _, value := range haystack {
- if value == needle {
- return true
- }
- }
- return false
-}
-
-func UuidStringToNullUuid(input string) pgtype.UUID {
- id, err := uuid.FromString(input)
- return pgtype.UUID{
- Bytes: id,
- Valid: err == nil,
- }
-}
-
func RandStringRunes(n int) string {
var letterRunes = []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")
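
The `Substring` helper added to `tools.go` indexes by runes rather than bytes, so multi-byte UTF-8 characters are never split; `end` is exclusive, and an `end` beyond the last rune returns the remainder of the string. A short usage sketch:

```go
package main

import (
	"fmt"

	"r3/tools"
)

func main() {
	s := "héllo wörld" // 11 runes, 13 bytes

	// indexes count runes, so the 2-byte 'é' and 'ö' stay intact
	fmt.Println(tools.Substring(s, 0, 5))  // héllo
	fmt.Println(tools.Substring(s, 6, 99)) // wörld (end past the last rune)

	// naive byte slicing could cut into the multi-byte characters
	fmt.Println(len(s), len([]rune(s))) // 13 11
}
```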
diff --git a/transfer/transfer.go b/transfer/transfer.go
index 49687f24..a28a1312 100644
--- a/transfer/transfer.go
+++ b/transfer/transfer.go
@@ -5,6 +5,7 @@ package transfer
import (
"archive/zip"
+ "context"
"crypto"
"crypto/rsa"
"crypto/sha256"
@@ -19,11 +20,9 @@ import (
"path/filepath"
"r3/cache"
"r3/config"
- "r3/db"
- "r3/module_option"
+ "r3/config/module_meta"
"r3/tools"
"r3/types"
- "strconv"
"sync"
"github.com/gofrs/uuid"
@@ -39,7 +38,7 @@ func StoreExportKey(key string) {
exportKey = key
}
-func AddVersion_tx(tx pgx.Tx, moduleId uuid.UUID) error {
+func AddVersion_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID) error {
cache.Schema_mx.RLock()
defer cache.Schema_mx.RUnlock()
@@ -51,14 +50,8 @@ func AddVersion_tx(tx pgx.Tx, moduleId uuid.UUID) error {
return errors.New("module does not exist")
}
- _, _, appBuild, _ := config.GetAppVersions()
- appBuildInt, err := strconv.Atoi(appBuild)
- if err != nil {
- return err
- }
-
// update version info
- file.Content.Module.ReleaseBuildApp = appBuildInt
+ file.Content.Module.ReleaseBuildApp = config.GetAppVersion().Build
file.Content.Module.ReleaseBuild = file.Content.Module.ReleaseBuild + 1
file.Content.Module.ReleaseDate = tools.GetTimeUnix()
@@ -68,27 +61,25 @@ func AddVersion_tx(tx pgx.Tx, moduleId uuid.UUID) error {
return err
}
- if err := module_option.SetHashById_tx(tx, moduleId, hashedStr); err != nil {
+ if err := module_meta.SetHash_tx(ctx, tx, moduleId, hashedStr); err != nil {
return err
}
- if _, err := tx.Exec(db.Ctx, `
+ _, err = tx.Exec(ctx, `
UPDATE app.module
SET release_build_app = $1, release_build = $2,
release_date = $3
WHERE id = $4
`, file.Content.Module.ReleaseBuildApp,
file.Content.Module.ReleaseBuild,
- file.Content.Module.ReleaseDate, moduleId); err != nil {
+ file.Content.Module.ReleaseDate, moduleId)
- return err
- }
- return nil
+ return err
}
-// start with 1 module and check whether it or its dependend upon modules had changed
+// start with 1 module and check whether it or the modules it depends upon have changed
// returns map of module IDs, changed yes/no
-func GetModuleChangedWithDependencies(moduleId uuid.UUID) (map[uuid.UUID]bool, error) {
+func GetModuleChangedWithDependencies_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID) (map[uuid.UUID]bool, error) {
cache.Schema_mx.RLock()
defer cache.Schema_mx.RUnlock()
@@ -111,14 +102,13 @@ func GetModuleChangedWithDependencies(moduleId uuid.UUID) (map[uuid.UUID]bool, e
var file types.TransferFile
file.Content.Module = module
- moduleIdMapChecked[id], err = hasModuleChanged(file)
+ moduleIdMapChecked[id], err = hasModuleChanged_tx(ctx, tx, file)
if err != nil {
return err
}
// check dependencies
for _, moduleIdDependsOn := range module.DependsOn {
-
if err := checkRecursive(moduleIdDependsOn, moduleIdMapChecked); err != nil {
return err
}
@@ -133,24 +123,18 @@ func GetModuleChangedWithDependencies(moduleId uuid.UUID) (map[uuid.UUID]bool, e
}
// verifies that the importing module matches the running application build
-func verifyCompatibilityWithApp(moduleId uuid.UUID, releaseBuildApp int) error {
-
- _, _, appBuild, _ := config.GetAppVersions()
- appBuildInt, err := strconv.Atoi(appBuild)
- if err != nil {
- return err
- }
+func verifyCompatibilityWithApp(releaseBuildApp int) error {
- if appBuildInt < releaseBuildApp {
+ if config.GetAppVersion().Build < releaseBuildApp {
return fmt.Errorf("module was released for application version %d (current version %d)",
- releaseBuildApp, appBuildInt)
+ releaseBuildApp, config.GetAppVersion().Build)
}
return nil
}
// verifies that the raw content of JSON file matches given signature
-// verifiy raw content, as target JSON might have different structure (new elements due to schema change)
-// returns error if verification failes, also module hash
+// verify raw content, as target JSON might have different structure (new elements due to schema change)
+// returns error if verification fails, also module hash
func verifyContent(jsonFileData *[]byte) ([32]byte, error) {
var hashed [32]byte
@@ -284,14 +268,15 @@ func writeFilesToZip(zipPath string, filePaths []string) error {
}
// returns whether the module inside the given transfer file has changed
-// checked against the stored module hash from the last module version change
-func hasModuleChanged(file types.TransferFile) (bool, error) {
+//
+// checked against the stored module hash from the last module version change
+func hasModuleChanged_tx(ctx context.Context, tx pgx.Tx, file types.TransferFile) (bool, error) {
hashedStr, err := getModuleHashFromFile(file)
if err != nil {
return false, err
}
- hashedStrEx, err := module_option.GetHashById(file.Content.Module.Id)
+ hashedStrEx, err := module_meta.GetHash_tx(ctx, tx, file.Content.Module.Id)
if err != nil {
return false, err
}
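
Change detection in `AddVersion_tx` and `hasModuleChanged_tx` now reads the stored hash through `module_meta.GetHash_tx` inside the caller's transaction. The hash itself is unchanged by this patch: a SHA-256 over the module's JSON export, encoded as a base64-URL string. The stand-in below only illustrates that scheme; it is not the real `types.Module`.

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"log"
)

// module is a stand-in for types.Module, reduced to two fields.
type module struct {
	Name         string `json:"name"`
	ReleaseBuild int    `json:"releaseBuild"`
}

// hashModule mirrors the hashing scheme used by the transfer package:
// SHA-256 over the marshalled JSON, encoded with base64.URLEncoding.
func hashModule(m module) (string, error) {
	jsonContent, err := json.Marshal(m)
	if err != nil {
		return "", err
	}
	hashed := sha256.Sum256(jsonContent)
	return base64.URLEncoding.EncodeToString(hashed[:]), nil
}

func main() {
	before, err := hashModule(module{Name: "demo", ReleaseBuild: 3})
	if err != nil {
		log.Fatal(err)
	}
	after, err := hashModule(module{Name: "demo", ReleaseBuild: 4})
	if err != nil {
		log.Fatal(err)
	}

	// a differing hash is what marks the module as changed on export/import
	fmt.Println("changed:", before != after)
}
```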
diff --git a/transfer/transfer_delete/transfer_delete.go b/transfer/transfer_delete/transfer_delete.go
index 08fc8230..662e218a 100644
--- a/transfer/transfer_delete/transfer_delete.go
+++ b/transfer/transfer_delete/transfer_delete.go
@@ -1,13 +1,15 @@
package transfer_delete
import (
+ "context"
"encoding/json"
"errors"
"fmt"
- "r3/db"
"r3/log"
+ "r3/schema/api"
"r3/schema/article"
"r3/schema/attribute"
+ "r3/schema/clientEvent"
"r3/schema/collection"
"r3/schema/column"
"r3/schema/field"
@@ -15,7 +17,7 @@ import (
"r3/schema/icon"
"r3/schema/jsFunction"
"r3/schema/loginForm"
- "r3/schema/menu"
+ "r3/schema/menuTab"
"r3/schema/pgFunction"
"r3/schema/pgIndex"
"r3/schema/pgTrigger"
@@ -23,8 +25,10 @@ import (
"r3/schema/relation"
"r3/schema/role"
"r3/schema/tab"
- "r3/tools"
+ "r3/schema/variable"
+ "r3/schema/widget"
"r3/types"
+ "slices"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
@@ -32,269 +36,281 @@ import (
// delete entities from local schema which are not present in module
// FKs are deferred, only known hard dependency for order: triggers must be deleted first
-func NotExisting_tx(tx pgx.Tx, module types.Module) error {
+func NotExisting_tx(ctx context.Context, tx pgx.Tx, module types.Module) error {
// PG triggers are deleted before import, known issues:
// * DB error if preset changes fire triggers that are deleted later
// * DB error if PG functions are deleted before referring triggers
// login forms
- if err := deleteLoginForms_tx(tx, module.Id, module.LoginForms); err != nil {
+ if err := deleteLoginForms_tx(ctx, tx, module.Id, module.LoginForms); err != nil {
return err
}
// relations, its PG indexes, attributes and presets
- if err := deleteRelations_tx(tx, module.Id, module.Relations); err != nil {
+ if err := deleteRelations_tx(ctx, tx, module.Id, module.Relations); err != nil {
return err
}
- if err := deleteRelationPgIndexes_tx(tx, module.Id, module.Relations); err != nil {
+ if err := deleteRelationPgIndexes_tx(ctx, tx, module.Id, module.Relations); err != nil {
return err
}
- if err := deleteRelationAttributes_tx(tx, module.Id, module.Relations); err != nil {
+ if err := deleteRelationAttributes_tx(ctx, tx, module.Id, module.Relations); err != nil {
return err
}
- if err := deleteRelationPresets_tx(tx, module.Id, module.Relations); err != nil {
+ if err := deleteRelationPresets_tx(ctx, tx, module.Id, module.Relations); err != nil {
return err
}
// collections
- if err := deleteCollections_tx(tx, module.Id, module.Collections); err != nil {
+ if err := deleteCollections_tx(ctx, tx, module.Id, module.Collections); err != nil {
return err
}
// PG functions
- if err := deletePgFunctions_tx(tx, module.Id, module.PgFunctions); err != nil {
+ if err := deletePgFunctions_tx(ctx, tx, module.Id, module.PgFunctions); err != nil {
return err
}
// roles
- if err := deleteRoles_tx(tx, module.Id, module.Roles); err != nil {
+ if err := deleteRoles_tx(ctx, tx, module.Id, module.Roles); err != nil {
return err
}
- // menus
- if err := deleteMenus_tx(tx, module.Id, module.Menus); err != nil {
+ // menu tabs
+ if err := deleteMenuTabs_tx(ctx, tx, module.Id, module.MenuTabs); err != nil {
return err
}
// forms, cascades fields
- if err := deleteForms_tx(tx, module.Id, module.Forms); err != nil {
+ if err := deleteForms_tx(ctx, tx, module.Id, module.Forms); err != nil {
return err
}
// icons
- if err := deleteIcons_tx(tx, module.Id, module.Icons); err != nil {
+ if err := deleteIcons_tx(ctx, tx, module.Id, module.Icons); err != nil {
return err
}
// articles
- if err := deleteArticles_tx(tx, module.Id, module.Articles); err != nil {
+ if err := deleteArticles_tx(ctx, tx, module.Id, module.Articles); err != nil {
+ return err
+ }
+
+ // APIs
+ if err := deleteApis_tx(ctx, tx, module.Id, module.Apis); err != nil {
+ return err
+ }
+
+ // client events
+ if err := deleteClientEvents_tx(ctx, tx, module.Id, module.ClientEvents); err != nil {
+ return err
+ }
+
+ // variables
+ if err := deleteVariables_tx(ctx, tx, module.Id, module.Variables); err != nil {
+ return err
+ }
+
+ // widgets
+ if err := deleteWidgets_tx(ctx, tx, module.Id, module.Widgets); err != nil {
return err
}
// JS functions
- if err := deleteJsFunctions_tx(tx, module.Id, module.JsFunctions); err != nil {
+ if err := deleteJsFunctions_tx(ctx, tx, module.Id, module.JsFunctions); err != nil {
return err
}
return nil
}
-func NotExistingPgTriggers_tx(tx pgx.Tx, moduleId uuid.UUID, relations []types.Relation) error {
- return deletePgTriggers_tx(tx, moduleId, relations)
+func NotExistingPgTriggers_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, pgTriggers []types.PgTrigger) error {
+ return deletePgTriggers_tx(ctx, tx, moduleId, pgTriggers)
}
// deletions
-func deleteLoginForms_tx(tx pgx.Tx, moduleId uuid.UUID, loginForms []types.LoginForm) error {
+func deleteLoginForms_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, loginForms []types.LoginForm) error {
idsKeep := make([]uuid.UUID, 0)
for _, entity := range loginForms {
idsKeep = append(idsKeep, entity.Id)
}
- idsDelete, err := importGetIdsToDeleteFromModule_tx(tx, "login_form", moduleId, idsKeep)
+ idsDelete, err := importGetIdsToDeleteFromModule_tx(ctx, tx, "login_form", moduleId, idsKeep)
if err != nil {
return err
}
for _, id := range idsDelete {
log.Info("transfer", fmt.Sprintf("del login form %s", id.String()))
- if err := loginForm.Del_tx(tx, id); err != nil {
+ if err := loginForm.Del_tx(ctx, tx, id); err != nil {
return err
}
}
return nil
}
-func deletePgTriggers_tx(tx pgx.Tx, moduleId uuid.UUID, relations []types.Relation) error {
+func deletePgTriggers_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, pgTriggers []types.PgTrigger) error {
idsKeep := make([]uuid.UUID, 0)
- for _, rel := range relations {
- for _, trg := range rel.Triggers {
- idsKeep = append(idsKeep, trg.Id)
- }
+ for _, trg := range pgTriggers {
+ idsKeep = append(idsKeep, trg.Id)
}
- idsDelete, err := importGetIdsToDeleteFromRelation_tx(tx, "pg_trigger", moduleId, idsKeep)
+ idsDelete, err := importGetIdsToDeleteFromModule_tx(ctx, tx, "pg_trigger", moduleId, idsKeep)
if err != nil {
return err
}
for _, id := range idsDelete {
log.Info("transfer", fmt.Sprintf("del PG trigger %s", id.String()))
- if err := pgTrigger.Del_tx(tx, id); err != nil {
+ if err := pgTrigger.Del_tx(ctx, tx, id); err != nil {
return err
}
}
return nil
}
-func deleteRelations_tx(tx pgx.Tx, moduleId uuid.UUID, relations []types.Relation) error {
+func deleteRelations_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, relations []types.Relation) error {
idsKeep := make([]uuid.UUID, 0)
for _, entity := range relations {
idsKeep = append(idsKeep, entity.Id)
}
- idsDelete, err := importGetIdsToDeleteFromModule_tx(tx, "relation", moduleId, idsKeep)
+ idsDelete, err := importGetIdsToDeleteFromModule_tx(ctx, tx, "relation", moduleId, idsKeep)
if err != nil {
return err
}
for _, id := range idsDelete {
log.Info("transfer", fmt.Sprintf("del relation %s", id.String()))
- if err := relation.Del_tx(tx, id); err != nil {
+ if err := relation.Del_tx(ctx, tx, id); err != nil {
return err
}
}
return nil
}
-func deleteRelationPgIndexes_tx(tx pgx.Tx, moduleId uuid.UUID, relations []types.Relation) error {
+func deleteRelationPgIndexes_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, relations []types.Relation) error {
idsKeep := make([]uuid.UUID, 0)
for _, rel := range relations {
for _, ind := range rel.Indexes {
idsKeep = append(idsKeep, ind.Id)
}
}
- idsDelete, err := importGetIdsToDeleteFromRelation_tx(tx, "pg_index", moduleId, idsKeep)
+ idsDelete, err := importGetIdsToDeleteFromRelation_tx(ctx, tx, "pg_index", moduleId, idsKeep)
if err != nil {
return err
}
for _, id := range idsDelete {
log.Info("transfer", fmt.Sprintf("del PG index %s", id.String()))
- if err := pgIndex.Del_tx(tx, id); err != nil {
+ if err := pgIndex.Del_tx(ctx, tx, id); err != nil {
return err
}
}
return nil
}
-func deleteRelationAttributes_tx(tx pgx.Tx, moduleId uuid.UUID, relations []types.Relation) error {
+func deleteRelationAttributes_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, relations []types.Relation) error {
idsKeep := make([]uuid.UUID, 0)
for _, rel := range relations {
for _, atr := range rel.Attributes {
idsKeep = append(idsKeep, atr.Id)
}
}
- idsDelete, err := importGetIdsToDeleteFromRelation_tx(tx, "attribute", moduleId, idsKeep)
+ idsDelete, err := importGetIdsToDeleteFromRelation_tx(ctx, tx, "attribute", moduleId, idsKeep)
if err != nil {
return err
}
for _, id := range idsDelete {
log.Info("transfer", fmt.Sprintf("del attribute %s", id.String()))
- if err := attribute.Del_tx(tx, id); err != nil {
+ if err := attribute.Del_tx(ctx, tx, id); err != nil {
return err
}
}
return nil
}
-func deleteRelationPresets_tx(tx pgx.Tx, moduleId uuid.UUID, relations []types.Relation) error {
+func deleteRelationPresets_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, relations []types.Relation) error {
idsKeep := make([]uuid.UUID, 0)
for _, rel := range relations {
for _, pre := range rel.Presets {
idsKeep = append(idsKeep, pre.Id)
}
}
- idsDelete, err := importGetIdsToDeleteFromRelation_tx(tx, "preset", moduleId, idsKeep)
+ idsDelete, err := importGetIdsToDeleteFromRelation_tx(ctx, tx, "preset", moduleId, idsKeep)
if err != nil {
return err
}
for _, id := range idsDelete {
log.Info("transfer", fmt.Sprintf("del preset %s", id.String()))
- if err := preset.Del_tx(tx, id); err != nil {
+ if err := preset.Del_tx(ctx, tx, id); err != nil {
return err
}
}
return nil
}
-func deleteCollections_tx(tx pgx.Tx, moduleId uuid.UUID, collections []types.Collection) error {
+func deleteCollections_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, collections []types.Collection) error {
idsKeep := make([]uuid.UUID, 0)
for _, col := range collections {
idsKeep = append(idsKeep, col.Id)
}
- idsDelete, err := importGetIdsToDeleteFromModule_tx(tx, "collection", moduleId, idsKeep)
+ idsDelete, err := importGetIdsToDeleteFromModule_tx(ctx, tx, "collection", moduleId, idsKeep)
if err != nil {
return err
}
for _, id := range idsDelete {
log.Info("transfer", fmt.Sprintf("del collection %s", id.String()))
- if err := collection.Del_tx(tx, id); err != nil {
+ if err := collection.Del_tx(ctx, tx, id); err != nil {
return err
}
}
return nil
}
-func deleteRoles_tx(tx pgx.Tx, moduleId uuid.UUID, roles []types.Role) error {
+func deleteRoles_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, roles []types.Role) error {
idsKeep := make([]uuid.UUID, 0)
for _, entity := range roles {
idsKeep = append(idsKeep, entity.Id)
}
- idsDelete, err := importGetIdsToDeleteFromModule_tx(tx, "role", moduleId, idsKeep)
+ idsDelete, err := importGetIdsToDeleteFromModule_tx(ctx, tx, "role", moduleId, idsKeep)
if err != nil {
return err
}
for _, id := range idsDelete {
log.Info("transfer", fmt.Sprintf("del role %s", id.String()))
- if err := role.Del_tx(tx, id); err != nil {
+ if err := role.Del_tx(ctx, tx, id); err != nil {
return err
}
}
return nil
}
-func deleteMenus_tx(tx pgx.Tx, moduleId uuid.UUID, menus []types.Menu) error {
+func deleteMenuTabs_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, menuTabs []types.MenuTab) error {
idsKeep := make([]uuid.UUID, 0)
- var menuNestedParse func(items []types.Menu)
- menuNestedParse = func(items []types.Menu) {
- for _, m := range items {
- idsKeep = append(idsKeep, m.Id)
- menuNestedParse(m.Menus)
- }
+ for _, mt := range menuTabs {
+ idsKeep = append(idsKeep, mt.Id)
}
- menuNestedParse(menus)
-
- idsDelete, err := importGetIdsToDeleteFromModule_tx(tx, "menu", moduleId, idsKeep)
+ idsDelete, err := importGetIdsToDeleteFromModule_tx(ctx, tx, "menu_tab", moduleId, idsKeep)
if err != nil {
return err
}
for _, id := range idsDelete {
- log.Info("transfer", fmt.Sprintf("del menu %s", id.String()))
- if err := menu.Del_tx(tx, id); err != nil {
+ log.Info("transfer", fmt.Sprintf("del menu tab %s", id.String()))
+ if err := menuTab.Del_tx(ctx, tx, id); err != nil {
return err
}
}
return nil
}
-func deleteForms_tx(tx pgx.Tx, moduleId uuid.UUID, forms []types.Form) error {
+func deleteForms_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, forms []types.Form) error {
idsKeep := make([]uuid.UUID, 0)
for _, entity := range forms {
idsKeep = append(idsKeep, entity.Id)
}
- idsDelete, err := importGetIdsToDeleteFromModule_tx(tx, "form", moduleId, idsKeep)
+ idsDelete, err := importGetIdsToDeleteFromModule_tx(ctx, tx, "form", moduleId, idsKeep)
if err != nil {
return err
}
for _, id := range idsDelete {
log.Info("transfer", fmt.Sprintf("del form %s", id.String()))
- if err := form.Del_tx(tx, id); err != nil {
+ if err := form.Del_tx(ctx, tx, id); err != nil {
return err
}
}
// fields, includes/cascades columns & tabs
for _, entity := range forms {
- if err := deleteFormFields_tx(tx, moduleId, entity); err != nil {
+ if err := deleteFormFields_tx(ctx, tx, entity); err != nil {
return err
}
}
return nil
}
-func deleteFormFields_tx(tx pgx.Tx, moduleId uuid.UUID, form types.Form) error {
+func deleteFormFields_tx(ctx context.Context, tx pgx.Tx, form types.Form) error {
var err error
idsKeepFields := make([]uuid.UUID, 0)
idsKeepColumns := make([]uuid.UUID, 0)
@@ -356,6 +372,15 @@ func deleteFormFields_tx(tx pgx.Tx, moduleId uuid.UUID, form types.Form) error {
idsKeepColumns = append(idsKeepColumns, column.Id)
}
+ case "kanban":
+ var fieldKanban types.FieldKanban
+ if err := json.Unmarshal(fieldJson, &fieldKanban); err != nil {
+ return err
+ }
+ for _, column := range fieldKanban.Columns {
+ idsKeepColumns = append(idsKeepColumns, column.Id)
+ }
+
case "list":
var fieldList types.FieldList
if err := json.Unmarshal(fieldJson, &fieldList); err != nil {
@@ -377,6 +402,15 @@ func deleteFormFields_tx(tx pgx.Tx, moduleId uuid.UUID, form types.Form) error {
return err
}
}
+
+ case "variable":
+ var fieldVar types.FieldVariable
+ if err := json.Unmarshal(fieldJson, &fieldVar); err != nil {
+ return err
+ }
+ for _, column := range fieldVar.Columns {
+ idsKeepColumns = append(idsKeepColumns, column.Id)
+ }
}
}
return nil
@@ -388,105 +422,173 @@ func deleteFormFields_tx(tx pgx.Tx, moduleId uuid.UUID, form types.Form) error {
}
// delete fields
- idsDelete, err = importGetIdsToDeleteFromForm_tx(tx, "field", form.Id, idsKeepFields)
+ idsDelete, err = importGetIdsToDeleteFromForm_tx(ctx, tx, "field", form.Id, idsKeepFields)
if err != nil {
return err
}
for _, id := range idsDelete {
log.Info("transfer", fmt.Sprintf("del field %s", id.String()))
- if err := field.Del_tx(tx, id); err != nil {
+ if err := field.Del_tx(ctx, tx, id); err != nil {
return err
}
}
// delete tabs
- idsDelete, err = importGetIdsToDeleteFromField_tx(tx, "tab", form.Id, idsKeepTabs)
+ idsDelete, err = importGetIdsToDeleteFromField_tx(ctx, tx, "tab", form.Id, idsKeepTabs)
if err != nil {
return err
}
for _, id := range idsDelete {
log.Info("transfer", fmt.Sprintf("del tab %s", id.String()))
- if err := tab.Del_tx(tx, id); err != nil {
+ if err := tab.Del_tx(ctx, tx, id); err != nil {
return err
}
}
// delete columns
- idsDelete, err = importGetIdsToDeleteFromField_tx(tx, "column", form.Id, idsKeepColumns)
+ idsDelete, err = importGetIdsToDeleteFromField_tx(ctx, tx, "column", form.Id, idsKeepColumns)
if err != nil {
return err
}
for _, id := range idsDelete {
log.Info("transfer", fmt.Sprintf("del column %s", id.String()))
- if err := column.Del_tx(tx, id); err != nil {
+ if err := column.Del_tx(ctx, tx, id); err != nil {
return err
}
}
return nil
}
-func deleteIcons_tx(tx pgx.Tx, moduleId uuid.UUID, icons []types.Icon) error {
+func deleteIcons_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, icons []types.Icon) error {
idsKeep := make([]uuid.UUID, 0)
for _, entity := range icons {
idsKeep = append(idsKeep, entity.Id)
}
- idsDelete, err := importGetIdsToDeleteFromModule_tx(tx, "icon", moduleId, idsKeep)
+ idsDelete, err := importGetIdsToDeleteFromModule_tx(ctx, tx, "icon", moduleId, idsKeep)
if err != nil {
return err
}
for _, id := range idsDelete {
log.Info("transfer", fmt.Sprintf("del icon %s", id.String()))
- if err := icon.Del_tx(tx, id); err != nil {
+ if err := icon.Del_tx(ctx, tx, id); err != nil {
return err
}
}
return nil
}
-func deleteArticles_tx(tx pgx.Tx, moduleId uuid.UUID, articles []types.Article) error {
+func deleteArticles_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, articles []types.Article) error {
idsKeep := make([]uuid.UUID, 0)
for _, entity := range articles {
idsKeep = append(idsKeep, entity.Id)
}
- idsDelete, err := importGetIdsToDeleteFromModule_tx(tx, "article", moduleId, idsKeep)
+ idsDelete, err := importGetIdsToDeleteFromModule_tx(ctx, tx, "article", moduleId, idsKeep)
if err != nil {
return err
}
for _, id := range idsDelete {
log.Info("transfer", fmt.Sprintf("del article %s", id.String()))
- if err := article.Del_tx(tx, id); err != nil {
+ if err := article.Del_tx(ctx, tx, id); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+func deleteApis_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, apis []types.Api) error {
+ idsKeep := make([]uuid.UUID, 0)
+ for _, entity := range apis {
+ idsKeep = append(idsKeep, entity.Id)
+ }
+ idsDelete, err := importGetIdsToDeleteFromModule_tx(ctx, tx, "api", moduleId, idsKeep)
+ if err != nil {
+ return err
+ }
+ for _, id := range idsDelete {
+ log.Info("transfer", fmt.Sprintf("del API %s", id.String()))
+ if err := api.Del_tx(ctx, tx, id); err != nil {
return err
}
}
return nil
}
-func deletePgFunctions_tx(tx pgx.Tx, moduleId uuid.UUID, pgFunctions []types.PgFunction) error {
+func deleteClientEvents_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, clientEvents []types.ClientEvent) error {
+ idsKeep := make([]uuid.UUID, 0)
+ for _, entity := range clientEvents {
+ idsKeep = append(idsKeep, entity.Id)
+ }
+ idsDelete, err := importGetIdsToDeleteFromModule_tx(ctx, tx, "client_event", moduleId, idsKeep)
+ if err != nil {
+ return err
+ }
+ for _, id := range idsDelete {
+ log.Info("transfer", fmt.Sprintf("del client event %s", id.String()))
+ if err := clientEvent.Del_tx(ctx, tx, id); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+func deleteVariables_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, variables []types.Variable) error {
+ idsKeep := make([]uuid.UUID, 0)
+ for _, entity := range variables {
+ idsKeep = append(idsKeep, entity.Id)
+ }
+ idsDelete, err := importGetIdsToDeleteFromModule_tx(ctx, tx, "variable", moduleId, idsKeep)
+ if err != nil {
+ return err
+ }
+ for _, id := range idsDelete {
+ log.Info("transfer", fmt.Sprintf("del variable %s", id.String()))
+ if err := variable.Del_tx(ctx, tx, id); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+func deleteWidgets_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, widgets []types.Widget) error {
+ idsKeep := make([]uuid.UUID, 0)
+ for _, entity := range widgets {
+ idsKeep = append(idsKeep, entity.Id)
+ }
+ idsDelete, err := importGetIdsToDeleteFromModule_tx(ctx, tx, "widget", moduleId, idsKeep)
+ if err != nil {
+ return err
+ }
+ for _, id := range idsDelete {
+ log.Info("transfer", fmt.Sprintf("del widget %s", id.String()))
+ if err := widget.Del_tx(ctx, tx, id); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+func deletePgFunctions_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, pgFunctions []types.PgFunction) error {
idsKeep := make([]uuid.UUID, 0)
for _, entity := range pgFunctions {
idsKeep = append(idsKeep, entity.Id)
}
- idsDelete, err := importGetIdsToDeleteFromModule_tx(tx, "pg_function", moduleId, idsKeep)
+ idsDelete, err := importGetIdsToDeleteFromModule_tx(ctx, tx, "pg_function", moduleId, idsKeep)
if err != nil {
return err
}
for _, id := range idsDelete {
log.Info("transfer", fmt.Sprintf("del PG function %s", id.String()))
- if err := pgFunction.Del_tx(tx, id); err != nil {
+ if err := pgFunction.Del_tx(ctx, tx, id); err != nil {
return err
}
}
return nil
}
-func deleteJsFunctions_tx(tx pgx.Tx, moduleId uuid.UUID, jsFunctions []types.JsFunction) error {
+func deleteJsFunctions_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, jsFunctions []types.JsFunction) error {
idsKeep := make([]uuid.UUID, 0)
for _, entity := range jsFunctions {
idsKeep = append(idsKeep, entity.Id)
}
- idsDelete, err := importGetIdsToDeleteFromModule_tx(tx, "js_function", moduleId, idsKeep)
+ idsDelete, err := importGetIdsToDeleteFromModule_tx(ctx, tx, "js_function", moduleId, idsKeep)
if err != nil {
return err
}
for _, id := range idsDelete {
log.Info("transfer", fmt.Sprintf("del JS function %s", id.String()))
- if err := jsFunction.Del_tx(tx, id); err != nil {
+ if err := jsFunction.Del_tx(ctx, tx, id); err != nil {
return err
}
}
@@ -494,18 +596,19 @@ func deleteJsFunctions_tx(tx pgx.Tx, moduleId uuid.UUID, jsFunctions []types.JsF
}
// lookups
-func importGetIdsToDeleteFromModule_tx(tx pgx.Tx, entity string,
+func importGetIdsToDeleteFromModule_tx(ctx context.Context, tx pgx.Tx, entity string,
moduleId uuid.UUID, idsKeep []uuid.UUID) ([]uuid.UUID, error) {
idsDelete := make([]uuid.UUID, 0)
- if !tools.StringInSlice(entity, []string{"article", "collection", "form", "icon",
- "js_function", "login_form", "menu", "pg_function", "relation", "role"}) {
+ if !slices.Contains([]string{"api", "article", "client_event", "collection",
+ "form", "icon", "js_function", "login_form", "menu", "menu_tab", "pg_function",
+ "pg_trigger", "relation", "role", "variable", "widget"}, entity) {
return idsDelete, errors.New("unsupported type for delete check")
}
- err := tx.QueryRow(db.Ctx, fmt.Sprintf(`
+ err := tx.QueryRow(ctx, fmt.Sprintf(`
SELECT ARRAY_AGG(id)
FROM app.%s
WHERE id <> ALL($1)
@@ -517,16 +620,16 @@ func importGetIdsToDeleteFromModule_tx(tx pgx.Tx, entity string,
}
return idsDelete, nil
}
-func importGetIdsToDeleteFromRelation_tx(tx pgx.Tx, entity string, moduleId uuid.UUID,
- idsKeep []uuid.UUID) ([]uuid.UUID, error) {
+func importGetIdsToDeleteFromRelation_tx(ctx context.Context, tx pgx.Tx, entity string,
+ moduleId uuid.UUID, idsKeep []uuid.UUID) ([]uuid.UUID, error) {
idsDelete := make([]uuid.UUID, 0)
- if !tools.StringInSlice(entity, []string{"attribute", "pg_index", "pg_trigger", "preset"}) {
+ if !slices.Contains([]string{"attribute", "pg_index", "preset"}, entity) {
return idsDelete, errors.New("unsupported type for delete check")
}
- err := tx.QueryRow(db.Ctx, fmt.Sprintf(`
+ err := tx.QueryRow(ctx, fmt.Sprintf(`
SELECT ARRAY_AGG(id)
FROM app.%s
WHERE id <> ALL($1)
@@ -542,16 +645,16 @@ func importGetIdsToDeleteFromRelation_tx(tx pgx.Tx, entity string, moduleId uuid
}
return idsDelete, nil
}
-func importGetIdsToDeleteFromForm_tx(tx pgx.Tx, entity string, formId uuid.UUID,
- idsKeep []uuid.UUID) ([]uuid.UUID, error) {
+func importGetIdsToDeleteFromForm_tx(ctx context.Context, tx pgx.Tx, entity string,
+ formId uuid.UUID, idsKeep []uuid.UUID) ([]uuid.UUID, error) {
idsDelete := make([]uuid.UUID, 0)
- if !tools.StringInSlice(entity, []string{"field"}) {
+ if !slices.Contains([]string{"field"}, entity) {
return idsDelete, errors.New("unsupported type for delete check")
}
- err := tx.QueryRow(db.Ctx, fmt.Sprintf(`
+ err := tx.QueryRow(ctx, fmt.Sprintf(`
SELECT ARRAY_AGG(id)
FROM app.%s
WHERE id <> ALL($1)
@@ -563,16 +666,16 @@ func importGetIdsToDeleteFromForm_tx(tx pgx.Tx, entity string, formId uuid.UUID,
}
return idsDelete, nil
}
-func importGetIdsToDeleteFromField_tx(tx pgx.Tx, entity string, formId uuid.UUID,
- idsKeep []uuid.UUID) ([]uuid.UUID, error) {
+func importGetIdsToDeleteFromField_tx(ctx context.Context, tx pgx.Tx, entity string,
+ formId uuid.UUID, idsKeep []uuid.UUID) ([]uuid.UUID, error) {
idsDelete := make([]uuid.UUID, 0)
- if !tools.StringInSlice(entity, []string{"column", "tab"}) {
+ if !slices.Contains([]string{"column", "tab"}, entity) {
return idsDelete, errors.New("unsupported type for delete check")
}
- err := tx.QueryRow(db.Ctx, fmt.Sprintf(`
+ err := tx.QueryRow(ctx, fmt.Sprintf(`
SELECT ARRAY_AGG(id)
FROM app.%s
WHERE id <> ALL($1)
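
All the `delete*_tx` helpers above follow one pattern: collect the IDs present in the import (`idsKeep`), ask the database for every ID of that entity type that still belongs to the module but is not in the keep list, then delete the leftovers. The entity name comes from a fixed whitelist, now checked with the standard library's `slices.Contains` instead of the removed `tools.StringInSlice`, which is what makes the `fmt.Sprintf` into the query acceptable. A condensed sketch of the lookup (not runnable without a live connection):

```go
package sketch

import (
	"context"
	"errors"
	"fmt"
	"slices"

	"github.com/gofrs/uuid"
	"github.com/jackc/pgx/v5"
)

// idsToDelete condenses importGetIdsToDeleteFromModule_tx: everything of the
// given entity type that belongs to the module but is not listed in idsKeep
// becomes a deletion candidate.
func idsToDelete(ctx context.Context, tx pgx.Tx, entity string,
	moduleId uuid.UUID, idsKeep []uuid.UUID) ([]uuid.UUID, error) {

	idsDelete := make([]uuid.UUID, 0)

	// whitelist of table names guards the fmt.Sprintf below
	if !slices.Contains([]string{"relation", "role", "form"}, entity) {
		return idsDelete, errors.New("unsupported type for delete check")
	}

	err := tx.QueryRow(ctx, fmt.Sprintf(`
		SELECT ARRAY_AGG(id)
		FROM app.%s
		WHERE id <> ALL($1)
		AND module_id = $2
	`, entity), idsKeep, moduleId).Scan(&idsDelete)

	return idsDelete, err
}
```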
diff --git a/transfer/transfer_export.go b/transfer/transfer_export.go
index 6af6ccba..04a3d25d 100644
--- a/transfer/transfer_export.go
+++ b/transfer/transfer_export.go
@@ -1,6 +1,7 @@
package transfer
import (
+ "context"
"crypto"
"crypto/rand"
"crypto/rsa"
@@ -15,11 +16,11 @@ import (
"path/filepath"
"r3/cache"
"r3/config"
+ "r3/config/module_meta"
"r3/db"
"r3/log"
- "r3/module_option"
- "r3/tools"
"r3/types"
+ "slices"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
@@ -27,10 +28,9 @@ import (
// export a module stored as compressed file
// if the exported module had any changes, the module meta (version,
-// dependent app version, release date) will be updated
-func ExportToFile(moduleId uuid.UUID, zipFilePath string) error {
- cache.Schema_mx.RLock()
- defer cache.Schema_mx.RUnlock()
+// dependent app version, release date) will be updated
+func ExportToFile(ctx context.Context, moduleId uuid.UUID, zipFilePath string) error {
log.Info("transfer", fmt.Sprintf("start export for module %s", moduleId))
@@ -38,16 +38,19 @@ func ExportToFile(moduleId uuid.UUID, zipFilePath string) error {
return errors.New("no export key for module signing set")
}
- tx, err := db.Pool.Begin(db.Ctx)
+ tx, err := db.Pool.Begin(ctx)
if err != nil {
return err
}
- defer tx.Rollback(db.Ctx)
+ defer tx.Rollback(ctx)
+
+ cache.Schema_mx.RLock()
+ defer cache.Schema_mx.RUnlock()
// export all modules as JSON files
var moduleJsonPaths []string
var moduleIdsExported []uuid.UUID
- if err := export_tx(tx, moduleId, true, &moduleJsonPaths, &moduleIdsExported); err != nil {
+ if err := export_tx(ctx, tx, moduleId, &moduleJsonPaths, &moduleIdsExported); err != nil {
return err
}
@@ -55,14 +58,13 @@ func ExportToFile(moduleId uuid.UUID, zipFilePath string) error {
if err := writeFilesToZip(zipFilePath, moduleJsonPaths); err != nil {
return err
}
- return tx.Commit(db.Ctx)
+ return tx.Commit(ctx)
}
-func export_tx(tx pgx.Tx, moduleId uuid.UUID, original bool, filePaths *[]string,
- moduleIdsExported *[]uuid.UUID) error {
+func export_tx(ctx context.Context, tx pgx.Tx, moduleId uuid.UUID, filePaths *[]string, moduleIdsExported *[]uuid.UUID) error {
// ignore if already exported (depended-on modules can share dependencies)
- if tools.UuidInSlice(moduleId, *moduleIdsExported) {
+ if slices.Contains(*moduleIdsExported, moduleId) {
return nil
}
*moduleIdsExported = append(*moduleIdsExported, moduleId)
@@ -77,18 +79,14 @@ func export_tx(tx pgx.Tx, moduleId uuid.UUID, original bool, filePaths *[]string
// export all modules that this module is dependent on
for _, modId := range file.Content.Module.DependsOn {
- if err := export_tx(tx, modId, false, filePaths, moduleIdsExported); err != nil {
+ if err := export_tx(ctx, tx, modId, filePaths, moduleIdsExported); err != nil {
return err
}
}
// check for ownership
- var isOwner bool
- if err := tx.QueryRow(db.Ctx, `
- SELECT owner
- FROM instance.module_option
- WHERE module_id = $1
- `, moduleId).Scan(&isOwner); err != nil {
+ isOwner, err := module_meta.GetOwner_tx(ctx, tx, moduleId)
+ if err != nil {
return err
}
@@ -110,7 +108,7 @@ func export_tx(tx pgx.Tx, moduleId uuid.UUID, original bool, filePaths *[]string
}
hashed := sha256.Sum256(jsonContent)
hashedStr := base64.URLEncoding.EncodeToString(hashed[:])
- hashedStrEx, err := module_option.GetHashById(moduleId)
+ hashedStrEx, err := module_meta.GetHash_tx(ctx, tx, moduleId)
if err != nil {
return err
}
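
`ExportToFile` now takes a `context.Context` that travels with the transaction and the `module_meta` lookups instead of the package-global `db.Ctx`. A hypothetical caller; the module ID and target path are placeholders, and an export key must have been stored via `StoreExportKey` beforehand or the function returns with an error:

```go
package main

import (
	"context"
	"log"
	"time"

	"r3/transfer"

	"github.com/gofrs/uuid"
)

func main() {
	// placeholders: a real module UUID and a writable target path are required
	moduleId := uuid.Must(uuid.FromString("11111111-2222-3333-4444-555555555555"))
	zipFilePath := "/tmp/module_export.zip"

	// the caller can now bound the whole export through the context
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	if err := transfer.ExportToFile(ctx, moduleId, zipFilePath); err != nil {
		log.Fatal(err)
	}
	log.Printf("exported module %s to %s", moduleId, zipFilePath)
}
```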
diff --git a/transfer/transfer_import.go b/transfer/transfer_import.go
index dd0b1deb..905b3733 100644
--- a/transfer/transfer_import.go
+++ b/transfer/transfer_import.go
@@ -1,6 +1,7 @@
package transfer
import (
+ "context"
"encoding/base64"
"encoding/json"
"errors"
@@ -10,19 +11,20 @@ import (
"r3/cache"
"r3/cluster"
"r3/config"
- "r3/db"
+ "r3/config/module_meta"
"r3/log"
- "r3/module_option"
"r3/schema"
"r3/schema/api"
"r3/schema/article"
"r3/schema/attribute"
+ "r3/schema/clientEvent"
"r3/schema/collection"
+ "r3/schema/compatible"
"r3/schema/form"
"r3/schema/icon"
"r3/schema/jsFunction"
"r3/schema/loginForm"
- "r3/schema/menu"
+ "r3/schema/menuTab"
"r3/schema/module"
"r3/schema/pgFunction"
"r3/schema/pgIndex"
@@ -30,30 +32,32 @@ import (
"r3/schema/preset"
"r3/schema/relation"
"r3/schema/role"
+ "r3/schema/variable"
+ "r3/schema/widget"
"r3/tools"
"r3/transfer/transfer_delete"
"r3/types"
+ "slices"
+ "sort"
"strings"
"github.com/gofrs/uuid"
"github.com/jackc/pgx/v5"
- "github.com/jackc/pgx/v5/pgtype"
)
type importMeta struct {
filePath string // path of module import file (decompressed JSON file)
hash string // hash of module content
- isNew bool // module was already in system (upgrade)
+ isNew bool // module was not already in system (fresh install rather than upgrade)
module types.Module // module content
}
// imports extracted modules from given file paths
-func ImportFromFiles(filePathsImport []string) error {
+func ImportFromFiles_tx(ctx context.Context, tx pgx.Tx, filePathsImport []string) error {
Import_mx.Lock()
defer Import_mx.Unlock()
- log.Info("transfer", fmt.Sprintf("start import for modules from file(s): '%s'",
- strings.Join(filePathsImport, "', '")))
+ log.Info("transfer", fmt.Sprintf("start import for modules from file(s): '%s'", strings.Join(filePathsImport, "', '")))
// extract module packages
filePathsModules := make([]string, 0)
@@ -71,19 +75,25 @@ func ImportFromFiles(filePathsImport []string) error {
}
// parse modules from file paths, only modules that need to be imported are returned
- moduleIdMapMeta := make(map[uuid.UUID]importMeta)
- modules, err := parseModulesFromPaths(filePathsModules, moduleIdMapMeta)
+ moduleIdMapImportMeta := make(map[uuid.UUID]importMeta)
+ modules, err := parseModulesFromPaths_tx(ctx, tx, filePathsModules, moduleIdMapImportMeta)
if err != nil {
return err
}
- // import modules
- tx, err := db.Pool.Begin(db.Ctx)
- if err != nil {
- return err
+ // apply compatibility fixes
+ for i := range modules {
+ // fix import < 3.7: move triggers from relations to module
+ modules[i].PgTriggers = compatible.FixPgTriggerLocation(modules[i].PgTriggers, modules[i].Relations)
+
+ // fix import < 3.10: add initial menu tab
+ modules[i].MenuTabs, err = compatible.FixMissingMenuTab(modules[i].Id, modules[i].MenuTabs, modules[i].Menus)
+ if err != nil {
+ return err
+ }
}
- defer tx.Rollback(db.Ctx)
+ // import modules
idMapSkipped := make(map[uuid.UUID]types.Void)
loopsToRun := 10
@@ -116,18 +126,18 @@ func ImportFromFiles(filePathsImport []string) error {
3. delete all other entities after import is done
if other entities rely on deleted states (presets), they are applied on next loop
*/
- if firstRun && !moduleIdMapMeta[m.Id].isNew {
- if err := transfer_delete.NotExistingPgTriggers_tx(tx, m.Id, m.Relations); err != nil {
+ if firstRun && !moduleIdMapImportMeta[m.Id].isNew {
+ if err := transfer_delete.NotExistingPgTriggers_tx(ctx, tx, m.Id, m.PgTriggers); err != nil {
return err
}
}
- if err := importModule_tx(tx, m, firstRun, lastRun, idMapSkipped); err != nil {
+ if err := importModule_tx(ctx, tx, m, firstRun, lastRun, idMapSkipped); err != nil {
return err
}
- if _, exists := idMapSkipped[m.Id]; !exists && !moduleIdMapMeta[m.Id].isNew {
- if err := transfer_delete.NotExisting_tx(tx, m); err != nil {
+ if _, exists := idMapSkipped[m.Id]; !exists && !moduleIdMapImportMeta[m.Id].isNew {
+ if err := transfer_delete.NotExisting_tx(ctx, tx, m); err != nil {
return err
}
}
@@ -138,37 +148,30 @@ func ImportFromFiles(filePathsImport []string) error {
// after all tasks were successful, final checks and clean ups
for _, m := range modules {
- // validate dependency between modules
- log.Info("transfer", fmt.Sprintf("validity check for module '%s', %s", m.Name, m.Id))
- if err := schema.ValidateDependency_tx(tx, m.Id); err != nil {
- return err
- }
-
// set new module hash value in instance
- if err := module_option.SetHashById_tx(tx, m.Id, moduleIdMapMeta[m.Id].hash); err != nil {
+ if err := module_meta.SetHash_tx(ctx, tx, m.Id, moduleIdMapImportMeta[m.Id].hash); err != nil {
return err
}
// move imported module file to transfer path for future exports
- if err := tools.FileMove(moduleIdMapMeta[m.Id].filePath, filepath.Join(
+ if err := tools.FileMove(moduleIdMapImportMeta[m.Id].filePath, filepath.Join(
config.File.Paths.Transfer, getModuleFilename(m.Id)), true); err != nil {
return err
}
}
- log.Info("transfer", "module dependencies were validated successfully")
log.Info("transfer", "module files were moved to transfer path if imported")
- if err := tx.Commit(db.Ctx); err != nil {
- return err
+ // update schema cache
+ moduleIdsUpdated := make([]uuid.UUID, 0)
+ for id := range moduleIdMapImportMeta {
+ moduleIdsUpdated = append(moduleIdsUpdated, id)
}
- log.Info("transfer", "changes were commited successfully")
-
- return cluster.SchemaChangedAll(true, true)
+ return cluster.SchemaChanged_tx(ctx, tx, true, moduleIdsUpdated)
}
-func importModule_tx(tx pgx.Tx, mod types.Module, firstRun bool, lastRun bool,
+func importModule_tx(ctx context.Context, tx pgx.Tx, mod types.Module, firstRun bool, lastRun bool,
idMapSkipped map[uuid.UUID]types.Void) error {
// we use a sensible import order to avoid conflicts but some cannot be avoided:
@@ -178,7 +181,7 @@ func importModule_tx(tx pgx.Tx, mod types.Module, firstRun bool, lastRun bool,
// use import loops to allow for repeated attempts
// module
- run, err := importCheckRunAndSave(tx, firstRun, mod.Id, idMapSkipped)
+ run, err := importCheckRunAndSave(ctx, tx, firstRun, mod.Id, idMapSkipped)
if err != nil {
return err
}
@@ -186,19 +189,14 @@ func importModule_tx(tx pgx.Tx, mod types.Module, firstRun bool, lastRun bool,
log.Info("transfer", fmt.Sprintf("set module '%s' v%d, %s",
mod.Name, mod.ReleaseBuild, mod.Id))
- if err := importCheckResultAndApply(tx, module.Set_tx(tx, mod.Id,
- mod.ParentId, mod.FormId, mod.IconId, mod.Name, mod.Color1,
- mod.Position, mod.LanguageMain, mod.ReleaseBuild,
- mod.ReleaseBuildApp, mod.ReleaseDate, mod.DependsOn, mod.StartForms,
- mod.Languages, mod.ArticleIdsHelp, mod.Captions), mod.Id, idMapSkipped); err != nil {
-
+ if err := importCheckResultAndApply(ctx, tx, module.Set_tx(ctx, tx, mod), mod.Id, idMapSkipped); err != nil {
return err
}
}
// articles
for _, e := range mod.Articles {
- run, err := importCheckRunAndSave(tx, firstRun, e.Id, idMapSkipped)
+ run, err := importCheckRunAndSave(ctx, tx, firstRun, e.Id, idMapSkipped)
if err != nil {
return err
}
@@ -207,7 +205,7 @@ func importModule_tx(tx pgx.Tx, mod types.Module, firstRun bool, lastRun bool,
}
log.Info("transfer", fmt.Sprintf("set article %s", e.Id))
- if err := importCheckResultAndApply(tx, article.Set_tx(tx, e.ModuleId,
+ if err := importCheckResultAndApply(ctx, tx, article.Set_tx(ctx, tx, e.ModuleId,
e.Id, e.Name, e.Captions), e.Id, idMapSkipped); err != nil {
return err
@@ -216,7 +214,7 @@ func importModule_tx(tx pgx.Tx, mod types.Module, firstRun bool, lastRun bool,
// icons
for _, e := range mod.Icons {
- run, err := importCheckRunAndSave(tx, firstRun, e.Id, idMapSkipped)
+ run, err := importCheckRunAndSave(ctx, tx, firstRun, e.Id, idMapSkipped)
if err != nil {
return err
}
@@ -225,7 +223,7 @@ func importModule_tx(tx pgx.Tx, mod types.Module, firstRun bool, lastRun bool,
}
log.Info("transfer", fmt.Sprintf("set icon %s", e.Id))
- if err := importCheckResultAndApply(tx, icon.Set_tx(tx, e.ModuleId,
+ if err := importCheckResultAndApply(ctx, tx, icon.Set_tx(ctx, tx, e.ModuleId,
e.Id, e.Name, e.File, true), e.Id, idMapSkipped); err != nil {
return err
@@ -234,7 +232,7 @@ func importModule_tx(tx pgx.Tx, mod types.Module, firstRun bool, lastRun bool,
// relations
for _, e := range mod.Relations {
- run, err := importCheckRunAndSave(tx, firstRun, e.Id, idMapSkipped)
+ run, err := importCheckRunAndSave(ctx, tx, firstRun, e.Id, idMapSkipped)
if err != nil {
return err
}
@@ -243,28 +241,51 @@ func importModule_tx(tx pgx.Tx, mod types.Module, firstRun bool, lastRun bool,
}
log.Info("transfer", fmt.Sprintf("set relation %s", e.Id))
- if err := importCheckResultAndApply(tx, relation.Set_tx(tx, e), e.Id, idMapSkipped); err != nil {
+ if err := importCheckResultAndApply(ctx, tx, relation.Set_tx(ctx, tx, e), e.Id, idMapSkipped); err != nil {
return err
}
}
- // attributes, refer to relations
+ // primary key attributes
+ // add before other attributes to enable relationships
for _, relation := range mod.Relations {
for _, e := range relation.Attributes {
- run, err := importCheckRunAndSave(tx, firstRun, e.Id, idMapSkipped)
+ if e.Name != schema.PkName {
+ continue
+ }
+
+ run, err := importCheckRunAndSave(ctx, tx, firstRun, e.Id, idMapSkipped)
if err != nil {
return err
}
if !run {
continue
}
- log.Info("transfer", fmt.Sprintf("set attribute %s", e.Id))
+ log.Info("transfer", fmt.Sprintf("set PK attribute %s", e.Id))
+
+ if err := importCheckResultAndApply(ctx, tx, attribute.Set_tx(ctx, tx, e), e.Id, idMapSkipped); err != nil {
+ return err
+ }
+ }
+ }
+
+ // attributes
+ for _, relation := range mod.Relations {
+ for _, e := range relation.Attributes {
+ if e.Name == schema.PkName {
+ continue
+ }
- if err := importCheckResultAndApply(tx, attribute.Set_tx(tx,
- e.RelationId, e.Id, e.RelationshipId, e.IconId, e.Name,
- e.Content, e.ContentUse, e.Length, e.Nullable, e.Encrypted,
- e.Def, e.OnUpdate, e.OnDelete, e.Captions), e.Id, idMapSkipped); err != nil {
+ run, err := importCheckRunAndSave(ctx, tx, firstRun, e.Id, idMapSkipped)
+ if err != nil {
+ return err
+ }
+ if !run {
+ continue
+ }
+ log.Info("transfer", fmt.Sprintf("set attribute %s", e.Id))
+ if err := importCheckResultAndApply(ctx, tx, attribute.Set_tx(ctx, tx, e), e.Id, idMapSkipped); err != nil {
return err
}
}
@@ -272,7 +293,7 @@ func importModule_tx(tx pgx.Tx, mod types.Module, firstRun bool, lastRun bool,
// collections
for _, e := range mod.Collections {
- run, err := importCheckRunAndSave(tx, firstRun, e.Id, idMapSkipped)
+ run, err := importCheckRunAndSave(ctx, tx, firstRun, e.Id, idMapSkipped)
if err != nil {
return err
}
@@ -281,7 +302,7 @@ func importModule_tx(tx pgx.Tx, mod types.Module, firstRun bool, lastRun bool,
}
log.Info("transfer", fmt.Sprintf("set collection %s", e.Id))
- if err := importCheckResultAndApply(tx, collection.Set_tx(tx,
+ if err := importCheckResultAndApply(ctx, tx, collection.Set_tx(ctx, tx,
e.ModuleId, e.Id, e.IconId, e.Name, e.Columns, e.Query, e.InHeader),
e.Id, idMapSkipped); err != nil {
@@ -291,7 +312,7 @@ func importModule_tx(tx pgx.Tx, mod types.Module, firstRun bool, lastRun bool,
// APIs
for _, e := range mod.Apis {
- run, err := importCheckRunAndSave(tx, firstRun, e.Id, idMapSkipped)
+ run, err := importCheckRunAndSave(ctx, tx, firstRun, e.Id, idMapSkipped)
if err != nil {
return err
}
@@ -300,14 +321,46 @@ func importModule_tx(tx pgx.Tx, mod types.Module, firstRun bool, lastRun bool,
}
log.Info("transfer", fmt.Sprintf("set API %s", e.Id))
- if err := importCheckResultAndApply(tx, api.Set_tx(tx, e), e.Id, idMapSkipped); err != nil {
+ if err := importCheckResultAndApply(ctx, tx, api.Set_tx(ctx, tx, e), e.Id, idMapSkipped); err != nil {
+ return err
+ }
+ }
+
+ // variables
+ for _, e := range mod.Variables {
+ run, err := importCheckRunAndSave(ctx, tx, firstRun, e.Id, idMapSkipped)
+ if err != nil {
+ return err
+ }
+ if !run {
+ continue
+ }
+ log.Info("transfer", fmt.Sprintf("set variable %s", e.Id))
+
+ if err := importCheckResultAndApply(ctx, tx, variable.Set_tx(ctx, tx, e), e.Id, idMapSkipped); err != nil {
+ return err
+ }
+ }
+
+ // widgets
+ for _, e := range mod.Widgets {
+ run, err := importCheckRunAndSave(ctx, tx, firstRun, e.Id, idMapSkipped)
+ if err != nil {
+ return err
+ }
+ if !run {
+ continue
+ }
+ log.Info("transfer", fmt.Sprintf("set widget %s", e.Id))
+
+ if err := importCheckResultAndApply(ctx, tx, widget.Set_tx(ctx, tx, e), e.Id, idMapSkipped); err != nil {
return err
}
}
// PG functions, refer to relations/attributes/pg_functions (self reference)
for _, e := range mod.PgFunctions {
- run, err := importCheckRunAndSave(tx, firstRun, e.Id, idMapSkipped)
+ run, err := importCheckRunAndSave(ctx, tx, firstRun, e.Id, idMapSkipped)
if err != nil {
return err
}
@@ -316,41 +369,31 @@ func importModule_tx(tx pgx.Tx, mod types.Module, firstRun bool, lastRun bool,
}
log.Info("transfer", fmt.Sprintf("set PG function %s", e.Id))
- if err := importCheckResultAndApply(tx, pgFunction.Set_tx(tx,
- e.ModuleId, e.Id, e.Name, e.CodeArgs, e.CodeFunction, e.CodeReturns,
- e.IsFrontendExec, e.IsTrigger, e.Schedules, e.Captions),
- e.Id, idMapSkipped); err != nil {
-
+ if err := importCheckResultAndApply(ctx, tx, pgFunction.Set_tx(ctx, tx, e), e.Id, idMapSkipped); err != nil {
return err
}
}
// PG triggers, refer to PG functions
- for _, relation := range mod.Relations {
- for _, e := range relation.Triggers {
- run, err := importCheckRunAndSave(tx, firstRun, e.Id, idMapSkipped)
- if err != nil {
- return err
- }
- if !run {
- continue
- }
- log.Info("transfer", fmt.Sprintf("set trigger %s", e.Id))
-
- if err := importCheckResultAndApply(tx, pgTrigger.Set_tx(tx,
- e.PgFunctionId, e.Id, e.RelationId, e.OnInsert, e.OnUpdate,
- e.OnDelete, e.IsConstraint, e.IsDeferrable, e.IsDeferred,
- e.PerRow, e.Fires, e.CodeCondition), e.Id, idMapSkipped); err != nil {
+ for _, e := range mod.PgTriggers {
+ run, err := importCheckRunAndSave(ctx, tx, firstRun, e.Id, idMapSkipped)
+ if err != nil {
+ return err
+ }
+ if !run {
+ continue
+ }
+ log.Info("transfer", fmt.Sprintf("set trigger %s", e.Id))
- return err
- }
+ if err := importCheckResultAndApply(ctx, tx, pgTrigger.Set_tx(ctx, tx, e), e.Id, idMapSkipped); err != nil {
+ return err
}
}
// PG indexes
for _, relation := range mod.Relations {
for _, e := range relation.Indexes {
- run, err := importCheckRunAndSave(tx, firstRun, e.Id, idMapSkipped)
+ run, err := importCheckRunAndSave(ctx, tx, firstRun, e.Id, idMapSkipped)
if err != nil {
return err
}
@@ -359,7 +402,7 @@ func importModule_tx(tx pgx.Tx, mod types.Module, firstRun bool, lastRun bool,
}
log.Info("transfer", fmt.Sprintf("set index %s", e.Id))
- if err := importCheckResultAndApply(tx, pgIndex.Set_tx(tx, e), e.Id, idMapSkipped); err != nil {
+ if err := importCheckResultAndApply(ctx, tx, pgIndex.Set_tx(ctx, tx, e), e.Id, idMapSkipped); err != nil {
return err
}
}
@@ -367,7 +410,7 @@ func importModule_tx(tx pgx.Tx, mod types.Module, firstRun bool, lastRun bool,
// forms, refer to relations/attributes/collections/JS functions
for _, e := range mod.Forms {
- run, err := importCheckRunAndSave(tx, firstRun, e.Id, idMapSkipped)
+ run, err := importCheckRunAndSave(ctx, tx, firstRun, e.Id, idMapSkipped)
if err != nil {
return err
}
@@ -376,18 +419,14 @@ func importModule_tx(tx pgx.Tx, mod types.Module, firstRun bool, lastRun bool,
}
log.Info("transfer", fmt.Sprintf("set form %s", e.Id))
- if err := importCheckResultAndApply(tx, form.Set_tx(tx,
- e.ModuleId, e.Id, e.PresetIdOpen, e.IconId, e.Name, e.NoDataActions,
- e.Query, e.Fields, e.Functions, e.States, e.ArticleIdsHelp,
- e.Captions), e.Id, idMapSkipped); err != nil {
-
+ if err := importCheckResultAndApply(ctx, tx, form.Set_tx(ctx, tx, e), e.Id, idMapSkipped); err != nil {
return err
}
}
// login forms, refer to forms/attributes
for _, e := range mod.LoginForms {
- run, err := importCheckRunAndSave(tx, firstRun, e.Id, idMapSkipped)
+ run, err := importCheckRunAndSave(ctx, tx, firstRun, e.Id, idMapSkipped)
if err != nil {
return err
}
@@ -396,23 +435,33 @@ func importModule_tx(tx pgx.Tx, mod types.Module, firstRun bool, lastRun bool,
}
log.Info("transfer", fmt.Sprintf("set login form %s", e.Id))
- if err := importCheckResultAndApply(tx, loginForm.Set_tx(
- tx, e.ModuleId, e.Id, e.AttributeIdLogin, e.AttributeIdLookup,
+ if err := importCheckResultAndApply(ctx, tx, loginForm.Set_tx(
+ ctx, tx, e.ModuleId, e.Id, e.AttributeIdLogin, e.AttributeIdLookup,
e.FormId, e.Name, e.Captions), e.Id, idMapSkipped); err != nil {
return err
}
}
- // menus, refer to forms/icons
- log.Info("transfer", "set menus")
- if err := menu.Set_tx(tx, pgtype.UUID{}, mod.Menus); err != nil {
- return err
+ // menu tabs, refer to icons
+ for i, e := range mod.MenuTabs {
+ run, err := importCheckRunAndSave(ctx, tx, firstRun, e.Id, idMapSkipped)
+ if err != nil {
+ return err
+ }
+ if !run {
+ continue
+ }
+ log.Info("transfer", fmt.Sprintf("set menu tab %s", e.Id))
+
+ if err := importCheckResultAndApply(ctx, tx, menuTab.Set_tx(ctx, tx, i, e), e.Id, idMapSkipped); err != nil {
+ return err
+ }
}
// roles, refer to relations/attributes/menu
for _, e := range mod.Roles {
- run, err := importCheckRunAndSave(tx, firstRun, e.Id, idMapSkipped)
+ run, err := importCheckRunAndSave(ctx, tx, firstRun, e.Id, idMapSkipped)
if err != nil {
return err
}
@@ -421,14 +470,14 @@ func importModule_tx(tx pgx.Tx, mod types.Module, firstRun bool, lastRun bool,
}
log.Info("transfer", fmt.Sprintf("set role %s", e.Id))
- if err := importCheckResultAndApply(tx, role.Set_tx(tx, e), e.Id, idMapSkipped); err != nil {
+ if err := importCheckResultAndApply(ctx, tx, role.Set_tx(ctx, tx, e), e.Id, idMapSkipped); err != nil {
return err
}
}
// JS functions, refer to forms/fields/roles/pg_functions/js_functions (self reference)
for _, e := range mod.JsFunctions {
- run, err := importCheckRunAndSave(tx, firstRun, e.Id, idMapSkipped)
+ run, err := importCheckRunAndSave(ctx, tx, firstRun, e.Id, idMapSkipped)
if err != nil {
return err
}
@@ -437,22 +486,36 @@ func importModule_tx(tx pgx.Tx, mod types.Module, firstRun bool, lastRun bool,
}
log.Info("transfer", fmt.Sprintf("set JS function %s", e.Id))
- if err := importCheckResultAndApply(tx, jsFunction.Set_tx(tx,
- e.ModuleId, e.Id, e.FormId, e.Name, e.CodeArgs, e.CodeFunction,
- e.CodeReturns, e.Captions), e.Id, idMapSkipped); err != nil {
+ if err := importCheckResultAndApply(ctx, tx, jsFunction.Set_tx(ctx, tx, e), e.Id, idMapSkipped); err != nil {
+ return err
+ }
+ }
+
+ // client events
+ // refer to JS functions
+ for _, e := range mod.ClientEvents {
+ run, err := importCheckRunAndSave(ctx, tx, firstRun, e.Id, idMapSkipped)
+ if err != nil {
+ return err
+ }
+ if !run {
+ continue
+ }
+ log.Info("transfer", fmt.Sprintf("set client event %s", e.Id))
+ if err := importCheckResultAndApply(ctx, tx, clientEvent.Set_tx(ctx, tx, e), e.Id, idMapSkipped); err != nil {
return err
}
}
// presets, refer to relations/attributes/other presets
- // can fail because deletions happen after import and presets depent on the state of relations/attributes
+ // can fail because deletions happen after import and presets depend on the state of relations/attributes
// which might lose constraints (example: attribute with NOT NULL removed)
// unprotected presets are optional (can be deleted within instance)
- // because of this some preset referals might not work and are ignored
+ // because of this some preset referrals might not work and are ignored
for _, relation := range mod.Relations {
for _, e := range relation.Presets {
- run, err := importCheckRunAndSave(tx, firstRun, e.Id, idMapSkipped)
+ run, err := importCheckRunAndSave(ctx, tx, firstRun, e.Id, idMapSkipped)
if err != nil {
return err
}
@@ -466,15 +529,14 @@ func importModule_tx(tx pgx.Tx, mod types.Module, firstRun bool, lastRun bool,
// if preset itself is unprotected, we try until the last loop and then give up
if lastRun && !e.Protected {
log.Info("transfer", "import failed to resolve unprotected preset until last loop, it will be ignored")
- if err := importCheckResultAndApply(tx, nil, e.Id, idMapSkipped); err != nil {
+ if err := importCheckResultAndApply(ctx, tx, nil, e.Id, idMapSkipped); err != nil {
return err
}
continue
}
- if err := importCheckResultAndApply(tx, preset.Set_tx(tx,
- e.RelationId, e.Id, e.Name, e.Protected, e.Values),
- e.Id, idMapSkipped); err != nil {
+ if err := importCheckResultAndApply(ctx, tx, preset.Set_tx(ctx, tx, e.RelationId,
+ e.Id, e.Name, e.Protected, e.Values), e.Id, idMapSkipped); err != nil {
return err
}
@@ -485,7 +547,7 @@ func importModule_tx(tx pgx.Tx, mod types.Module, firstRun bool, lastRun bool,
// checks if this action needs to run and sets savepoint inside DB transaction if so
// returns true if action needs to run
-func importCheckRunAndSave(tx pgx.Tx, firstRun bool, entityId uuid.UUID,
+func importCheckRunAndSave(ctx context.Context, tx pgx.Tx, firstRun bool, entityId uuid.UUID,
idMapSkipped map[uuid.UUID]types.Void) (bool, error) {
_, skipped := idMapSkipped[entityId]
@@ -495,7 +557,7 @@ func importCheckRunAndSave(tx pgx.Tx, firstRun bool, entityId uuid.UUID,
return false, nil
}
- if _, err := tx.Exec(db.Ctx, `SAVEPOINT transfer_import`); err != nil {
+ if _, err := tx.Exec(ctx, `SAVEPOINT transfer_import`); err != nil {
return false, err
}
return true, nil
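
The import runs inside a single transaction; each entity write is bracketed by a savepoint, so a failure (typically a dependency not imported yet) only rolls back that one entity, which is then retried on a later loop. The generic shape of `importCheckRunAndSave`/`importCheckResultAndApply`, reduced to one sketch function:

```go
package sketch

import (
	"context"

	"github.com/jackc/pgx/v5"
)

// setWithSavepoint condenses the savepoint-per-entity pattern: skipped=true
// means the write failed and was rolled back to the savepoint, leaving the
// surrounding transaction intact for a later attempt.
func setWithSavepoint(ctx context.Context, tx pgx.Tx, set func() error) (skipped bool, err error) {
	if _, err := tx.Exec(ctx, `SAVEPOINT transfer_import`); err != nil {
		return false, err
	}
	if setErr := set(); setErr != nil {
		if _, err := tx.Exec(ctx, `ROLLBACK TO SAVEPOINT transfer_import`); err != nil {
			return false, err
		}
		return true, nil
	}
	_, err = tx.Exec(ctx, `RELEASE SAVEPOINT transfer_import`)
	return false, err
}
```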
@@ -503,11 +565,11 @@ func importCheckRunAndSave(tx pgx.Tx, firstRun bool, entityId uuid.UUID,
// checks if action was successful and releases/rollbacks savepoints accordingly
// stores entity ID in skip map, if unsuccessful
-func importCheckResultAndApply(tx pgx.Tx, resultErr error, entityId uuid.UUID,
+func importCheckResultAndApply(ctx context.Context, tx pgx.Tx, resultErr error, entityId uuid.UUID,
idMapSkipped map[uuid.UUID]types.Void) error {
if resultErr == nil {
- if _, err := tx.Exec(db.Ctx, `RELEASE SAVEPOINT transfer_import`); err != nil {
+ if _, err := tx.Exec(ctx, `RELEASE SAVEPOINT transfer_import`); err != nil {
return err
}
if _, exists := idMapSkipped[entityId]; exists {
@@ -519,14 +581,14 @@ func importCheckResultAndApply(tx pgx.Tx, resultErr error, entityId uuid.UUID,
// error case
log.Info("transfer", fmt.Sprintf("skipped entity on this run, error: %s", resultErr))
- if _, err := tx.Exec(db.Ctx, `ROLLBACK TO SAVEPOINT transfer_import`); err != nil {
+ if _, err := tx.Exec(ctx, `ROLLBACK TO SAVEPOINT transfer_import`); err != nil {
return err
}
idMapSkipped[entityId] = types.Void{}
return nil
}
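
The two helpers above carry the savepoint mechanics of the import: every entity is attempted inside its own `SAVEPOINT`, a failure is rolled back to that savepoint and the entity is recorded as skipped for a later pass, while a success releases the savepoint and keeps the surrounding transaction intact. A minimal sketch of that pattern with pgx, assuming an illustrative `step` callback and savepoint label (not names from the codebase):

```go
package example

import (
	"context"
	"fmt"

	"github.com/jackc/pgx/v5"
)

// tryStep runs one import step inside its own savepoint. On failure it rolls
// back to the savepoint and reports the step as skipped, so the surrounding
// transaction stays usable for the remaining entities.
func tryStep(ctx context.Context, tx pgx.Tx, step func() error) (skipped bool, err error) {
	if _, err := tx.Exec(ctx, `SAVEPOINT import_step`); err != nil {
		return false, err
	}
	if stepErr := step(); stepErr != nil {
		fmt.Printf("step failed, retry on a later pass: %v\n", stepErr)
		if _, err := tx.Exec(ctx, `ROLLBACK TO SAVEPOINT import_step`); err != nil {
			return false, err
		}
		return true, nil
	}
	if _, err := tx.Exec(ctx, `RELEASE SAVEPOINT import_step`); err != nil {
		return false, err
	}
	return false, nil
}
```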
-func parseModulesFromPaths(filePaths []string, moduleIdMapMeta map[uuid.UUID]importMeta) ([]types.Module, error) {
+func parseModulesFromPaths_tx(ctx context.Context, tx pgx.Tx, filePaths []string, moduleIdMapImportMeta map[uuid.UUID]importMeta) ([]types.Module, error) {
cache.Schema_mx.RLock()
defer cache.Schema_mx.RUnlock()
@@ -558,8 +620,8 @@ func parseModulesFromPaths(filePaths []string, moduleIdMapMeta map[uuid.UUID]imp
log.Info("transfer", fmt.Sprintf("import is validating module '%s' v%d",
fileData.Content.Module.Name, fileData.Content.Module.ReleaseBuild))
- // verify application compatability
- if err := verifyCompatibilityWithApp(moduleId, fileData.Content.Module.ReleaseBuildApp); err != nil {
+ // verify application compatibility
+ if err := verifyCompatibilityWithApp(fileData.Content.Module.ReleaseBuildApp); err != nil {
return modules, err
}
@@ -579,7 +641,7 @@ func parseModulesFromPaths(filePaths []string, moduleIdMapMeta map[uuid.UUID]imp
}
// check whether installed module hash changed at all
- hashedStrEx, err := module_option.GetHashById(moduleId)
+ hashedStrEx, err := module_meta.GetHash_tx(ctx, tx, moduleId)
if err != nil {
return modules, err
}
@@ -593,10 +655,10 @@ func parseModulesFromPaths(filePaths []string, moduleIdMapMeta map[uuid.UUID]imp
}
// check whether module was added previously (multiple import files used with similar modules)
- if _, exists := moduleIdMapMeta[moduleId]; exists {
- if moduleIdMapMeta[moduleId].module.ReleaseBuild >= fileData.Content.Module.ReleaseBuild {
+ if _, exists := moduleIdMapImportMeta[moduleId]; exists {
+ if moduleIdMapImportMeta[moduleId].module.ReleaseBuild >= fileData.Content.Module.ReleaseBuild {
log.Info("transfer", fmt.Sprintf("import of module '%s' not required, same or newer version (%d -> %d) to be added",
- fileData.Content.Module.Name, moduleIdMapMeta[moduleId].module.ReleaseBuild,
+ fileData.Content.Module.Name, moduleIdMapImportMeta[moduleId].module.ReleaseBuild,
fileData.Content.Module.ReleaseBuild))
continue
@@ -606,7 +668,7 @@ func parseModulesFromPaths(filePaths []string, moduleIdMapMeta map[uuid.UUID]imp
log.Info("transfer", fmt.Sprintf("import will install module '%s' v%d",
fileData.Content.Module.Name, fileData.Content.Module.ReleaseBuild))
- moduleIdMapMeta[moduleId] = importMeta{
+ moduleIdMapImportMeta[moduleId] = importMeta{
filePath: filePath,
hash: hashedStr,
isNew: !isModuleUpgrade,
@@ -614,42 +676,46 @@ func parseModulesFromPaths(filePaths []string, moduleIdMapMeta map[uuid.UUID]imp
}
}
- // add modules following optimized import order
+ // return modules in optimized import order
+	// add modules with the fewest dependencies first
moduleIdsAdded := make([]uuid.UUID, 0)
+ moduleIdsSort := make([]uuid.UUID, 0)
+ moduleNames := make([]string, 0)
- var addModule func(m types.Module)
- addModule = func(m types.Module) {
-
- if tools.UuidInSlice(m.Id, moduleIdsAdded) {
+	for id := range moduleIdMapImportMeta {
+ moduleIdsSort = append(moduleIdsSort, id)
+ }
+ sort.SliceStable(moduleIdsSort, func(i, j int) bool {
+ return len(moduleIdMapImportMeta[moduleIdsSort[i]].module.DependsOn) <
+ len(moduleIdMapImportMeta[moduleIdsSort[j]].module.DependsOn)
+ })
+
+ // finalize import order
+ var addModule func(id uuid.UUID)
+ addModule = func(id uuid.UUID) {
+ if slices.Contains(moduleIdsAdded, id) {
return
}
- // add itself before dependencies (avoids infinite loops from circular dependencies)
- modules = append(modules, m)
- moduleIdsAdded = append(moduleIdsAdded, m.Id)
-
- // add dependencies
- for _, dependId := range m.DependsOn {
+	// add ID before its dependencies to avoid infinite loops on circular references
+ moduleIdsAdded = append(moduleIdsAdded, id)
- if _, exists := moduleIdMapMeta[dependId]; !exists {
- // dependency was not included or is not needed
- continue
+ // dependencies are imported first
+ for _, dependId := range moduleIdMapImportMeta[id].module.DependsOn {
+ if _, exists := moduleIdMapImportMeta[dependId]; exists {
+ addModule(dependId)
}
- addModule(moduleIdMapMeta[dependId].module)
}
- }
- for _, meta := range moduleIdMapMeta {
- addModule(meta.module)
+ modules = append(modules, moduleIdMapImportMeta[id].module)
+ moduleNames = append(moduleNames, moduleIdMapImportMeta[id].module.Name)
}
-
- // log chosen installation order
- logNames := make([]string, len(modules))
- for i, m := range modules {
- logNames[i] = m.Name
+ for _, id := range moduleIdsSort {
+ addModule(id)
}
+
log.Info("transfer", fmt.Sprintf("import has decided on installation order: %s",
- strings.Join(logNames, ", ")))
+ strings.Join(moduleNames, ", ")))
return modules, nil
}
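
The reworked ordering above first sorts module IDs by how many dependencies they declare, then walks each ID depth-first so that dependencies land in the result before their dependents, registering every ID up front to survive circular references. A small standalone sketch of the same technique with toy data (not the real module types):

```go
package main

import (
	"fmt"
	"slices"
	"sort"
)

func main() {
	// toy dependency graph: module name -> modules it depends on
	deps := map[string][]string{
		"core":   {},
		"crm":    {"core"},
		"report": {"core", "crm"},
	}

	// start with the modules that declare the fewest dependencies
	names := make([]string, 0, len(deps))
	for name := range deps {
		names = append(names, name)
	}
	sort.SliceStable(names, func(i, j int) bool {
		return len(deps[names[i]]) < len(deps[names[j]])
	})

	// depth-first: register each name before walking its dependencies
	// (guards against circular references), append it to the result after them
	ordered := make([]string, 0, len(deps))
	added := make([]string, 0, len(deps))
	var add func(name string)
	add = func(name string) {
		if slices.Contains(added, name) {
			return
		}
		added = append(added, name)
		for _, dep := range deps[name] {
			if _, exists := deps[dep]; exists {
				add(dep)
			}
		}
		ordered = append(ordered, name)
	}
	for _, name := range names {
		add(name)
	}
	fmt.Println("import order:", ordered) // [core crm report]
}
```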
diff --git a/types/types.go b/types/types.go
index b8be990c..fdf07bee 100644
--- a/types/types.go
+++ b/types/types.go
@@ -1,3 +1,10 @@
package types
type Void struct{}
+
+type SystemMsg struct {
+ Date0 uint64 `json:"date0"`
+ Date1 uint64 `json:"date1"`
+ Maintenance bool `json:"maintenance"`
+ Text string `json:"text"`
+}
diff --git a/types/types_admin.go b/types/types_admin.go
index 2afb933c..f83be808 100644
--- a/types/types_admin.go
+++ b/types/types_admin.go
@@ -24,16 +24,19 @@ type Log struct {
}
type LoginAdmin struct {
- Id int64 `json:"id"`
- LdapId pgtype.Int4 `json:"ldapId"`
- LdapKey pgtype.Text `json:"ldapKey"`
- Name string `json:"name"`
- Active bool `json:"active"`
- Admin bool `json:"admin"`
- NoAuth bool `json:"noAuth"`
- LanguageCode string `json:"languageCode"`
- Records []LoginAdminRecord `json:"records"`
- RoleIds []uuid.UUID `json:"roleIds"`
+ Id int64 `json:"id"`
+ LdapId pgtype.Int4 `json:"ldapId"`
+ LdapKey pgtype.Text `json:"ldapKey"`
+ Name string `json:"name"`
+ Active bool `json:"active"`
+ Admin bool `json:"admin"`
+ Meta LoginMeta `json:"meta"`
+ NoAuth bool `json:"noAuth"`
+ LanguageCode string `json:"languageCode"`
+ Limited bool `json:"limited"`
+ TokenExpiryHours pgtype.Int4 `json:"tokenExpiryHours"`
+ Records []LoginAdminRecord `json:"records"`
+ RoleIds []uuid.UUID `json:"roleIds"`
}
type LoginAdminRecord struct {
Id pgtype.Int8 `json:"id"` // record ID
@@ -55,27 +58,38 @@ type LoginTemplateAdmin struct {
}
type Ldap struct {
- Id int32 `json:"id"`
- LoginTemplateId pgtype.Int8 `json:"loginTemplateId"` // template for new logins (applies login settings)
- Name string `json:"name"`
- Host string `json:"host"`
- Port int `json:"port"`
- BindUserDn string `json:"bindUserDn"` // DN of bind user, example: 'CN=readonly,OU=User,DC=test,DC=local'
- BindUserPw string `json:"bindUserPw"` // password of bind user in clear text
- SearchClass string `json:"searchClass"` // object class to filter to, example: '(&(objectClass=user))'
- SearchDn string `json:"searchDn"` // root search DN, example: 'OU=User,DC=test,DC=local'
- KeyAttribute string `json:"keyAttribute"` // name of attribute used as key, example: 'objectGUID'
- LoginAttribute string `json:"loginAttribute"` // name of attribute used as login, example: 'sAMAccountName'
- MemberAttribute string `json:"memberAttribute"` // name of attribute used as membership, example: 'memberOf'
- AssignRoles bool `json:"assignRoles"` // assign roles from group membership (see member attribute)
- MsAdExt bool `json:"msAdExt"` // Microsoft AD extensions (nested group memberships, user account control)
- Starttls bool `json:"starttls"` // upgrade unencrypted LDAP connection with TLS (STARTTLS)
- Tls bool `json:"tls"` // connect to LDAP via SSL/TLS (LDAPS)
- TlsVerify bool `json:"tlsVerify"` // verify TLS connection, can be used to allow non-trusted certificates
- Roles []LdapRole `json:"roles"`
+ Id int32 `json:"id"`
+ LoginTemplateId pgtype.Int8 `json:"loginTemplateId"` // template for new logins (applies login settings)
+ Name string `json:"name"`
+ Host string `json:"host"`
+ Port int `json:"port"`
+ BindUserDn string `json:"bindUserDn"` // DN of bind user, example: 'CN=readonly,OU=User,DC=test,DC=local'
+ BindUserPw string `json:"bindUserPw"` // password of bind user in clear text
+ SearchClass string `json:"searchClass"` // object class to filter to, example: '(&(objectClass=user))'
+ SearchDn string `json:"searchDn"` // root search DN, example: 'OU=User,DC=test,DC=local'
+ KeyAttribute string `json:"keyAttribute"` // name of attribute used as key, example: 'objectGUID'
+ LoginAttribute string `json:"loginAttribute"` // name of attribute used as login, example: 'sAMAccountName'
+ MemberAttribute string `json:"memberAttribute"` // name of attribute used as membership, example: 'memberOf'
+ LoginMetaAttributes LoginMeta `json:"loginMetaAttributes"` // names of attributes used for login meta data
+ AssignRoles bool `json:"assignRoles"` // assign roles from group membership (see member attribute)
+ MsAdExt bool `json:"msAdExt"` // Microsoft AD extensions (nested group memberships, user account control)
+ Starttls bool `json:"starttls"` // upgrade unencrypted LDAP connection with TLS (STARTTLS)
+ Tls bool `json:"tls"` // connect to LDAP via SSL/TLS (LDAPS)
+ TlsVerify bool `json:"tlsVerify"` // verify TLS connection, can be used to allow non-trusted certificates
+ Roles []LdapRole `json:"roles"`
}
type LdapRole struct {
LdapId int32 `json:"ldapId"`
RoleId uuid.UUID `json:"roleId"`
GroupDn string `json:"groupDn"`
}
+type OauthClient struct {
+ Id int32 `json:"id"`
+ Name string `json:"name"`
+ ClientId string `json:"clientId"`
+ ClientSecret string `json:"clientSecret"`
+ DateExpiry int64 `json:"dateExpiry"`
+ Scopes []string `json:"scopes"`
+ Tenant string `json:"tenant"`
+ TokenUrl string `json:"tokenUrl"`
+}
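
The new OauthClient type, together with MailAccount.AuthMethod/OauthClientId, enables XOAUTH2 authentication for mail accounts (e.g. O365). A hedged sketch of turning such an entry into an access token via the standard client-credentials flow; the parameter names mirror the OauthClient fields, but whether the platform uses this exact library is an assumption:

```go
package example

import (
	"context"

	"golang.org/x/oauth2/clientcredentials"
)

// fetchXoauth2Token requests an access token for XOAUTH2 SMTP/IMAP
// authentication using the OAuth2 client-credentials grant.
func fetchXoauth2Token(ctx context.Context, clientId, clientSecret, tokenUrl string, scopes []string) (string, error) {
	conf := clientcredentials.Config{
		ClientID:     clientId,
		ClientSecret: clientSecret,
		TokenURL:     tokenUrl, // for O365 this typically embeds the tenant ID
		Scopes:       scopes,
	}
	tok, err := conf.Token(ctx)
	if err != nil {
		return "", err
	}
	return tok.AccessToken, nil
}
```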
diff --git a/types/types_caption.go b/types/types_caption.go
new file mode 100644
index 00000000..902ab01b
--- /dev/null
+++ b/types/types_caption.go
@@ -0,0 +1,23 @@
+package types
+
+import "github.com/gofrs/uuid"
+
+type CaptionMapsAll struct {
+ ArticleIdMap map[uuid.UUID]CaptionMap `json:"articleIdMap"`
+ AttributeIdMap map[uuid.UUID]CaptionMap `json:"attributeIdMap"`
+ ClientEventIdMap map[uuid.UUID]CaptionMap `json:"clientEventIdMap"`
+ ColumnIdMap map[uuid.UUID]CaptionMap `json:"columnIdMap"`
+ FieldIdMap map[uuid.UUID]CaptionMap `json:"fieldIdMap"`
+ FormIdMap map[uuid.UUID]CaptionMap `json:"formIdMap"`
+ FormActionIdMap map[uuid.UUID]CaptionMap `json:"formActionIdMap"`
+ JsFunctionIdMap map[uuid.UUID]CaptionMap `json:"jsFunctionIdMap"`
+ LoginFormIdMap map[uuid.UUID]CaptionMap `json:"loginFormIdMap"`
+ MenuIdMap map[uuid.UUID]CaptionMap `json:"menuIdMap"`
+ MenuTabIdMap map[uuid.UUID]CaptionMap `json:"menuTabIdMap"`
+ ModuleIdMap map[uuid.UUID]CaptionMap `json:"moduleIdMap"`
+ PgFunctionIdMap map[uuid.UUID]CaptionMap `json:"pgFunctionIdMap"`
+ QueryChoiceIdMap map[uuid.UUID]CaptionMap `json:"queryChoiceIdMap"`
+ RoleIdMap map[uuid.UUID]CaptionMap `json:"roleIdMap"`
+ TabIdMap map[uuid.UUID]CaptionMap `json:"tabIdMap"`
+ WidgetIdMap map[uuid.UUID]CaptionMap `json:"widgetIdMap"`
+}
diff --git a/types/types_cluster.go b/types/types_cluster.go
index b29b3d15..e3b3f633 100644
--- a/types/types_cluster.go
+++ b/types/types_cluster.go
@@ -1,81 +1,71 @@
package types
-import "github.com/gofrs/uuid"
+import (
+ "github.com/gofrs/uuid"
+)
-type ClusterEvent struct {
- Content string
- Payload []byte
-}
-type ClusterEventCollectionUpdated struct {
- CollectionId uuid.UUID `json:"collectionId"`
- LoginIds []int64 `json:"loginIds"`
-}
-type ClusterEventConfigChanged struct {
- SwitchToMaintenance bool `json:"switchToMaintenance"`
-}
-type ClusterEventLogin struct {
- LoginId int64 `json:"loginId"`
-}
-type ClusterEventMasterAssigned struct {
- State bool `json:"state"`
-}
-type ClusterEventSchemaChanged struct {
- ModuleIdsUpdateOnly []uuid.UUID `json:"moduleIdsUpdateOnly"`
- NewVersion bool `json:"newVersion"`
-}
-type ClusterEventTaskTriggered struct {
- PgFunctionId uuid.UUID `json:"pgFunctionId"`
- PgFunctionScheduleId uuid.UUID `json:"pgFunctionScheduleId"`
- TaskName string `json:"taskName"`
+// cluster node
+type ClusterNode struct {
+ ClusterMaster bool `json:"clusterMaster"`
+ DateCheckIn int64 `json:"dateCheckIn"`
+ DateStarted int64 `json:"dateStarted"`
+ Hostname string `json:"hostname"`
+ Id uuid.UUID `json:"id"`
+ Name string `json:"name"`
+ Running bool `json:"running"`
+ StatMemory int64 `json:"statMemory"`
}
+
+// cluster event payloads
type ClusterEventFilesCopied struct {
- LoginId int64 `json:"loginId"`
AttributeId uuid.UUID `json:"attributeId"`
FileIds []uuid.UUID `json:"fileIds"`
RecordId int64 `json:"recordId"`
}
type ClusterEventFileRequested struct {
- LoginId int64 `json:"loginId"`
AttributeId uuid.UUID `json:"attributeId"`
ChooseApp bool `json:"chooseApp"`
FileId uuid.UUID `json:"fileId"`
FileHash string `json:"fileHash"`
FileName string `json:"fileName"`
}
-
-type ClusterNode struct {
- ClusterMaster bool `json:"clusterMaster"`
- DateCheckIn int64 `json:"dateCheckIn"`
- DateStarted int64 `json:"dateStarted"`
- Hostname string `json:"hostname"`
- Id uuid.UUID `json:"id"`
- Name string `json:"name"`
- Running bool `json:"running"`
- StatSessions int64 `json:"statSessions"`
- StatMemory int64 `json:"statMemory"`
+type ClusterEventJsFunctionCalled struct {
+ ModuleId uuid.UUID `json:"moduleId"` // module ID that JS function belongs to, relevant for filtering to direct app access
+ JsFunctionId uuid.UUID `json:"jsFunctionId"`
+ Arguments []interface{} `json:"arguments"`
}
-// a server side event, affecting one or many websocket clients (by associated login ID)
-type ClusterWebsocketClientEvent struct {
- LoginId int64 // affected login (0=all logins)
+// cluster event payloads used by instance functions
+type ClusterEventCollectionUpdated struct {
+ // filled by instance.update_collection()
+ CollectionId uuid.UUID `json:"collectionId"`
+ LoginIds []int64 `json:"loginIds"`
+}
+type ClusterEventMasterAssigned struct {
+ // filled by instance_cluster.master_role_request()
+ State bool `json:"state"`
+}
+type ClusterEventTaskTriggered struct {
+ // filled by instance_cluster.run_task()
+ PgFunctionId uuid.UUID `json:"pgFunctionId"`
+ PgFunctionScheduleId uuid.UUID `json:"pgFunctionScheduleId"`
+ TaskName string `json:"taskName"`
+}
- CollectionChanged uuid.UUID // inform client: collection has changed (should update it)
- ConfigChanged bool // system config has changed (only relevant for admins)
- Kick bool // kick login (usually because it was disabled)
- KickNonAdmin bool // kick login if not admin (usually because maintenance mode was enabled)
- Renew bool // renew login (permissions changed)
- SchemaLoading bool // inform client: schema is loading
- SchemaTimestamp int64 // inform client: schema has a new timestamp (new version)
+// cluster event client target filter
+type ClusterEventTarget struct {
+ // strict filters, target must match if filter is defined
+ Address string `json:"address"` // address used to connect via websocket, "" = undefined
+ Device WebsocketClientDevice `json:"device"` // device to affect ("browser", "fatClient"), 0 = undefined
+ LoginId int64 `json:"loginId"` // login ID to affect, 0 = undefined
- // file open request for fat client
- FileRequestedAttributeId uuid.UUID
- FileRequestedChooseApp bool
- FileRequestedFileId uuid.UUID
- FileRequestedFileHash string
- FileRequestedFileName string
+ // preferred filters, prioritize target if it matches filter, otherwise send it to others
+ PwaModuleIdPreferred uuid.UUID `json:"pwaModuleIdPreferred"` // client connecting via PWA sub host (direct app access), nil UUID = undefined
+}
- // file copy request
- FilesCopiedAttributeId uuid.UUID
- FilesCopiedFileIds []uuid.UUID
- FilesCopiedRecordId int64
+// cluster event to be processed by nodes and, in most cases, to be distributed to clients of cluster nodes
+type ClusterEvent struct {
+ Content string `json:"content"` // collectionChanged, configChanged, kick, kickNoAdmin, renew, schemaLoading, schemaLoaded, ...
+ Payload interface{} `json:"payload"` // content dependent payload
+ Target ClusterEventTarget `json:"target"` // target filter, to which clients this event is to be sent
}
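
ClusterEventTarget separates strict filters (a defined value must match the client) from preferred filters (matching clients are prioritized, others still receive the event). A hedged sketch of evaluating the strict part against a connected client; the client struct and helper are illustrative assumptions, not the platform's dispatch code:

```go
package example

// clientInfo is a hypothetical stand-in for a connected websocket client.
type clientInfo struct {
	Address string
	LoginId int64
}

// matchesStrict applies the strict filters: a defined target value must match
// the client, an undefined value ("" or 0) matches everyone. The preferred
// PWA module filter is omitted here, as it needs knowledge of all connected
// clients to decide whether a prioritized target exists.
func matchesStrict(c clientInfo, targetAddress string, targetLoginId int64) bool {
	if targetAddress != "" && c.Address != targetAddress {
		return false
	}
	if targetLoginId != 0 && c.LoginId != targetLoginId {
		return false
	}
	return true
}
```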
diff --git a/types/types_config.go b/types/types_config.go
index abab2e06..843db014 100644
--- a/types/types_config.go
+++ b/types/types_config.go
@@ -1,5 +1,11 @@
package types
+type Version struct {
+ Build int // build number of version (1023)
+	Cut   string // major+minor version (e.g. 1.2); the DB version is kept at the same major+minor as the app
+ Full string // full version (1.2.0.1023), syntax: major.minor.patch.build
+}
+
type FileType struct {
Cluster struct {
NodeId string `json:"nodeId"`
@@ -7,6 +13,10 @@ type FileType struct {
Db FileTypeDb `json:"db"`
+	// mirror mode, e.g. the system mirrors another (likely productive) instance
+ // disables write connectors (currently: email retrieve/send, REST call) & backups
+ Mirror bool `json:"mirror"`
+
Paths struct {
Certificates string `json:"certificates"`
EmbeddedDbBin string `json:"embeddedDbBin"`
@@ -19,10 +29,11 @@ type FileType struct {
Portable bool `json:"portable"`
Web struct {
- Cert string `json:"cert"`
- Key string `json:"key"`
- Listen string `json:"listen"`
- Port int `json:"port"`
+ Cert string `json:"cert"`
+ Key string `json:"key"`
+ Listen string `json:"listen"`
+ Port int `json:"port"`
+ TlsMinVersion string `json:"tlsMinVersion"`
} `json:"web"`
}
@@ -39,4 +50,8 @@ type FileTypeDb struct {
// SSL/TLS settings
Ssl bool `json:"ssl"`
SslSkipVerify bool `json:"sslSkipVerify"`
+
+ // connection settings
+ ConnsMax int32 `json:"connsMax"` // ignore if 0
+ ConnsMin int32 `json:"connsMin"` // ignore if 0
}
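
The Version struct added at the top of this file documents the full version syntax (major.minor.patch.build, e.g. 1.2.0.1023) and that Cut is just major+minor. A quick illustrative sketch of deriving Cut and Build from a full version string; the parsing helper is an assumption, not code from the platform:

```go
package example

import (
	"fmt"
	"strconv"
	"strings"
)

// splitVersion derives the major+minor "cut" and the build number from a full
// version string like "1.2.0.1023" (major.minor.patch.build).
func splitVersion(full string) (cut string, build int, err error) {
	parts := strings.Split(full, ".")
	if len(parts) != 4 {
		return "", 0, fmt.Errorf("unexpected version syntax: %s", full)
	}
	cut = strings.Join(parts[:2], ".")
	build, err = strconv.Atoi(parts[3])
	return cut, build, err
}
```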
diff --git a/types/types_data.go b/types/types_data.go
index aab60366..ca3b37d8 100644
--- a/types/types_data.go
+++ b/types/types_data.go
@@ -15,9 +15,11 @@ import (
// to build complex filters, multiple clauses can be connected by AND|OR
// if attributes are used, the index of which relation the attribute belongs to, is required
// if sub queries are used, the nesting level needs to be specified (0 = main query, 1 = 1st sub query)
-// this is required as a sub query from the same relation might refer to itself or to a parent query with similar relations/attributes
+//
+// this is required as a sub query from the same relation might refer to itself or to a parent query with similar relations/attributes
type DataGetFilter struct {
Connector string `json:"connector"` // clause connector (AND|OR), first clause is always AND
+ Index int `json:"index"` // relation index to apply filter to (0 = filter query, 1+ = filter relation join)
Operator string `json:"operator"` // operator (=, <, >, ...)
Side0 DataGetFilterSide `json:"side0"` // comparison: left side
Side1 DataGetFilterSide `json:"side1"` // comparison: right side
@@ -27,6 +29,7 @@ type DataGetFilterSide struct {
AttributeIndex int `json:"attributeIndex"` // attribute relation index
AttributeNested int `json:"attributeNested"` // attribute nesting level (0 = main query, 1 = 1st sub query)
Brackets int `json:"brackets"` // brackets before (side0) or after (side1)
+ FtsDict pgtype.Text `json:"ftsDict"` // dictionary for full text search, execute tsquery on value and convert attribute side to tsvector if set
Query DataGet `json:"query"` // sub query, optional
QueryAggregator pgtype.Text `json:"queryAggregator"` // sub query aggregator, optional
Value interface{} `json:"value"` // fixed value, optional, filled by frontend with value of field/login ID/record/...
@@ -34,7 +37,9 @@ type DataGetFilterSide struct {
// a JOIN connects multiple relations via a relationship attribute
// the join index is a unique number for each relation
-// this is required as the same relation can be joined multiple times or even be self-joined
+//
+// this is required as the same relation can be joined multiple times or even be self-joined
+//
// index from is used to ascertain the join chain until the first relation (usually index=0)
type DataGetJoin struct {
AttributeId uuid.UUID `json:"attributeId"` // relationship attribute ID
@@ -83,6 +88,7 @@ type DataGet struct {
Limit int `json:"limit"` // result limit
Offset int `json:"offset"` // result offset
GetPerm bool `json:"getPerm"` // get result permissions (SET/DEL) from relation policy, GET is ignored as results are filtered by it already
+ SearchDicts []string `json:"searchDicts"` // list of fulltext search dictionaries (english, german, ...)
}
type DataGetResult struct {
IndexRecordIds map[int]interface{} `json:"indexRecordIds"` // IDs of relation records, key: relation index
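
The new FtsDict side option and the SearchDicts list extend filters with full-text search; together with the "@@" operator added to QueryFilterOperators (see types_schema_query.go below), a filter can compare a text attribute against a search phrase. A hedged sketch of building such a filter from the types above; the import path, attribute index and dictionary placement follow the field comments and are assumptions, and struct fields not shown in this diff are left at their zero values:

```go
package example

import (
	"github.com/jackc/pgx/v5/pgtype"

	"r3/types" // assumed import path for the types shown above
)

// ftsFilter builds a filter that matches records whose text attribute on
// relation index 0 satisfies a full-text search phrase.
func ftsFilter(phrase string) types.DataGetFilter {
	return types.DataGetFilter{
		Connector: "AND",
		Index:     0,    // apply to the base relation of the query
		Operator:  "@@", // tsvector @@ tsquery
		Side0: types.DataGetFilterSide{
			AttributeIndex: 0, // attribute side, converted to tsvector
		},
		Side1: types.DataGetFilterSide{
			Value:   phrase, // value side, executed as tsquery
			FtsDict: pgtype.Text{String: "english", Valid: true},
		},
	}
}
```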
diff --git a/types/types_license.go b/types/types_license.go
index 2e41fc82..4116d368 100644
--- a/types/types_license.go
+++ b/types/types_license.go
@@ -1,10 +1,12 @@
package types
type License struct {
- LicenseId string `json:"licenseId"`
- ClientId string `json:"clientId"`
- RegisteredFor string `json:"registeredFor"`
- ValidUntil int64 `json:"validUntil"`
+ LicenseId string `json:"licenseId"`
+ ClientId string `json:"clientId"`
+ Extensions []string `json:"extensions"`
+ LoginCount int64 `json:"loginCount"`
+ RegisteredFor string `json:"registeredFor"`
+ ValidUntil int64 `json:"validUntil"`
}
type LicenseFile struct {
diff --git a/types/types_login.go b/types/types_login.go
index e4822d23..733f7eb1 100644
--- a/types/types_login.go
+++ b/types/types_login.go
@@ -1,18 +1,57 @@
package types
-import "github.com/gofrs/uuid"
+import (
+ "github.com/gofrs/uuid"
+ "github.com/jackc/pgx/v5/pgtype"
+)
type Login struct {
Id int64 `json:"id"`
Name string `json:"name"`
}
type LoginAccess struct {
- RoleIds []uuid.UUID `json:"roleIds"` // all assigned roles (incl. inherited)
- Api map[uuid.UUID]int `json:"api"` // effective access to specific API
- Attribute map[uuid.UUID]int `json:"attribute"` // effective access to specific attributes
- Collection map[uuid.UUID]int `json:"collection"` // effective access to specific collection
- Menu map[uuid.UUID]int `json:"menu"` // effective access to specific menus
- Relation map[uuid.UUID]int `json:"relation"` // effective access to specific relations
+ RoleIds []uuid.UUID `json:"roleIds"` // all assigned roles (incl. inherited)
+ Api map[uuid.UUID]int `json:"api"` // effective access to specific API
+ Attribute map[uuid.UUID]int `json:"attribute"` // effective access to specific attributes
+ ClientEvent map[uuid.UUID]int `json:"clientEvent"` // effective access to specific client events
+ Collection map[uuid.UUID]int `json:"collection"` // effective access to specific collection
+ Menu map[uuid.UUID]int `json:"menu"` // effective access to specific menus
+ Relation map[uuid.UUID]int `json:"relation"` // effective access to specific relations
+ Widget map[uuid.UUID]int `json:"widget"` // effective access to specific widgets
+}
+type LoginClientEvent struct {
+ // login client events exist if a login has enabled a hotkey client event
+ HotkeyChar string `json:"hotkeyChar"`
+ HotkeyModifier1 string `json:"hotkeyModifier1"`
+ HotkeyModifier2 pgtype.Text `json:"hotkeyModifier2"`
+}
+type LoginFavorite struct {
+ Id uuid.UUID `json:"id"`
+ FormId uuid.UUID `json:"formId"` // ID of form to show
+ RecordId pgtype.Int8 `json:"recordId"` // ID of record to open, NULL if no record to open
+ Title pgtype.Text `json:"title"` // user defined title of favorite, empty if not set
+}
+type LoginMeta struct {
+ Department string `json:"department"`
+ Email string `json:"email"`
+ Location string `json:"location"`
+ Notes string `json:"notes"`
+ Organization string `json:"organization"`
+ PhoneFax string `json:"phoneFax"`
+ PhoneLandline string `json:"phoneLandline"`
+ PhoneMobile string `json:"phoneMobile"`
+ NameDisplay string `json:"nameDisplay"`
+ NameFore string `json:"nameFore"`
+ NameSur string `json:"nameSur"`
+}
+type LoginMfaToken struct {
+ Id int64 `json:"id"`
+ Name string `json:"name"`
+}
+type LoginOptions struct {
+ FavoriteId pgtype.UUID `json:"favoriteId"` // NOT NULL if options are valid in context of a favorite form
+ FieldId uuid.UUID `json:"fieldId"`
+ Options string `json:"options"`
}
type LoginPublicKey struct {
LoginId int64 `json:"loginId"` // ID of login
@@ -30,7 +69,12 @@ type LoginTokenFixed struct {
Token string `json:"token"`
DateCreate int64 `json:"dateCreate"`
}
-type LoginMfaToken struct {
- Id int64 `json:"id"`
- Name string `json:"name"`
+type LoginWidgetGroupItem struct {
+ WidgetId pgtype.UUID `json:"widgetId"` // ID of a module widget, empty if system widget is used
+ ModuleId pgtype.UUID `json:"moduleId"` // ID of a module, if relevant for widget (systemModuleMenu)
+ Content string `json:"content"` // content of widget (moduleWidget, systemModuleMenu)
+}
+type LoginWidgetGroup struct {
+ Title string `json:"title"`
+ Items []LoginWidgetGroupItem `json:"items"`
}
diff --git a/types/types_login_setting.go b/types/types_login_setting.go
new file mode 100644
index 00000000..861ad902
--- /dev/null
+++ b/types/types_login_setting.go
@@ -0,0 +1,34 @@
+package types
+
+import "github.com/jackc/pgx/v5/pgtype"
+
+type Settings struct {
+ BoolAsIcon bool `json:"boolAsIcon"`
+ BordersSquared bool `json:"bordersSquared"`
+ ColorClassicMode bool `json:"colorClassicMode"`
+ ColorHeader pgtype.Text `json:"colorHeader"`
+ ColorHeaderSingle bool `json:"colorHeaderSingle"`
+ ColorMenu pgtype.Text `json:"colorMenu"`
+ DateFormat string `json:"dateFormat"`
+ Dark bool `json:"dark"`
+ FontFamily string `json:"fontFamily"`
+ FontSize int `json:"fontSize"`
+ FormActionsAlign string `json:"formActionsAlign"`
+ HeaderCaptions bool `json:"headerCaptions"`
+ HeaderModules bool `json:"headerModules"`
+ HintUpdateVersion int `json:"hintUpdateVersion"`
+ LanguageCode string `json:"languageCode"`
+ ListColored bool `json:"listColored"`
+ ListSpaced bool `json:"listSpaced"`
+ MobileScrollForm bool `json:"mobileScrollForm"`
+ NumberSepDecimal string `json:"numberSepDecimal"`
+ NumberSepThousand string `json:"numberSepThousand"`
+ PageLimit int `json:"pageLimit"`
+ Pattern pgtype.Text `json:"pattern"`
+ SearchDictionaries []string `json:"searchDictionaries"`
+ ShadowsInputs bool `json:"shadowsInputs"`
+ Spacing int `json:"spacing"`
+ SundayFirstDow bool `json:"sundayFirstDow"`
+ TabRemember bool `json:"tabRemember"`
+ WarnUnsaved bool `json:"warnUnsaved"`
+}
diff --git a/types/types_mail.go b/types/types_mail.go
index 201b8ca7..cd23323e 100644
--- a/types/types_mail.go
+++ b/types/types_mail.go
@@ -24,15 +24,18 @@ type Mail struct {
AttributeId pgtype.UUID `json:"attributeId"` // file attribute to update/get attachment of/from
}
type MailAccount struct {
- Id int32 `json:"id"`
- Name string `json:"name"`
- Mode string `json:"mode"`
- Username string `json:"username"`
- Password string `json:"password"`
- StartTls bool `json:"startTls"`
- SendAs string `json:"sendAs"`
- HostName string `json:"hostName"`
- HostPort int64 `json:"hostPort"`
+ Id int32 `json:"id"`
+ Name string `json:"name"`
+ Mode string `json:"mode"` // smtp/imap
+ AuthMethod string `json:"authMethod"` // plain/login/XOAUTH2 (login is used in O365 legacy SMTP authentication)
+ Username string `json:"username"`
+ Password string `json:"password"`
+ StartTls bool `json:"startTls"`
+ SendAs string `json:"sendAs"`
+ HostName string `json:"hostName"`
+ HostPort int64 `json:"hostPort"`
+	OauthClientId pgtype.Int4 `json:"oauthClientId"` // OAuth client to use if auth method XOAUTH2 is set
+ Comment pgtype.Text `json:"comment"`
}
type MailFile struct {
Id uuid.UUID `json:"id"`
@@ -41,3 +44,14 @@ type MailFile struct {
Name string `json:"name"`
Size int64 `json:"size"`
}
+type MailTraffic struct {
+ FromList string `json:"fromList"`
+ ToList string `json:"toList"`
+ CcList string `json:"ccList"`
+ BccList string `json:"bccList"`
+ Subject string `json:"subject"`
+ Date int64 `json:"date"`
+ Files []string `json:"files"`
+ Outgoing bool `json:"outgoing"`
+ AccountId pgtype.Int4 `json:"accountId"`
+}
diff --git a/types/types_moduleMeta.go b/types/types_moduleMeta.go
new file mode 100644
index 00000000..360f6f9f
--- /dev/null
+++ b/types/types_moduleMeta.go
@@ -0,0 +1,12 @@
+package types
+
+import "github.com/gofrs/uuid"
+
+type ModuleMeta struct {
+ Id uuid.UUID `json:"id"`
+ Hidden bool `json:"hidden"`
+ Owner bool `json:"owner"`
+ Position int `json:"position"`
+ DateChange int64 `json:"dateChange"`
+ LanguagesCustom []string `json:"languagesCustom"`
+}
diff --git a/types/types_moduleOption.go b/types/types_moduleOption.go
deleted file mode 100644
index 1a350671..00000000
--- a/types/types_moduleOption.go
+++ /dev/null
@@ -1,10 +0,0 @@
-package types
-
-import "github.com/gofrs/uuid"
-
-type ModuleOption struct {
- Id uuid.UUID `json:"id"`
- Hidden bool `json:"hidden"`
- Owner bool `json:"owner"`
- Position int `json:"position"`
-}
diff --git a/types/types_repo.go b/types/types_repo.go
index 685c82b5..8d8db835 100644
--- a/types/types_repo.go
+++ b/types/types_repo.go
@@ -6,16 +6,21 @@ import (
)
type RepoModule struct {
- ModuleId uuid.UUID `json:"moduleId"`
- FileId uuid.UUID `json:"fileId"`
- Name string `json:"name"`
- ChangeLog pgtype.Text `json:"changeLog"`
- Author string `json:"author"`
- InStore bool `json:"inStore"`
- ReleaseBuild int `json:"releaseBuild"` // module version
- ReleaseBuildApp int `json:"releaseBuildApp"` // platform version
- ReleaseDate int `json:"releaseDate"` // module release date
- LanguageCodeMeta map[string]RepoModuleMeta `json:"languageCodeMeta"`
+ // module meta data
+ ModuleId uuid.UUID `json:"moduleId"`
+ Name string `json:"name"`
+ ChangeLog pgtype.Text `json:"changeLog"`
+ Author string `json:"author"`
+ InStore bool `json:"inStore"`
+
+ // meta data of latest module release
+ FileId uuid.UUID `json:"fileId"`
+ ReleaseBuild int `json:"releaseBuild"` // module version
+ ReleaseBuildApp int `json:"releaseBuildApp"` // platform version
+ ReleaseDate int64 `json:"releaseDate"`
+
+ // translated meta data
+ LanguageCodeMeta map[string]RepoModuleMeta `json:"languageCodeMeta"` // key = language code (en_us, de_de, ...)
}
type RepoModuleMeta struct {
diff --git a/types/types_schema.go b/types/types_schema.go
index 4bab0c3a..0e866bba 100644
--- a/types/types_schema.go
+++ b/types/types_schema.go
@@ -8,33 +8,46 @@ import (
)
type Module struct {
- Id uuid.UUID `json:"id"`
- ParentId pgtype.UUID `json:"parentId"` // module parent ID
- FormId pgtype.UUID `json:"formId"` // default start form
- IconId pgtype.UUID `json:"iconId"` // module icon in header/menu
- Name string `json:"name"` // name of module, is used for DB schema
- Color1 string `json:"color1"` // primary module color (used for header)
- Position int `json:"position"` // position of module in nav. contexts (home, header)
- LanguageMain string `json:"languageMain"` // language code of main language (for fallback)
- ReleaseBuild int `json:"releaseBuild"` // build of this module, incremented with each release
- ReleaseBuildApp int `json:"releaseBuildApp"` // build of app at last release
- ReleaseDate int64 `json:"releaseDate"` // date of last release
- DependsOn []uuid.UUID `json:"dependsOn"` // modules that this module is dependent on
- StartForms []ModuleStartForm `json:"startForms"` // start forms, assigned via role membership
- Languages []string `json:"languages"` // language codes that this module supports
- Relations []Relation `json:"relations"`
- Forms []Form `json:"forms"`
- Menus []Menu `json:"menus"`
- Icons []Icon `json:"icons"`
- Roles []Role `json:"roles"`
- Articles []Article `json:"articles"`
- LoginForms []LoginForm `json:"loginForms"`
- PgFunctions []PgFunction `json:"pgFunctions"`
- JsFunctions []JsFunction `json:"jsFunctions"`
- Collections []Collection `json:"collections"`
- Apis []Api `json:"apis"`
- ArticleIdsHelp []uuid.UUID `json:"articleIdsHelp"` // IDs of articles for primary module help, in order
- Captions CaptionMap `json:"captions"`
+ Id uuid.UUID `json:"id"`
+ ParentId pgtype.UUID `json:"parentId"` // module parent ID
+ FormId pgtype.UUID `json:"formId"` // default start form
+ IconId pgtype.UUID `json:"iconId"` // module icon in header/menu
+ IconIdPwa1 pgtype.UUID `json:"iconIdPwa1"` // PWA icon, 192x192
+ IconIdPwa2 pgtype.UUID `json:"iconIdPwa2"` // PWA icon, 512x512
+ JsFunctionIdOnLogin pgtype.UUID `json:"jsFunctionIdOnLogin"` // frontend function called when login happens in frontend
+ PgFunctionIdLoginSync pgtype.UUID `json:"pgFunctionIdLoginSync"` // backend function called when login meta data changes
+ Name string `json:"name"` // name of module, is used for DB schema
+ NamePwa pgtype.Text `json:"namePwa"` // name of module shown for PWA
+ NamePwaShort pgtype.Text `json:"namePwaShort"` // name of module shown for PWA, short version
+ Color1 pgtype.Text `json:"color1"` // primary module color (used for header)
+ Position int `json:"position"` // position of module in nav. contexts (home, header)
+ LanguageMain string `json:"languageMain"` // language code of main language (for fallback)
+ ReleaseBuild int `json:"releaseBuild"` // build of this module, incremented with each release
+ ReleaseBuildApp int `json:"releaseBuildApp"` // build of app at last release
+ ReleaseDate int64 `json:"releaseDate"` // date of last release
+ DependsOn []uuid.UUID `json:"dependsOn"` // modules that this module is dependent on
+ StartForms []ModuleStartForm `json:"startForms"` // start forms, assigned via role membership
+ Languages []string `json:"languages"` // language codes that this module supports
+ Relations []Relation `json:"relations"`
+ Forms []Form `json:"forms"`
+ MenuTabs []MenuTab `json:"menuTabs"`
+ Icons []Icon `json:"icons"`
+ Roles []Role `json:"roles"`
+ Articles []Article `json:"articles"`
+ LoginForms []LoginForm `json:"loginForms"`
+ PgFunctions []PgFunction `json:"pgFunctions"`
+ PgTriggers []PgTrigger `json:"pgTriggers"`
+ JsFunctions []JsFunction `json:"jsFunctions"`
+ Collections []Collection `json:"collections"`
+ Apis []Api `json:"apis"`
+ ClientEvents []ClientEvent `json:"clientEvents"`
+ Variables []Variable `json:"variables"`
+ Widgets []Widget `json:"widgets"`
+ ArticleIdsHelp []uuid.UUID `json:"articleIdsHelp"` // IDs of articles for primary module help, in order
+ Captions CaptionMap `json:"captions"`
+
+ // legacy
+ Menus []Menu `json:"menus"`
}
type ModuleStartForm struct {
Position int `json:"position"`
@@ -75,7 +88,9 @@ type Relation struct {
Indexes []PgIndex `json:"indexes"` // read only, all relation indexes
Policies []RelationPolicy `json:"policies"` // read only, all relation policies
Presets []Preset `json:"presets"` // read only, all relation presets
- Triggers []PgTrigger `json:"triggers"` // read only, all relation triggers
+
+ // legacy
+ Triggers []PgTrigger `json:"triggers"` // moved to module pgTriggers
}
type RelationPolicy struct {
RoleId uuid.UUID `json:"roleId"`
@@ -98,7 +113,7 @@ type PresetValue struct {
PresetIdRefer pgtype.UUID `json:"presetIdRefer"`
AttributeId uuid.UUID `json:"attributeId"`
Protected bool `json:"protected"`
- Value string `json:"value"`
+ Value pgtype.Text `json:"value"`
}
type Attribute struct {
Id uuid.UUID `json:"id"`
@@ -108,7 +123,8 @@ type Attribute struct {
Name string `json:"name"` // name, used as table column
Content string `json:"content"` // content (integer, varchar, text, real, uuid, files, n:1, ...)
ContentUse string `json:"contentUse"` // content use (default, richtext, color, datetime, ...)
- Length int `json:"length"` // varchar length or max file size in KB (files attribute)
+	Length      int    `json:"length"`      // numeric precision (total digits: integer + fractional) / varchar length / max file size in KB
+	LengthFract int    `json:"lengthFract"` // numeric scale (fractional digits)
Nullable bool `json:"nullable"` // value is nullable
Encrypted bool `json:"encrypted"` // value is encrypted (end-to-end for logins)
Def string `json:"def"` // default value
@@ -118,14 +134,21 @@ type Attribute struct {
}
type Menu struct {
Id uuid.UUID `json:"id"`
- ModuleId uuid.UUID `json:"moduleId"`
FormId pgtype.UUID `json:"formId"`
IconId pgtype.UUID `json:"iconId"`
Menus []Menu `json:"menus"`
+ Color pgtype.Text `json:"color"`
ShowChildren bool `json:"showChildren"`
Collections []CollectionConsumer `json:"collections"` // collection values to display on menu entry
Captions CaptionMap `json:"captions"`
}
+type MenuTab struct {
+ Id uuid.UUID `json:"id"`
+ ModuleId uuid.UUID `json:"moduleId"`
+ IconId pgtype.UUID `json:"iconId"`
+ Menus []Menu `json:"menus"`
+ Captions CaptionMap `json:"captions"`
+}
type LoginForm struct {
Id uuid.UUID `json:"id"`
ModuleId uuid.UUID `json:"moduleId"`
@@ -136,12 +159,22 @@ type LoginForm struct {
Captions CaptionMap `json:"captions"`
}
type OpenForm struct {
- FormIdOpen uuid.UUID `json:"formIdOpen"` // form to open
- AttributeIdApply pgtype.UUID `json:"attributeIdApply"` // apply record ID to attribute on opened form
- RelationIndex int `json:"relationIndex"` // relation index of record to apply to attribute
- PopUp bool `json:"popUp"` // opened form is pop-up-form
- MaxHeight int `json:"maxHeight"` // max. height in PX for opened form (pop-up only)
- MaxWidth int `json:"maxWidth"` // max. width in PX for opened form (pop-up only)
+ PopUpType pgtype.Text `json:"popUpType"` // if set, form is opened as pop-up, values: float, inline
+ Context pgtype.Text `json:"context"` // used when same entity needs multiple open forms, values: bulk
+ MaxHeight int `json:"maxHeight"` // max. height in PX for opened form (pop-up only)
+ MaxWidth int `json:"maxWidth"` // max. width in PX for opened form (pop-up only)
+
+ // open form
+ RelationIndexOpen int `json:"relationIndexOpen"` // relation index of record to open
+ FormIdOpen uuid.UUID `json:"formIdOpen"` // form to open record in (must have chosen relation as base relation)
+
+ // apply record from current form as relationship value on target form
+ RelationIndexApply int `json:"relationIndexApply"` // relation index of record to use as relationship value
+ AttributeIdApply pgtype.UUID `json:"attributeIdApply"` // apply record ID as relationship value to attribute on opened form
+
+ // legacy
+ RelationIndex int `json:"relationIndex"` // replaced by relationIndexApply
+ PopUp bool `json:"popUp"` // replaced by popUpType
}
type Icon struct {
Id uuid.UUID `json:"id"`
@@ -153,16 +186,26 @@ type Form struct {
Id uuid.UUID `json:"id"`
ModuleId uuid.UUID `json:"moduleId"`
PresetIdOpen pgtype.UUID `json:"presetIdOpen"`
+ FieldIdFocus pgtype.UUID `json:"fieldIdFocus"` // field to set focus to on form load
IconId pgtype.UUID `json:"iconId"`
Name string `json:"name"`
NoDataActions bool `json:"noDataActions"` // disables record manipulation actions (new/save/delete)
Query Query `json:"query"`
Fields []interface{} `json:"fields"`
+ Actions []FormAction `json:"actions"`
Functions []FormFunction `json:"functions"`
States []FormState `json:"states"`
ArticleIdsHelp []uuid.UUID `json:"articleIdsHelp"` // IDs of articles for form context help, in order
Captions CaptionMap `json:"captions"`
}
+type FormAction struct {
+ Id uuid.UUID `json:"id"`
+ JsFunctionId uuid.UUID `json:"jsFunctionId"`
+ IconId pgtype.UUID `json:"iconId"`
+ Color pgtype.Text `json:"color"`
+ State string `json:"state"` // default state (hidden, default, readonly)
+ Captions CaptionMap `json:"captions"`
+}
type FormFunction struct {
Position int `json:"position"`
JsFunctionId uuid.UUID `json:"jsFunctionId"`
@@ -181,40 +224,33 @@ type FormStateCondition struct {
Operator string `json:"operator"` // comparison operator (=, <>, etc.)
Side0 FormStateConditionSide `json:"side0"` // comparison: left side
Side1 FormStateConditionSide `json:"side1"` // comparison: right side
-
- // legacy, replaced by FormStateConditionSide
- Brackets0 int `json:"brackets0"`
- Brackets1 int `json:"brackets1"`
- FieldId0 pgtype.UUID `json:"fieldId0"` // if set: field0 value for match (not required for: newRecord, roleId)
- FieldId1 pgtype.UUID `json:"fieldId1"` // if set: field0 value must match field1 value
- PresetId1 pgtype.UUID `json:"presetId1"` // if set: field0 value must match preset record value
- RoleId pgtype.UUID `json:"roleId"` // if set: with operator '=' login must have role ('<>' must not have role)
- FieldChanged pgtype.Bool `json:"fieldChanged"` // if set: true matches field value changed, false matches unchanged
- NewRecord pgtype.Bool `json:"newRecord"` // if set: true matches new, false existing record
- Login1 pgtype.Bool `json:"login1"` // if set: true matches login ID of current user
- Value1 pgtype.Text `json:"value1"` // fixed value for direct field0 match
}
type FormStateConditionSide struct {
Brackets int `json:"brackets"` // opening/closing brackets (side 0/1)
- Content string `json:"content"` // collection, field, fieldChanged, fieldValid, login, preset, recordNew, role, true, value
+ Content string `json:"content"` // collection, field, fieldChanged, fieldValid, formChanged, formState, login, preset, recordNew, role, true, value, variable
CollectionId pgtype.UUID `json:"collectionId"` // collection ID of which column value to compare
ColumnId pgtype.UUID `json:"columnId"` // column ID from collection of which value to compare
- FieldId pgtype.UUID `json:"fieldId"` // field for value/has changed?
+ FieldId pgtype.UUID `json:"fieldId"` // field ID, for checks: value / has changed / is valid
+ FormStateId pgtype.UUID `json:"formStateId"` // form state ID, for taking result of other form state as condition
PresetId pgtype.UUID `json:"presetId"` // preset ID of record to be compared
RoleId pgtype.UUID `json:"roleId"` // role ID assigned to user
+ VariableId pgtype.UUID `json:"variableId"` // variable ID of value to retrieve
Value pgtype.Text `json:"value"` // fixed value, can be anything including NULL
}
type FormStateEffect struct {
- FieldId pgtype.UUID `json:"fieldId"` // affected field
- TabId pgtype.UUID `json:"tabId"` // affected tab
- NewState string `json:"newState"` // effect state (hidden, readonly, default, required)
+ FormActionId pgtype.UUID `json:"formActionId"` // affected form action
+ FieldId pgtype.UUID `json:"fieldId"` // affected field
+ TabId pgtype.UUID `json:"tabId"` // affected tab
+ NewData int32 `json:"newData"` // defines data handling via number (CREATE=4, UPDATE=2, DELETE=1, NOTHING=0) for form or data fields (lists, calendars, kanban, etc.)
+ NewState string `json:"newState"` // applied state (hidden, default, readonly, optional, required)
}
type Field struct {
Id uuid.UUID `json:"id"`
TabId pgtype.UUID `json:"tabId"`
IconId pgtype.UUID `json:"iconId"`
- Content string `json:"content"` // field content (button, header, data, list, calendar, chart, tabs)
- State string `json:"state"` // field default state (hidden, readonly, default, required)
+ Content string `json:"content"` // content (button, header, data, list, calendar, chart, tabs)
+ State string `json:"state"` // default state (hidden, default, readonly, optional, required)
+ Flags []string `json:"flags"` // flags for field display/behaviour options (clipboard, monospace, alignEnd, ...)
OnMobile bool `json:"onMobile"` // display this field on mobile?
}
type FieldButton struct {
@@ -223,14 +259,11 @@ type FieldButton struct {
IconId pgtype.UUID `json:"iconId"`
Content string `json:"content"`
State string `json:"state"`
+ Flags []string `json:"flags"`
OnMobile bool `json:"onMobile"`
	JsFunctionId pgtype.UUID `json:"jsFunctionId"` // JS function to execute when the button is triggered
OpenForm OpenForm `json:"openForm"`
Captions CaptionMap `json:"captions"`
-
- // legacy
- AttributeIdRecord pgtype.UUID `json:"attributeIdRecord"`
- FormIdOpen pgtype.UUID `json:"formIdOpen"`
}
type FieldCalendar struct {
Id uuid.UUID `json:"id"`
@@ -238,6 +271,7 @@ type FieldCalendar struct {
IconId pgtype.UUID `json:"iconId"`
Content string `json:"content"`
State string `json:"state"`
+ Flags []string `json:"flags"`
OnMobile bool `json:"onMobile"`
AttributeIdDate0 uuid.UUID `json:"attributeIdDate0"`
AttributeIdDate1 uuid.UUID `json:"attributeIdDate1"`
@@ -251,14 +285,12 @@ type FieldCalendar struct {
Ics bool `json:"ics"` // calendar available as ICS download
DateRange0 int64 `json:"dateRange0"` // ICS/gantt time range before NOW (seconds)
DateRange1 int64 `json:"dateRange1"` // ICS/gantt time range after NOW (seconds)
+ Days int `json:"days"` // how many days to show on calendar by default (1,3,5,7,42)
+ DaysToggle bool `json:"daysToggle"` // if enabled, user can choose how many days to show
OpenForm OpenForm `json:"openForm"`
Columns []Column `json:"columns"`
Collections []CollectionConsumer `json:"collections"` // collections to select values for query filters
Query Query `json:"query"`
-
- // legacy
- AttributeIdRecord pgtype.UUID `json:"attributeIdRecord"`
- FormIdOpen pgtype.UUID `json:"formIdOpen"`
}
type FieldChart struct {
Id uuid.UUID `json:"id"`
@@ -266,10 +298,12 @@ type FieldChart struct {
IconId pgtype.UUID `json:"iconId"`
Content string `json:"content"`
State string `json:"state"`
+ Flags []string `json:"flags"`
OnMobile bool `json:"onMobile"`
ChartOption string `json:"chartOption"`
Columns []Column `json:"columns"`
Query Query `json:"query"`
+ Captions CaptionMap `json:"captions"`
}
type FieldContainer struct {
Id uuid.UUID `json:"id"`
@@ -277,6 +311,7 @@ type FieldContainer struct {
IconId pgtype.UUID `json:"iconId"`
Content string `json:"content"`
State string `json:"state"`
+ Flags []string `json:"flags"`
OnMobile bool `json:"onMobile"`
Fields []interface{} `json:"fields"`
Direction string `json:"direction"`
@@ -296,6 +331,7 @@ type FieldData struct {
IconId pgtype.UUID `json:"iconId"`
Content string `json:"content"`
State string `json:"state"`
+ Flags []string `json:"flags"`
OnMobile bool `json:"onMobile"`
Clipboard bool `json:"clipboard"` // enable copy-to-clipboard action
AttributeId uuid.UUID `json:"attributeId"` // data attribute
@@ -320,6 +356,7 @@ type FieldDataRelationship struct {
IconId pgtype.UUID `json:"iconId"`
Content string `json:"content"`
State string `json:"state"`
+ Flags []string `json:"flags"`
OnMobile bool `json:"onMobile"`
Clipboard bool `json:"clipboard"`
AttributeId uuid.UUID `json:"attributeId"`
@@ -346,10 +383,8 @@ type FieldDataRelationship struct {
Captions CaptionMap `json:"captions"`
// legacy
- AttributeIdRecord pgtype.UUID `json:"attributeIdRecord"`
- FormIdOpen pgtype.UUID `json:"formIdOpen"`
- CollectionIdDef pgtype.UUID `json:"collectionIdDef"`
- ColumnIdDef pgtype.UUID `json:"columnIdDef"`
+ CollectionIdDef pgtype.UUID `json:"collectionIdDef"`
+ ColumnIdDef pgtype.UUID `json:"columnIdDef"`
}
type FieldHeader struct {
Id uuid.UUID `json:"id"`
@@ -357,31 +392,49 @@ type FieldHeader struct {
IconId pgtype.UUID `json:"iconId"`
Content string `json:"content"`
State string `json:"state"`
+ Flags []string `json:"flags"`
OnMobile bool `json:"onMobile"`
+ Richtext bool `json:"richtext"`
Size int `json:"size"`
Captions CaptionMap `json:"captions"`
}
+type FieldKanban struct {
+ Id uuid.UUID `json:"id"`
+ TabId pgtype.UUID `json:"tabId"`
+ IconId pgtype.UUID `json:"iconId"`
+ Content string `json:"content"`
+ State string `json:"state"`
+ Flags []string `json:"flags"`
+ OnMobile bool `json:"onMobile"`
+ RelationIndexData int `json:"relationIndexData"`
+ RelationIndexAxisX int `json:"relationIndexAxisX"`
+ RelationIndexAxisY pgtype.Int2 `json:"relationIndexAxisY"`
+ AttributeIdSort pgtype.UUID `json:"attributeIdSort"`
+ Columns []Column `json:"columns"`
+ Collections []CollectionConsumer `json:"collections"` // collections to select values for query filters
+ OpenForm OpenForm `json:"openForm"`
+ Query Query `json:"query"`
+}
type FieldList struct {
- Id uuid.UUID `json:"id"`
- TabId pgtype.UUID `json:"tabId"`
- IconId pgtype.UUID `json:"iconId"`
- Content string `json:"content"`
- State string `json:"state"`
- OnMobile bool `json:"onMobile"`
- CsvExport bool `json:"csvExport"`
- CsvImport bool `json:"csvImport"`
- AutoRenew pgtype.Int4 `json:"autoRenew"` // automatic list refresh
- Layout string `json:"layout"` // list layout: table, cards
- FilterQuick bool `json:"filterQuick"` // enable quickfilter (uses all visible columns)
- ResultLimit int `json:"resultLimit"` // predefined limit, overwritable by user
- Columns []Column `json:"columns"`
- Collections []CollectionConsumer `json:"collections"` // collections to select values for query filters
- OpenForm OpenForm `json:"openForm"`
- Query Query `json:"query"`
-
- // legacy
- AttributeIdRecord pgtype.UUID `json:"attributeIdRecord"`
- FormIdOpen pgtype.UUID `json:"formIdOpen"`
+ Id uuid.UUID `json:"id"`
+ TabId pgtype.UUID `json:"tabId"`
+ IconId pgtype.UUID `json:"iconId"`
+ Content string `json:"content"`
+ State string `json:"state"`
+ Flags []string `json:"flags"`
+ OnMobile bool `json:"onMobile"`
+ CsvExport bool `json:"csvExport"`
+ CsvImport bool `json:"csvImport"`
+ AutoRenew pgtype.Int4 `json:"autoRenew"` // automatic list refresh
+ Layout string `json:"layout"` // list layout: table, cards
+ FilterQuick bool `json:"filterQuick"` // enable quickfilter (uses all visible columns)
+ ResultLimit int `json:"resultLimit"` // predefined limit, overwritable by user
+ Columns []Column `json:"columns"`
+ Collections []CollectionConsumer `json:"collections"` // collections to select values for query filters
+ OpenForm OpenForm `json:"openForm"` // regular form to open records with
+ OpenFormBulk OpenForm `json:"openFormBulk"` // form for bulk actions (multiple record updates)
+ Query Query `json:"query"`
+ Captions CaptionMap `json:"captions"`
}
type FieldTabs struct {
Id uuid.UUID `json:"id"`
@@ -389,9 +442,25 @@ type FieldTabs struct {
IconId pgtype.UUID `json:"iconId"`
Content string `json:"content"`
State string `json:"state"`
+ Flags []string `json:"flags"`
OnMobile bool `json:"onMobile"`
+ Captions CaptionMap `json:"captions"`
Tabs []Tab `json:"tabs"`
}
+type FieldVariable struct {
+ Id uuid.UUID `json:"id"`
+ VariableId pgtype.UUID `json:"variableId"`
+ JsFunctionId pgtype.UUID `json:"jsFunctionId"`
+ IconId pgtype.UUID `json:"iconId"`
+ Content string `json:"content"`
+ State string `json:"state"`
+ Flags []string `json:"flags"`
+ OnMobile bool `json:"onMobile"`
+ Clipboard bool `json:"clipboard"`
+ Columns []Column `json:"columns"`
+ Query Query `json:"query"`
+ Captions CaptionMap `json:"captions"`
+}
type Collection struct {
Id uuid.UUID `json:"id"`
ModuleId uuid.UUID `json:"moduleId"`
@@ -404,43 +473,57 @@ type Collection struct {
type CollectionConsumer struct {
Id uuid.UUID `json:"id"`
CollectionId uuid.UUID `json:"collectionId"`
- ColumnIdDisplay pgtype.UUID `json:"columnIdDisplay"` // ID of collection column to display (inputs etc.)
- MultiValue bool `json:"multiValue"` // if active, values of multiple record rows can be selected
- NoDisplayEmpty bool `json:"noDisplayEmpty"` // if collection is used for display and value is 'empty' (0, '', null), it is not shown
- OnMobile bool `json:"onMobile"` // if collection is used for display and mobile view is active, decides whether to show collection
- OpenForm OpenForm `json:"openForm"`
+ ColumnIdDisplay pgtype.UUID `json:"columnIdDisplay"` // ID of collection column to display
+ Flags []string `json:"flags"` // flags for options (showRowCount, multiValue, noDisplayEmpty, ...)
+
+ // presentation options (to show collection in header, menu, etc.)
+ OnMobile bool `json:"onMobile"` // show on mobile
+ OpenForm OpenForm `json:"openForm"` // open form when clicked on
+
+ // legacy
+ MultiValue bool `json:"multiValue"` // moved to flags
+ NoDisplayEmpty bool `json:"noDisplayEmpty"` // moved to flags
}
type Column struct {
Id uuid.UUID `json:"id"`
AttributeId uuid.UUID `json:"attributeId"`
Index int `json:"index"` // attribute index
- Batch pgtype.Int4 `json:"batch"` // index of column batch (multiple columns as one)
- Basis int `json:"basis"` // size basis (usually width)
- Length int `json:"length"` // text length limit (in characters)
- Wrap bool `json:"wrap"` // text wrap
- Display string `json:"display"` // how to display value (text, date, color, etc.)
GroupBy bool `json:"groupBy"` // group by column attribute value?
Aggregator pgtype.Text `json:"aggregator"` // aggregator (SUM, COUNT, etc.)
Distincted bool `json:"distincted"` // attribute values are distinct?
SubQuery bool `json:"subQuery"` // column uses sub query?
- OnMobile bool `json:"onMobile"` // display this column on mobile?
- Clipboard bool `json:"clipboard"` // show copy-to-clipboard action?
Query Query `json:"query"` // sub query
- Captions CaptionMap `json:"captions"`
+ Captions CaptionMap `json:"captions"` // column titles
+
+ // presentation
+ Basis int `json:"basis"` // size basis (usually width)
+ Batch pgtype.Int4 `json:"batch"` // index of column batch (multiple columns as one)
+ Display string `json:"display"` // how to display value (email, gallery, etc.)
+ Hidden bool `json:"hidden"` // hide column by default?
+ Length int `json:"length"` // text length limit (in characters)
+ OnMobile bool `json:"onMobile"` // display column on mobile by default?
+ Styles []string `json:"styles"` // alignEnd, alignMid, bold, clipboard, hide, italic, vertical, wrap
+
+ // legacy
+ BatchVertical bool `json:"batchVertical"`
+ Clipboard bool `json:"clipboard"`
+ Wrap bool `json:"wrap"`
}
type Role struct {
- Id uuid.UUID `json:"id"`
- ModuleId uuid.UUID `json:"moduleId"`
- ChildrenIds []uuid.UUID `json:"childrenIds"`
- Name string `json:"name"`
- Content string `json:"content"`
- Assignable bool `json:"assignable"`
- AccessApis map[uuid.UUID]int `json:"accessApis"`
- AccessAttributes map[uuid.UUID]int `json:"accessAttributes"`
- AccessCollections map[uuid.UUID]int `json:"accessCollections"`
- AccessMenus map[uuid.UUID]int `json:"accessMenus"`
- AccessRelations map[uuid.UUID]int `json:"accessRelations"`
- Captions CaptionMap `json:"captions"`
+ Id uuid.UUID `json:"id"`
+ ModuleId uuid.UUID `json:"moduleId"`
+ ChildrenIds []uuid.UUID `json:"childrenIds"`
+ Name string `json:"name"`
+ Content string `json:"content"`
+ Assignable bool `json:"assignable"`
+ AccessApis map[uuid.UUID]int `json:"accessApis"`
+ AccessAttributes map[uuid.UUID]int `json:"accessAttributes"`
+ AccessClientEvents map[uuid.UUID]int `json:"accessClientEvents"`
+ AccessCollections map[uuid.UUID]int `json:"accessCollections"`
+ AccessMenus map[uuid.UUID]int `json:"accessMenus"`
+ AccessRelations map[uuid.UUID]int `json:"accessRelations"`
+ AccessWidgets map[uuid.UUID]int `json:"accessWidgets"`
+ Captions CaptionMap `json:"captions"`
}
type PgFunction struct {
Id uuid.UUID `json:"id"`
@@ -449,8 +532,10 @@ type PgFunction struct {
CodeArgs string `json:"codeArgs"`
CodeFunction string `json:"codeFunction"`
CodeReturns string `json:"codeReturns"`
- IsFrontendExec bool `json:"isFrontendExec"` // can be executed from frontend
+ IsFrontendExec bool `json:"isFrontendExec"` // can be called from JS function
+ IsLoginSync bool `json:"isLoginSync"` // special login sync function
IsTrigger bool `json:"isTrigger"` // is relation TRIGGER function
+ Volatility string `json:"volatility"` // VOLATILE, STABLE, IMMUTABLE
Schedules []PgFunctionSchedule `json:"schedules"`
Captions CaptionMap `json:"captions"`
}
@@ -465,9 +550,10 @@ type PgFunctionSchedule struct {
}
type PgTrigger struct {
Id uuid.UUID `json:"id"`
+ ModuleId uuid.UUID `json:"moduleId"`
RelationId uuid.UUID `json:"relationId"`
PgFunctionId uuid.UUID `json:"pgFunctionId"`
- Fires string `json:"fires"`
+ Fires string `json:"fires"` // BEFORE/AFTER
OnDelete bool `json:"onDelete"`
OnInsert bool `json:"onInsert"`
OnUpdate bool `json:"onUpdate"`
@@ -478,12 +564,14 @@ type PgTrigger struct {
CodeCondition string `json:"codeCondition"`
}
type PgIndex struct {
- Id uuid.UUID `json:"id"`
- RelationId uuid.UUID `json:"relationId"`
- NoDuplicates bool `json:"noDuplicates"` // index is unique
- AutoFki bool `json:"autoFki"` // index belongs to foreign key attribute (auto-generated)
- PrimaryKey bool `json:"primaryKey"` // index belongs to primary key attribute
- Attributes []PgIndexAttribute `json:"attributes"` // attributes the index is made of
+ Id uuid.UUID `json:"id"`
+ RelationId uuid.UUID `json:"relationId"`
+ AttributeIdDict pgtype.UUID `json:"attributeIdDict"` // attribute used as dictionary for full text search (if set, GIN is used)
+ Method string `json:"method"` // BTREE/GIN
+ NoDuplicates bool `json:"noDuplicates"` // index is unique
+ AutoFki bool `json:"autoFki"` // index belongs to foreign key attribute (auto-generated)
+ PrimaryKey bool `json:"primaryKey"` // index belongs to primary key attribute
+ Attributes []PgIndexAttribute `json:"attributes"` // attributes the index is made of
}
type PgIndexAttribute struct {
PgIndexId uuid.UUID `json:"pgIndexId"`
@@ -492,14 +580,15 @@ type PgIndexAttribute struct {
OrderAsc bool `json:"orderAsc"`
}
type JsFunction struct {
- Id uuid.UUID `json:"id"`
- ModuleId uuid.UUID `json:"moduleId"`
- FormId pgtype.UUID `json:"formId"`
- Name string `json:"name"`
- CodeArgs string `json:"codeArgs"`
- CodeFunction string `json:"codeFunction"`
- CodeReturns string `json:"codeReturns"`
- Captions CaptionMap `json:"captions"`
+ Id uuid.UUID `json:"id"`
+ ModuleId uuid.UUID `json:"moduleId"`
+ FormId pgtype.UUID `json:"formId"`
+ Name string `json:"name"`
+ CodeArgs string `json:"codeArgs"`
+ CodeFunction string `json:"codeFunction"`
+ CodeReturns string `json:"codeReturns"`
+ IsClientEventExec bool `json:"isClientEventExec"` // can be executed from client events
+ Captions CaptionMap `json:"captions"`
}
type Tab struct {
Id uuid.UUID `json:"id"`
@@ -509,6 +598,38 @@ type Tab struct {
Fields []interface{} `json:"fields"` // fields assigned to tab
Captions CaptionMap `json:"captions"`
}
+type ClientEvent struct {
+ Id uuid.UUID `json:"id"`
+ ModuleId uuid.UUID `json:"moduleId"`
+ Action string `json:"action"` // action to execute when the event fires (callJsFunction, callPgFunction)
+ Arguments []string `json:"arguments"` // arguments to supply to the function call (clipboard, hostname, username, windowTitle)
+ Event string `json:"event"` // event to react to (onConnect, onDisconnect, onHotkey)
+ HotkeyChar string `json:"hotkeyChar"` // single character
+ HotkeyModifier1 string `json:"hotkeyModifier1"` // ALT, CMD, CTRL, SHIFT
+ HotkeyModifier2 pgtype.Text `json:"hotkeyModifier2"` // ALT, CMD, CTRL, SHIFT (optional)
+ JsFunctionId pgtype.UUID `json:"jsFunctionId"`
+ PgFunctionId pgtype.UUID `json:"pgFunctionId"`
+ Captions CaptionMap `json:"captions"`
+}
+type Variable struct {
+ Id uuid.UUID `json:"id"`
+ ModuleId uuid.UUID `json:"moduleId"`
+ FormId pgtype.UUID `json:"formId"` // if assigned to form, otherwise global
+ Name string `json:"name"`
+ Comment pgtype.Text `json:"comment"` // author comment
+ Content string `json:"content"` // for display as field input, no other purpose
+ ContentUse string `json:"contentUse"` // for display as field input, no other purpose
+ Def pgtype.Text `json:"def"` // default value
+}
+type Widget struct {
+ Id uuid.UUID `json:"id"`
+ ModuleId uuid.UUID `json:"moduleId"`
+ FormId pgtype.UUID `json:"formId"`
+ Name string `json:"name"`
+ Size int `json:"size"`
+ Collection CollectionConsumer `json:"collection"` // collection to display
+ Captions CaptionMap `json:"captions"`
+}
type Deletion struct {
Id uuid.UUID `json:"id"`
Entity string `json:"entity"`
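
The hotkey fields of the new `ClientEvent` type only document their allowed values in comments. A hypothetical validation helper (not part of the repository, assumed to live alongside the types above) could check those constraints like this:

```go
// clientEventHotkeyValid is a hypothetical helper, not repository code.
// It verifies that a ClientEvent's hotkey fields match the values documented
// in the struct comments: a single character plus modifiers limited to
// ALT/CMD/CTRL/SHIFT, with the second modifier being optional.
func clientEventHotkeyValid(e ClientEvent) bool {
	if e.Event != "onHotkey" {
		return true // hotkey fields only matter for hotkey events
	}
	validMod := map[string]bool{"ALT": true, "CMD": true, "CTRL": true, "SHIFT": true}

	if len([]rune(e.HotkeyChar)) != 1 || !validMod[e.HotkeyModifier1] {
		return false
	}
	// the second modifier is optional (NULL-able pgtype.Text)
	return !e.HotkeyModifier2.Valid || validMod[e.HotkeyModifier2.String]
}
```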
diff --git a/types/types_schema_query.go b/types/types_schema_query.go
index 23f1b07d..3e402ad4 100644
--- a/types/types_schema_query.go
+++ b/types/types_schema_query.go
@@ -12,13 +12,13 @@ var (
QueryFilterConnectors = []string{"AND", "OR"}
QueryFilterOperators = []string{"=", "<>", "<", ">", "<=", ">=", "IS NULL",
"IS NOT NULL", "LIKE", "ILIKE", "NOT LIKE", "NOT ILIKE", "= ANY",
- "<> ALL", "@>", "<@", "&&"}
+ "<> ALL", "@>", "<@", "&&", "@@", "~", "~*", "!~", "!~*"}
)
// a query starts at a relation to retrieve attribute values
// it can join other relations via relationship attributes from both sides
-// each relation (original and joined) is refered via an unique index (simple counter)
-// because the same relation can join multiple times, an unique index is required to know which relation is refered to
+// each relation (original and joined) is referred to via a unique index (simple counter)
+// because the same relation can join multiple times, a unique index is required to know which relation is referred to
// via indexes, joins know their source (index from), filters can refer to attributes from specific relations, etc.
type Query struct {
Id uuid.UUID `json:"id"`
@@ -54,6 +54,7 @@ type QueryJoin struct {
type QueryFilter struct {
Connector string `json:"connector"` // AND, OR
Operator string `json:"operator"` // comparison operator (=, <>, etc.)
+ Index int `json:"index"` // relation index to apply filter to (0 = filter query, 1+ = filter relation join)
Side0 QueryFilterSide `json:"side0"` // comparison: left side
Side1 QueryFilterSide `json:"side1"` // comparison: right side
}
@@ -75,6 +76,7 @@ type QueryFilterSide struct {
FieldId pgtype.UUID `json:"fieldId"` // frontend field value
PresetId pgtype.UUID `json:"presetId"` // preset ID of record to be compared
RoleId pgtype.UUID `json:"roleId"` // role ID assigned to user
+ VariableId pgtype.UUID `json:"variableId"` // variable ID of value to compare
NowOffset pgtype.Int4 `json:"nowOffset"` // offset in seconds (+/-) for now* content (e. g. nowDatetime - 86400 -> last day)
}
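
The added operators (`@@`, `~`, `~*`, `!~`, `!~*`) correspond to PostgreSQL's full-text-search and regular-expression comparisons, while the new `Index` field pins a filter to a specific relation in the join chain. A hypothetical sketch (not repository code; the contents of both `QueryFilterSide` values are elided) of a filter using the new case-insensitive regex operator:

```go
// hypothetical sketch: build a filter that applies the newly allowed
// case-insensitive regex operator "~*" to the query's base relation.
func exampleRegexFilter() QueryFilter {
	return QueryFilter{
		Connector: "AND",             // one of QueryFilterConnectors
		Operator:  "~*",              // one of QueryFilterOperators (PostgreSQL case-insensitive regex match)
		Index:     0,                 // 0 = the query's own relation, 1+ = a joined relation
		Side0:     QueryFilterSide{}, // left side of the comparison (contents elided here)
		Side1:     QueryFilterSide{}, // right side of the comparison (contents elided here)
	}
}
```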
diff --git a/types/types_transaction.go b/types/types_transaction.go
index 9d742a67..1763f7fd 100644
--- a/types/types_transaction.go
+++ b/types/types_transaction.go
@@ -8,7 +8,7 @@ type Request struct {
Payload json.RawMessage `json:"payload"`
}
type RequestTransaction struct {
- TransactionNr uint64 `json:"transactionNr"` // for websocket client to match asynchronous reponse to original request
+ TransactionNr uint64 `json:"transactionNr"` // for websocket client to match asynchronous response to original request
Requests []Request `json:"requests"` // all websocket client requests
}
type Response struct {
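
As the corrected comment states, `TransactionNr` is what lets the websocket client tie an asynchronous response back to the bundle of requests it sent. A minimal, hypothetical sketch of assembling such a bundle; only the `Payload` field of `Request` is visible in this hunk, so all other fields are left at their zero values:

```go
package types // hypothetical helper, assuming it sits next to the types above

import "encoding/json"

// newRequestTransaction wraps raw request payloads under one transaction
// number, so the websocket client can later match the asynchronous response
// to this bundle via TransactionNr.
func newRequestTransaction(nr uint64, payloads ...json.RawMessage) RequestTransaction {
	t := RequestTransaction{TransactionNr: nr}
	for _, p := range payloads {
		t.Requests = append(t.Requests, Request{Payload: p})
	}
	return t
}
```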
diff --git a/types/types_user.go b/types/types_user.go
deleted file mode 100644
index f5089ae2..00000000
--- a/types/types_user.go
+++ /dev/null
@@ -1,25 +0,0 @@
-package types
-
-import "github.com/jackc/pgx/v5/pgtype"
-
-type Settings struct {
- BordersAll bool `json:"bordersAll"`
- BordersCorner string `json:"bordersCorner"`
- Compact bool `json:"compact"`
- DateFormat string `json:"dateFormat"`
- Dark bool `json:"dark"`
- FieldClean bool `json:"fieldClean"`
- FontFamily string `json:"fontFamily"`
- FontSize int `json:"fontSize"`
- HeaderCaptions bool `json:"headerCaptions"`
- HintUpdateVersion int `json:"hintUpdateVersion"`
- LanguageCode string `json:"languageCode"`
- MenuColored bool `json:"menuColored"`
- MobileScrollForm bool `json:"mobileScrollForm"`
- PageLimit int `json:"pageLimit"`
- Pattern pgtype.Text `json:"pattern"`
- Spacing int `json:"spacing"`
- SundayFirstDow bool `json:"sundayFirstDow"`
- TabRemember bool `json:"tabRemember"`
- WarnUnsaved bool `json:"warnUnsaved"`
-}
diff --git a/types/types_websocket.go b/types/types_websocket.go
new file mode 100644
index 00000000..bca9d341
--- /dev/null
+++ b/types/types_websocket.go
@@ -0,0 +1,13 @@
+package types
+
+type WebsocketClientDevice int
+
+var (
+ WebsocketClientDeviceBrowser WebsocketClientDevice = 1
+ WebsocketClientDeviceFatClient WebsocketClientDevice = 2
+
+ WebsocketClientDeviceNames = map[WebsocketClientDevice]string{
+ WebsocketClientDeviceBrowser: "browser",
+ WebsocketClientDeviceFatClient: "fatClient",
+ }
+)
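
The new `types_websocket.go` only maps device constants to their names. If the reverse direction were ever needed (for example, resolving a device name sent by a client), a hypothetical lookup could reuse `WebsocketClientDeviceNames`:

```go
// websocketClientDeviceByName is a hypothetical reverse lookup, not part of
// the repository: it resolves a device name ("browser", "fatClient") back to
// its WebsocketClientDevice constant, reporting whether the name is known.
func websocketClientDeviceByName(name string) (WebsocketClientDevice, bool) {
	for device, deviceName := range WebsocketClientDeviceNames {
		if deviceName == name {
			return device, true
		}
	}
	return 0, false
}
```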
diff --git a/www/comps/admin/admin.css b/www/comps/admin/admin.css
index 524f6d4d..b2022450 100644
--- a/www/comps/admin/admin.css
+++ b/www/comps/admin/admin.css
@@ -8,124 +8,52 @@
font-style:italic;
text-align:center;
}
-
-.admin .table-default-wrap{
- flex:1 1 auto;
- overflow-y:auto;
-}
-.admin .table-default{
- width:100%;
-}
-.admin .table-default>thead>tr>th{
- height:36px;
- padding:3px 8px;
- border-bottom:1px solid var(--color-border);
- background-color:var(--color-bg);
-}
-.admin .table-default>thead>tr>th.gab{
- width:200px;
-}
-.admin .table-default>thead>tr>th.left-border{
- border-left:3px double var(--color-border);
-}
-.admin .table-default>thead>tr>th .mixed-header{
- display:flex;
- flex-flow:row nowrap;
- align-items:center;
-}
-.admin .table-default>thead>tr>th .mixed-header>img{
- width:16px;
- height:16px;
- margin-right:6px;
+.admin .module-icon{
+ width:24px;
+ margin-right:9px;
filter:var(--image-filter);
}
-.admin .table-default>tbody>tr>td{
- padding:3px 8px;
- border-top:1px solid var(--color-border);
- background-color:var(--color-bg);
-}
-.admin .table-default>tbody>tr>td.left-border{
- border-left:3px double var(--color-border);
-}
-.admin .table-default.no-padding>tbody>tr>td{
- padding:3px 2px 3px 4px;
-}
/* system logs */
-.admin .logs{
+.admin-logs{
display:flex;
flex-direction:column;
flex:1 1 auto;
}
-.admin .logs .actions{
- flex:0 0 auto;
- padding:12px;
- display:flex;
- flex-flow:row wrap;
- justify-content:space-between;
- background-color:var(--color-bg);
- gap:6px;
-}
-.admin .logs .actions .action-bar{
- display:flex;
- flex-flow:row nowrap;
- align-items:center;
- gap:6px;
-}
-.admin .logs .actions .right-bar{
- display:flex;
- flex-flow:column nowrap;
- gap:8px;
-}
-.admin .logs .actions .right-bar input,
-.admin .logs .actions .right-bar select{
- width:auto;
- min-width:unset;
-}
-.admin .logs .actions .entry{
- flex:0 1 200px;
-}
-.admin .logs .actions .entry input{
- border:none;
-}
-.admin .logs .actions .input-date{
- flex:0 1 300px;
- position:relative;
-}
-.admin .logs .level-indicator{
+.admin-logs .level-indicator{
width:6px;
height:16px;
margin-right:5px;
border:1px solid var(--color-border);
border-radius:2px;
}
+.admin-logs-date-wrap{
+ position:relative;
+}
/* logins */
.admin-logins .login-record{
max-width:400px;
- display:flex;
- flex-flow:row nowrap;
- align-items:center;
}
-.admin-logins .login-record-input{
- flex:1 1 auto;
- position:relative;
- display:flex;
- flex-flow:row nowrap;
- align-items:center;
-}
-.admin-logins .login-record-input input{
- margin-right:5px;
-}
-.admin-logins .module-icon{
+.admin-logins img.line-icon{
width:24px;
- margin-right:9px;
+ height:24px;
filter:var(--image-filter);
}
+.admin-logins-list tr:hover td{
+ background-color:var(--color-accent3) !important;
+}
+.admin-logins td.loginName{
+ width:500px;
+}
+.admin-logins td.bools{
+ width:200px;
+}
.admin-login{
- min-width:1000px;
- max-width:1200px;
+ width:95%;
+ min-width:600px;
+ max-width:1000px;
min-height:600px;
overflow:auto;
}
@@ -133,9 +61,31 @@
display:flex;
flex-flow:column nowrap;
}
-.admin-login .role-select{
+.admin-login .login-details{
margin-top:5px;
- border-bottom:1px solid var(--color-border);
+ border-top:1px solid var(--color-border);
+ overflow:auto;
+}
+.admin-login .login-details-tabs{
+ overflow:hidden;
+}
+.admin-login .login-details .login-details-content{
+ flex:1 1 auto;
+ min-height:600px;
+}
+.admin-login .login-details .login-details-content.roles{
+ padding:0px;
+ min-height:630px;
+}
+.admin-login .login-details-login-form-input{
+ min-width:300px;
+}
+.admin-login-meta{
+ width:100%;
+ max-width:680px;
+}
+.admin-login-meta td{
+ padding:5px 10px;
}
.admin-login .role-select td{
padding:0px 4px !important;
@@ -161,6 +111,91 @@
margin-right:6px;
filter:var(--image-filter);
}
+.admin-login .message.error{
+ color:var(--color-error);
+}
+
+/* login sessions */
+.admin-sessions img.line-icon{
+ width:24px;
+ height:24px;
+ filter:var(--image-filter);
+}
+
+/* system message */
+.admin-system-msg{}
+.admin-system-msg-date{
+ width:370px;
+ padding:3px 6px;
+ position:relative;
+ border:var(--border-input);
+ border-radius:var(--border-input-radius);
+ box-shadow:var(--shadow-input);
+ background-color:var(--color-input);
+}
+.admin-system-msg-date:focus-within{
+ border:var(--border-input-focus);
+ outline:var(--outline-input-focus);
+ box-shadow:var(--shadow-input-focus);
+}
+.admin-system-msg-table{
+ width:1000px;
+}
+.admin-system-msg-text{
+ height:600px;
+ margin:6px 12px;
+ display:flex;
+ border:var(--border-input);
+ border-radius:var(--border-input-radius);
+ box-shadow:var(--shadow-input);
+ background-color:var(--color-input);
+}
+
+/* customizing */
+.admin-custom .cssInput{
+ max-width:unset;
+ width:calc(100% - 20px);
+ height:900px;
+ margin:10px;
+ border:var(--border-input);
+ border-radius:var(--border-input-radius-large);
+ box-shadow:var(--shadow-input);
+ overflow:hidden;
+ display:flex;
+ flex-flow:column nowrap;
+}
+.admin-custom .companyWelcome{
+ height:180px;
+}
+.admin-custom .logo{
+ object-fit:contain;
+ height:60px;
+ border:1px solid var(--color-border);
+ border-radius:3px;
+}
+.admin-custom .colorInputWrap{
+ display:flex;
+ max-width:300px;
+ gap:6px;
+}
+.admin-custom .colorInputWrap .preview{
+ width:30px;
+ height:30px;
+ flex:0 0 auto;
+ border-radius:4px;
+ background-color:none;
+ box-sizing:border-box;
+ border:1px solid var(--color-border);
+}
+.admin-custom .colorInputWrap .preview img{
+ width:24px;
+ height:24px;
+ margin:3px;
+ filter:var(--image-filter);
+}
+.admin-custom .colorInputWrap .preview img.active{
+ filter:var(--image-filter-bg);
+}
/* login template */
@@ -175,6 +210,18 @@
}
+/* OAuth client */
+.admin-oauth-client{}
+.admin-oauth-client-date-wrap{
+ min-width:330px !important;
+ min-height:var(--row-height);
+ position:relative;
+ display:flex;
+ gap:9px;
+ flex-flow:row nowrap;
+}
+
+
/* backups */
.admin-backups .note{
max-width:500px;
@@ -200,31 +247,6 @@
.admin-config table{
width:100%;
}
-.admin-config .companyWelcome{
- height:70px;
-}
-.admin-config .logo{
- object-fit:contain;
- height:60px;
- margin:10px 0px 0px;
- padding:3px;
- border:1px solid var(--color-border);
- border-radius:3px;
-}
-.admin-config .colorInputWrap{
- display:flex;
- max-width:300px;
-}
-.admin-config .colorInputWrap .preview{
- width:30px;
- height:30px;
- flex:0 0 auto;
- margin-left:4px;
- border-radius:4px;
- background-color:none;
- box-sizing:border-box;
- border:1px solid var(--color-border);
-}
.admin-config .backup-dir td{
padding-bottom:20px;
}
@@ -240,19 +262,48 @@
justify-content:space-between;
margin:0px 0px 5px 0px;
}
-.admin-config .repo-key-input{
- display:flex;
- flex-flow:column nowrap;
- align-items:flex-start;
-}
-.admin-config .repo-key-input input,
-.admin-config .repo-key-input textarea{
- min-width:350px;
-}
.admin-config .mail-test-input{
display:flex;
flex-flow:row nowrap;
}
+.admin-config .login-bg{
+ padding:10px;
+ border-radius:5px;
+ background-color:var(--color-input);
+ display:flex;
+ flex-flow:row wrap;
+ gap:10px;
+}
+.admin-config .login-bg .preview{
+ width:120px;
+ height:80px;
+ background-repeat:no-repeat;
+ background-size:cover;
+ border-radius:3px;
+ box-shadow:1px 1px 3px var(--color-shade);
+ box-sizing:border-box;
+ border:3px solid #fff;
+ transition:width 0.2s, height 0.2s, margin 0.2s, filter 0.2s;
+ filter:saturate(90%);
+}
+.admin-config .login-bg .preview:hover{
+ border-width:5px;
+ filter:saturate(110%);
+}
+.admin-config .login-bg .preview.inactive{
+ width:90px;
+ height:50px;
+ margin:15px;
+ filter:saturate(10%) brightness(70%);
+ border-width:1px;
+ border-color:#000;
+}
+.admin-config .login-bg .preview.inactive:hover{
+ width:100px;
+ height:60px;
+ margin:10px;
+ filter:saturate(50%) brightness(90%);
+}
/* license */
@@ -261,32 +312,57 @@
flex-flow:row nowrap;
justify-content:space-between;
align-items:center;
+ position:relative;
max-width:520px;
padding:0px 16px;
- margin:0px 0px 12px 0px;
+ margin:0px 0px 24px 0px;
color:var(--color-font);
- border:2px solid var(--color-border);
- border-radius:5px;
- background-color:var(--color-bg);
+ background-color:var(--color-bright);
+ border:var(--border-input);
+ border-radius:var(--border-input-radius);
+ box-shadow:var(--shadow-input);
}
-.admin-license .file img{
+.admin-license .file>img{
width:auto;
height:120px;
margin-left:12px;
}
-.admin-license table{
+.admin-license .file .actions{
+ position:absolute;
+ top:6px;
+ right:6px;
+}
+.admin-license .file table{
margin:6px 0px;
}
-.admin-license table td{
- padding:5px 32px 5px 0px;
+.admin-license .file table td{
+ padding:5px 20px 5px 0px;
}
.admin-license .invalid{
- font-size:130%;
color:var(--color-error);
}
-.admin-license .valid{
- font-size:130%;
- color:var(--color-success);
+.admin-license .intro{
+ display:flex;
+ flex-flow:row wrap;
+ padding:10px 0px 0px;
+ margin:0px 0px 20px;
+ gap:30px;
+}
+.admin-license .intro span{
+ max-width:800px;
+ min-width:400px;
+ margin:0px 0px 20px;
+ flex:0 1 auto;
+ font-size:120%;
+ line-height:160%;
+}
+.admin-license .intro img{
+ width:300px;
+ height:200px;
+}
+.admin-license .current-values td{
+ padding:5px 8px;
+ font-size:120%;
}
@@ -296,11 +372,6 @@
display:flex;
flex-direction:column;
}
-.admin-modules .module-icon{
- width:24px;
- margin-right:8px;
- filter:var(--image-filter);
-}
.admin-modules .message{
margin:0px;
padding:15px 20px;
@@ -340,6 +411,7 @@
.admin-ldaps .entry-actions{
display:flex;
flex-flow:row nowrap;
+ gap:calc(var(--spacing) / 2);
margin:5px 0px 12px;
}
@@ -355,13 +427,15 @@
flex-flow:row nowrap;
max-width:1200px;
margin:0px 12px 12px 0px;
- border:2px solid var(--color-border);
- border-radius:4px;
+ border:var(--border-input);
+ border-radius:var(--border-input-radius);
+ box-shadow:var(--shadow-input);
+ background-color:var(--color-bright);
+ overflow:hidden;
}
.admin-repo .repo-module .part{
display:flex;
flex-flow:column nowrap;
- background-color:var(--color-bg);
}
.admin-repo .repo-module .bad-state{
color:var(--color-error);
@@ -401,11 +475,26 @@
display:flex;
flex-flow:column;
align-items:flex-start;
- border-left:2px solid var(--color-border);
+ border-left:1px solid var(--color-border);
flex:0 1 300px;
}
+/* logs */
+.admin-logs{}
+.admin-logs-content{
+ display:flex;
+ flex-flow:column nowrap;
+}
+.admin-logs-settings{
+ margin:16px;
+}
+.admin-logs-table{
+ flex:1 1 auto;
+ overflow:auto;
+}
+
+
/* roles */
.admin-roles .content{
flex:1 1 auto;
@@ -440,13 +529,14 @@
overflow-x:hidden;
color:var(--color-font);
border-bottom:1px solid var(--color-border);
- background-color:var(--color-bg);
+ background-color:var(--color-input);
}
.admin-roles .admin-role-members .entry{
display:flex;
flex-flow:row nowrap;
border-bottom:1px dotted var(--color-border);
align-items:center;
+ gap:6px;
padding:2px 5px;
position:relative;
}
@@ -459,7 +549,7 @@
border:none;
outline:none;
color:var(--color-font);
- background-color:var(--color-bg);
+ background-color:var(--color-input);
}
@@ -472,8 +562,6 @@
}
.admin-files tr.attribute-title td{
font-weight:bold;
- padding-top:16px;
- padding-bottom:16px;
}
@@ -481,23 +569,34 @@
.admin-scheduler table{
max-width:1300px;
}
+.admin-scheduler .message{
+ margin:0px;
+ padding:15px 20px;
+ background-color:var(--color-bg);
+}
+.admin-scheduler .message.error{
+ color:var(--color-error);
+}
-/* mails */
-.admin-mails{}
-.admin-mails .row-actions{
+/* mail spooler */
+.admin-mail-spooler{}
+.admin-mail-spooler .mail-testing{
display:flex;
flex-flow:row nowrap;
}
-.admin-mails .mail-testing{
- display:flex;
- flex-flow:row nowrap;
-}
-.admin-mails .mail-testing h1{
+.admin-mail-spooler .mail-testing h1{
margin-right:9px;
}
+/* mail traffic */
+.admin-mail-traffic{}
+.admin-mail-traffic-settings{
+ margin:16px;
+}
+
+
/* cluster */
.admin-cluster .config{
max-width:400px !important;
@@ -521,9 +620,10 @@
padding:22px 12px;
display:flex;
flex-flow:column nowrap;
- border:1px solid var(--color-border);
- border-radius:5px;
- background-color:var(--color-bg);
+ border:var(--border-input);
+ border-radius:var(--border-input-radius);
+ box-shadow:var(--shadow-input);
+ background-color:var(--color-bright);
position:relative;
}
.admin-cluster-node img.server{
@@ -544,6 +644,10 @@
top:10px;
right:10px;
}
+.admin-cluster-node .icons.left{
+ right:unset;
+ left:10px;
+}
.admin-cluster-node .icons img.status{
width:32px;
height:32px;
diff --git a/www/comps/admin/admin.js b/www/comps/admin/admin.js
index abb558f7..a38e8d5f 100644
--- a/www/comps/admin/admin.js
+++ b/www/comps/admin/admin.js
@@ -7,119 +7,155 @@ let MyAdmin = {
MyAdminDocs,
},
template:`
-
-
-
-
-
-

-
{{ capApp.title }}
-
-
-
-
-
-
-
-
-
-
- {{ capApp.navigationConfig }}
-
-
-
-
-
- {{ capApp.navigationLogins }}
-
-
-
-
-
- {{ capApp.navigationRoles }}
-
-
-
-
-
- {{ capApp.navigationLoginTemplates }}
-
-
-
-
-
- {{ capApp.navigationLdaps }}
-
-
-
-
-
- {{ capApp.navigationModules }}
-
-
-
-
-
- {{ capApp.navigationRepo }}
-
-
-
-
-
- {{ capApp.navigationMailAccounts }}
-
-
-
-
-
- {{ capApp.navigationMails }}
-
-
-
-
-
- {{ capApp.navigationBackups }}
-
-
-
-
-
- {{ capApp.navigationCluster }}
-
-
-
-
-
- {{ capApp.navigationFiles }}
-
-
-
-
-
- {{ capApp.navigationLogs }}
-
-
-
-
-
- {{ capApp.navigationScheduler }}
-
-
-
-
-
- {{ capApp.navigationLicense }}
-
+
+
+
+
+
+
+ {{ capApp.navigationConfig }}
+
+
+
+
+
+ {{ capApp.navigationLogins }}
+
+
+
+
+
+ {{ capApp.navigationLoginSessions }}
+
+
+
+
+
+ {{ capApp.navigationLoginTemplates }}
+
+
+
+
+
+ {{ capApp.navigationRoles }}
+
+
+
+
+
+ {{ capApp.navigationModules }}
+
+
+
+
+
+ {{ capApp.navigationRepo }}
+
+
+
+
+
+ {{ capApp.navigationMailAccounts }}
+
+
+
+
+
+ {{ capApp.navigationMailSpooler }}
+
+
+
+
+
+ {{ capApp.navigationMailTraffic }}
+
+
+
+
+
+ {{ capApp.navigationBackups }}
+
+
+
+
+
+ {{ capApp.navigationFiles }}
+
+
+
+
+
+ {{ capApp.navigationLogs }}
+
+
+
+
+
+ {{ capApp.navigationScheduler }}
+
+
+
+
+
+ {{ capApp.navigationCaptionMap }}
+
+
+
+
+

+
{{ licenseTitle }}
+
+
+
+
+ {{ capApp.navigationActivation }}
+
+
+
+
+
+ {{ capApp.navigationSystemMsg }}
+
+
+
+
+
+ {{ capApp.navigationCustom }}
+
+
+
+
+
+ {{ capApp.navigationLdaps }}
+
+
+
+
+
+ {{ capApp.navigationOauthClients }}
+
+
+
+
+
+ {{ capApp.navigationCluster }}
+
@@ -132,10 +168,15 @@ let MyAdmin = {
$route(val) {
if(val.hash === '')
this.showDocs = false;
+
+ if(this.activated && (val.path.includes('license') || val.path.includes('login-sessions')))
+ this.getConcurrentLogins();
}
},
data() {
return {
+ concurrentLogins:0, // count of concurrent logins (full)
+ concurrentLoginsLimited:0, // count of concurrent logins (limited)
ready:false,
showDocs:false
};
@@ -144,32 +185,58 @@ let MyAdmin = {
if(!this.isAdmin)
return this.$router.push('/');
+ this.getConcurrentLogins();
this.ready = true;
- this.$store.commit('moduleColor1','');
},
computed:{
contentTitle:(s) => {
- if(s.$route.path.includes('backups')) return s.capApp.navigationBackups;
- if(s.$route.path.includes('cluster')) return s.capApp.navigationCluster;
- if(s.$route.path.includes('config')) return s.capApp.navigationConfig;
- if(s.$route.path.includes('docs')) return s.capApp.navigationDocs;
- if(s.$route.path.includes('files')) return s.capApp.navigationFiles;
- if(s.$route.path.includes('license')) return s.capApp.navigationLicense;
- if(s.$route.path.includes('logins')) return s.capApp.navigationLogins;
- if(s.$route.path.includes('logintemplates')) return s.capApp.navigationLoginTemplates;
- if(s.$route.path.includes('logs')) return s.capApp.navigationLogs;
- if(s.$route.path.includes('ldaps')) return s.capApp.navigationLdaps;
- if(s.$route.path.includes('mailaccounts')) return s.capApp.navigationMailAccounts;
- if(s.$route.path.includes('mails')) return s.capApp.navigationMails;
- if(s.$route.path.includes('modules')) return s.capApp.navigationModules;
- if(s.$route.path.includes('repo')) return s.capApp.navigationRepo;
- if(s.$route.path.includes('roles')) return s.capApp.navigationRoles;
- if(s.$route.path.includes('scheduler')) return s.capApp.navigationScheduler;
+ if(s.$route.path.includes('backups')) return s.capApp.navigationBackups;
+ if(s.$route.path.includes('caption-map')) return s.capApp.navigationCaptionMap;
+ if(s.$route.path.includes('cluster')) return s.capApp.navigationCluster;
+ if(s.$route.path.includes('config')) return s.capApp.navigationConfig;
+ if(s.$route.path.includes('custom')) return s.capApp.navigationCustom;
+ if(s.$route.path.includes('docs')) return s.capApp.navigationDocs;
+ if(s.$route.path.includes('files')) return s.capApp.navigationFiles;
+ if(s.$route.path.includes('license')) return s.capApp.navigationActivation;
+ if(s.$route.path.includes('logins')) return s.capApp.navigationLogins;
+ if(s.$route.path.includes('login-sessions')) return s.capApp.navigationLoginSessions;
+ if(s.$route.path.includes('login-templates')) return s.capApp.navigationLoginTemplates;
+ if(s.$route.path.includes('logs')) return s.capApp.navigationLogs;
+ if(s.$route.path.includes('ldaps')) return s.capApp.navigationLdaps;
+ if(s.$route.path.includes('mail-accounts')) return s.capApp.navigationMailAccounts;
+ if(s.$route.path.includes('mail-spooler')) return s.capApp.navigationMailSpooler;
+ if(s.$route.path.includes('mail-traffic')) return s.capApp.navigationMailTraffic;
+ if(s.$route.path.includes('modules')) return s.capApp.navigationModules;
+ if(s.$route.path.includes('oauth-clients')) return s.capApp.navigationOauthClients;
+ if(s.$route.path.includes('repo')) return s.capApp.navigationRepo;
+ if(s.$route.path.includes('roles')) return s.capApp.navigationRoles;
+ if(s.$route.path.includes('scheduler')) return s.capApp.navigationScheduler;
+ if(s.$route.path.includes('system-msg')) return s.capApp.navigationSystemMsg;
return '';
},
+ licenseTitle:(s) => !s.activated
+ ? s.capApp.navigationLicense
+ :`${s.capApp.navigationLicense} (${s.concurrentLogins}/${s.license.loginCount} - ${s.concurrentLoginsLimited}/${s.license.loginCount * s.limitedFactor})`,
// stores
- capApp: (s) => s.$store.getters.captions.admin,
- isAdmin:(s) => s.$store.getters.isAdmin
+ activated: (s) => s.$store.getters['local/activated'],
+ bgStyle: (s) => s.$store.getters.colorMenuStyle,
+ capApp: (s) => s.$store.getters.captions.admin,
+ colorMenu: (s) => s.$store.getters.colorMenu,
+ isAdmin: (s) => s.$store.getters.isAdmin,
+ license: (s) => s.$store.getters.license,
+ limitedFactor:(s) => s.$store.getters.constants.loginLimitedFactor
+ },
+ methods:{
+ // backend calls
+ getConcurrentLogins() {
+ ws.send('loginSession','getConcurrent',{},true).then(
+ res => {
+ this.concurrentLogins = res.payload.full;
+ this.concurrentLoginsLimited = res.payload.limited;
+ },
+ this.$root.genericError
+ );
+ }
}
};
\ No newline at end of file
diff --git a/www/comps/admin/adminBackups.js b/www/comps/admin/adminBackups.js
index 8a10982c..e0947de3 100644
--- a/www/comps/admin/adminBackups.js
+++ b/www/comps/admin/adminBackups.js
@@ -14,7 +14,7 @@ let MyAdminBackups = {
@@ -32,43 +32,45 @@ let MyAdminBackups = {
{{ capApp.dirNote }}
@@ -78,7 +80,7 @@ let MyAdminBackups = {
{{ capApp.list }}
-