
Continuous Database

How to Create Tables in the SuperOffice Database for Version 8.1 and Higher

Introduction

Long gone are the days when software was built using a fixed Waterfall approach of establishing product requirements, designing the software architecture, coding an implementation, verifying results and finally entering maintenance mode.

Nearly all software projects these days have adopted agile methodologies that take an iterative approach: there are no permanent requirements; instead, an incremental, evolutionary product lifecycle is the standard.

Unlike most software application upgrade paths, which simply replace existing application files, deploying database structural changes is hard, because the data in the tables cannot be thrown away when a new structure is deployed. Harder still are changes requiring both structural and data changes. Another problem is maintaining the change logs of specific software versions and their associated database versions: there is no easy way to document which software version depends on which database schema version.

Therefore, based on principles set forth by evolutionary database design, Continuous Database (CD) is the latest SuperOffice process for applying incremental changes to the SuperOffice database in a continuous way. This new process is how both SuperOffice and third-parties can continuously update a database schema to reflect ever-changing business requirements, without ever having to depend on a hardcoded, fixed schema again.

Getting Started

Before diving into details, let’s first establish a point of reference. You are going to start off by interacting with a SuperOffice database that is either pre-version 8.1, version 8.1 or post-version 8.1, and one that either does or does not have third-party tables. Under those assumptions, you are most likely to encounter one or more of the following scenarios:

  1. Pre-SuperOffice 8.1 database with third-party tables.
  2. Pre-SuperOffice 8.1 database without third-party tables.
  3. SuperOffice 8.1 or higher with third-party tables.
  4. SuperOffice 8.1 or higher without third-party tables.

With respect to both #1 and #2, we recommend using ServerSetup to upgrade customer installations. This will upgrade both the customers’ installation and database to the latest continuous database process.

If customers do not upgrade to SuperOffice 8.1, the only available option for third-party tables is continued use of the legacy Dictionary SDK to create new, or modify existing, third-party tables.

With respect to #3 and #4, all third-parties must come to accept, understand and adopt the continuous database process, as the remainder of this article presents.

Fundamentals

So how does SuperOffice isolate itself from unpredictable database changes? From version 8.1, SuperOffice creates an in-memory model of the database from the one stored in the DatabaseModel table. The model is a direct representation of what tables physically exist in the database. The database model also contains a list of the latest schema changes that have been applied to the database, as a list of dictionary steps.

CD defines database variations in two dimensions. Individual features refer to step names. Each feature, or step name, has individual steps for each version of the feature. Teams can work in parallel on their features and produce steps in numbered series.

 

SuperOffice CRM demonstrates multiple features being developed in parallel. Even though this is not exposed for customization by partners, the concept of features being developed in parallel is verified, the same way as if partners were developing customizations in different versions in parallel. The current version of SuperOffice CRM Online consists of 9 different features with 35+ steps.

For each table, field, index and relation definition in the database model, there is a reference to the dictionary step responsible for its creation, as well as the last step update. This is useful for tracing artifact changes and origin.

DictionaryStep Overview

A dictionary step is responsible for defining a list of schema changes and optionally importing priming data. Schema changes are actions, such as a new table, new fields in an existing table, new indexes and many more.

Database Model

Each dictionary step is uniquely identified by its name and step number combination. While the name is generally associated with a product name or feature, the step number is usually equal to an iteration. The step number indicates the order in which each dictionary step is applied to the database model, to ensure all necessary changes are present and accounted for.

The dictionary step description should represent a general description of what changes are performed by the dictionary step.

Let’s look at an example. Suppose a vendor called Uno creates a dictionary step that adds a string field to the contact table that is to be 25 characters in length. As seen in the figure below, the initial dictionary step number to perform that action is defined as having a StepNumber set to 1. Next, suppose Uno decides to change the string field property to support 255 characters in length. The third-party must then define a new dictionary step that sets the StepNumber to 2.
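Using the DictionaryStep API shown later in this article, Uno's two iterations might be sketched as follows. This is a hedged illustration: the field name and the exact ModifyField parameters are assumptions for the example, not SuperOffice-confirmed definitions.

```csharp
using SuperOffice.CD.DSL.V1.StepModel;

namespace Uno.DictionarySteps
{
    // Step 1: the initial 25-character string field on the contact table.
    [Dictionary("UnoContactField", 1, ReleaseState.Released)]
    public class UnoContactField1 : DictionaryStep
    {
        public override void Structure()
        {
            ModifyTable("contact")
                .AddString("Uno_ExtraInfo", "Uno's extra contact info", 25, false)
            ;
        }
    }

    // Step 2: widen the same field to 255 characters.
    [Dictionary("UnoContactField", 2, ReleaseState.Released)]
    public class UnoContactField2 : DictionaryStep
    {
        public override void Structure()
        {
            ModifyTable("contact")
                .ModifyField("Uno_ExtraInfo", maxLength: 255)
            ;
        }
    }
}
```

Note how both classes share the step name "UnoContactField" while the step number advances from 1 to 2.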

A second example is when there are two third-party integrations that make database schema changes. In addition to the previously mentioned Uno, third-party Duo comes along and adds a field to the contact table. Duo’s dictionary step must be uniquely named and the step number is then 1.

The dictionary step state property is used to indicate whether this dictionary step is an “InDevelopment” or “Released” state.

Note that third-parties must respect each other and only change tables and fields they themselves have created. Third parties should also do their best to prevent field naming collisions and use a suitable prefix for their tables and fields.

The process of making changes to the database is called “Application of dictionary steps”. Since dictionary steps contain actions such as “Add field” or “Add table”, applying them means making that change, simultaneously, to the database model and the physical database. The steps themselves are not stored in the database (only their names and numbers, for tracking purposes).

A dictionary step can only be applied once; steps with the same name are applied strictly consecutively (no gaps); each chain of steps that share a name has to start with step 1.

The result of applying steps is a changed database schema, and a corresponding DatabaseModel that describes the changed schema, and thus the code can know what the database now looks like.

Uninstalling a DictionaryStepInfo from the model is accomplished by creating a DictionaryStep with the StepNumber set to Integer.MaxValue, or 2147483647. It's your responsibility to implement the class and completely remove all structural changes ever made by your dictionary steps.
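A minimal sketch of such an uninstall step, reusing a hypothetical Uno feature and assuming DeleteField behaves as described later in this article, might look like this:

```csharp
using SuperOffice.CD.DSL.V1.StepModel;

namespace Uno.DictionarySteps
{
    // StepNumber int.MaxValue (2147483647) marks this step as the uninstaller.
    [Dictionary("UnoContactField", int.MaxValue, ReleaseState.Released)]
    public class UnoContactFieldUninstall : DictionaryStep
    {
        public override void Structure()
        {
            // Remove every structural change the earlier steps of this feature made.
            ModifyTable("contact")
                .DeleteField("Uno_ExtraInfo")
            ;
        }
    }
}
```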

Dictionary Step Implementation

So far, the explanation of a dictionary step has only included the concept of what it is and how it contributes towards smooth evolutionary database design. So how are they defined? On one hand, there is the definition of the dictionary step and on the other there is the implementation.

In terms of API dependencies, third-parties must create a .NET assembly that references two SuperOffice assemblies for Continuous Database:

  • SuperOffice.CD.DSL.dll
  • SuperOffice.CD.DSL.Implementation.dll

Third-parties must create a class decorated with a DictionaryAttribute attribute, and the class must inherit from the base class DictionaryStep. Both the DictionaryAttribute and DictionaryStep base class are defined in SuperOffice.CD.DSL.dll.

A third-party is expected to override and implement at least one of three primary methods:

  1. Structure: tables are created, modified or dropped in the database and model.
  2. ImpFileNames: performs priming data import after Structure is complete.
  3. CustomPriming: performs unique priming data actions or data transformations after ImpFileNames is complete, using direct SQL statements.

 

Below is an example DictionaryStep that overrides the Structure and ImpFileNames methods.

using SuperOffice.CD.DSL.V1.StepModel;
using System.Collections.Generic;
 
namespace SuperOffice.DevNetCddLib.DictionarySteps
{
    [Dictionary("DevNetChat", 1, ReleaseState.Released)]
    public class ChatRoom1 : DictionaryStep
    {
        public override void Structure()
        {
            CreateTable("DN_ChatRoom", "Contains chatroom settings")
                .TableProperties.Replication(ReplicationFlags.Down | ReplicationFlags.Up | ReplicationFlags.Prototype)
                .TableProperties.CodeGeneration(MDOFlags.None, HDBFlags.None, UdefFlags.None, SentryFlags.None)
                .AddString("Name", "Name of the chatroom", 75, true)
                .AddString("Topic", "Description of room content", 255, false)
                .AddEnum<DNRoomStatus>("RoomStatus", "Determines if chat room is open or closed")
                ;
        }
 
        /// <summary>
        /// Return the hard-coded list of standard IMP files for a new 8.1 installation;
        /// </summary>
        /// <returns></returns>
        public override List<string> ImpFileNames()
        {
            // these are the .IMP files
            return new List<string>
            {
                @"I_ChatRoom.imp",
            };
        }
    }
 
    [DbEnum("Value for field 'RoomStatus' in table 'DN_ChatRoom'.", Layer.Core)]
    public enum DNRoomStatus : short
    {
        [DbEnumMember("Set chat room status to closed.")]
        Closed = 0,
 
        [DbEnumMember("Set chat room status to open")]
        Open = 1,
    }
}

 

Only implement the methods that have actual content, i.e. do not create empty overrides as that leads to degraded performance during upgrades.

The DictionaryStep is conceptually a pipeline to:

  • Perform schema changes.
  • Add priming data.
  • Transform table data.

 

While none of the methods are required, each routine presents an opportunity to make database changes. Whether physical schema changes, priming data imports, or simply data transformations in the database, actions done in the pipeline are a means to ensure an agile and evolutionary database design.

Implementation Overview

Structure is where all database table, field and index changes must be made. There are several intuitive methods in the base class to make such actions easy; they are explored more thoroughly in the next section.

Once a table is created, and populating the table with priming data is desired, the class need only implement ImpFileNames. ImpFileNames returns a list of IMP file names to the Continuous Database library, which iterates over the list and begins to import data.

When more flexibility is required, such as calculated fields or transforming data from existing tables, the dictionary step should override the CustomPriming method.

Methods declared in the base class, ExecuteSelect and ExecuteSql, can issue SQL statements to the database to perform queries or perform actions, respectively. However, these methods should be used sparingly. Issuing SQL statements using these methods is covered in a later section.

Finally, it’s important to understand that all schema changes should be done in a small, compact and isolated manner. Changes should be done in such a way that make it easy to understand and manage. Evolutionary database design benefits from small incremental changes.

Once a dictionary step is committed to the database, it is final. It cannot be undone. The only way to change the last dictionary step is to create a new dictionary step that makes yet another change to the database!

Structure Method

Changing the database schema is facilitated by three methods in the base class:

  1. CreateTable
  2. DropTable
  3. ModifyTable

Each method is a fluent interface, and therefore easy-to-read and easy-to-use. This simple example illustrates this point:

public override void Structure()
{
    CreateTable("Movie", "Movie table for movie buffs")
        .AddString("Name", "English name of the movie", 255, notNull: true)
        .AddString("Description", "Description of the movie", 255, notNull: false)
        .AddBool("Rated", "Has this movie been rated?")
        .AddBlob("MoviePreviewImage", "Compressed image string", notNull: false)
    ;

    ModifyTable("Movie")
        .AddString("Genres", "The style or category of the movie", 255)
        .AddInt("Rating", "The rating for this movie")
    ;

    ModifyTable("Movie")
        .ModifyField("Description", maxLength: 1024)
    ;

    DropTable("Movie");
}

 

Each fluent method expects a minimal set of required parameters while permitting several named parameters to override field defaults, such as notNull: false.

In addition to fluent methods for tables, there are fluent methods that make it easy to add and modify fields of all supported field types. There are also handy macro-methods to create common field patterns, such as AddRegisteredUpdated(). AddRegisteredUpdated adds five fields: registered, registered_associate_id, updated, updated_associate_id and updatedCount, all with standard settings.
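As a brief, hedged sketch of the macro methods in use (the table name here is made up for illustration, and the comments restate what the article says each macro adds):

```csharp
public override void Structure()
{
    CreateTable("DN_MovieGenre", "List of movie genres")
        .AddMDO()                 // name, rank, tooltip, deleted
        .AddRegisteredUpdated()   // registered, registered_associate_id,
                                  // updated, updated_associate_id, updatedCount
    ;
}
```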

CreateTable Methods (inherits from ITableBuilderBase)

  • Legacy: access methods to set legacy parameters; please do not use unless you know what you are doing.
  • TableProperties: access methods to set table properties that influence the generated code (rather than the database table itself).
  • CodeGeneration: set code-generation flags for MDO, HDB (table objects), Udef and Sentry functionality.
  • Replication: set replication/Travel flags; please note that Travel is no longer being actively developed, so do not create new products depending on it.
  • HasVisibleFor: set that records in this table have access control via corresponding records in VisibleFor.

See ITableBuilderBase below.

ModifyTable Methods (inherits from ITableBuilderBase)

  • ModifyField: modifies the properties of a field definition.
  • DeleteField: deletes a field. Be careful, this is a drastic thing to do and breaks backwards compatibility, so you need to be sure no old code is going to run against this database. Any indexes that contain this field will be deleted automatically, both in the database model and in the physical database. Physical indexes will be detected and deleted even if they have been made outside of this product, for instance by DBAs and optimizers.
  • DeleteIndex: deletes the index that spans the given fields, if it exists. Nothing bad happens if no such index exists.

See ITableBuilderBase below.

ITableBuilderBase Methods

  • AddBool: add a boolean field, generally represented by a 16-bit integer in the database.
  • AddBlob: add a BLOB (Binary Large Object) field, up to 1GB in size.
  • AddDateTime: add a datetime field. An optional parameter is used to indicate UTC.
  • AddDouble: add a double floating-point field.
  • AddEnum: add an enum field, represented in the database by a 16-bit integer. The enum reference causes strongly typed properties in TableInfo and Row objects. See below for declaring enums.
  • AddForeignKey: add a foreign key field.
  • AddId: add a 32-bit integer field that is to be understood as a row ID. Use AddForeignKey to add foreign keys, and use AddRecordId/AddTableNumber to set up universal relations.
  • AddInt: add a 32-bit signed integer field.
  • AddRecordId: add a Record Number, a foreign key that forms the second half of a universal relation.
  • AddString: add a string field. See the maxLength parameter for important notes.
  • AddTableNumber: add a Table Number, represented by a 16-bit integer in the database. The value is understood to be a table number corresponding to a table as defined in the sequence table; such numbers are constant over the lifetime of a database but can vary between databases. The table number forms one half of a universal relation.
  • AddUShort: add a 16-bit unsigned integer field.

Index Methods

  • AddIndex: add an index over one or more columns. A particular column name list (order matters) can only occur once.
  • AddFulltextIndex: add a fulltext index over one or more columns. A table can have at most one fulltext index; later calls to this method simply add fields to any existing fulltext index. Note that fulltext indexing only happens on SQL Server with the feature installed; otherwise no action is taken.
  • AddUniqueIndex: add a unique index over one or more columns. A column name list (order matters) can only occur once.

Field Methods

  • SetHash: set special hash flags for a field; a field can be a member of a row hash, or can be the hash itself.
  • SetPrivacy: set privacy tags, which can be used to ensure compliance with privacy rules.
  • SetSearch: set special search flags on a field.

Macro Methods

  • AddMDO: add four fields that make up the standard start of a list table: name, rank, tooltip, deleted.
  • AddRegisteredUpdated: add five fields: registered, registered_associate_id, updated, updated_associate_id, updatedCount, with all standard options. Used on most tables and populated automatically by the NetServer database layer.
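As a hedged sketch of how the index methods might be combined with field methods in a Structure override, assuming AddIndex and AddUniqueIndex take the column names as parameters (the table and field names are illustrative):

```csharp
public override void Structure()
{
    CreateTable("DN_Movie", "Movie table for movie buffs")
        .AddString("Name", "English name of the movie", 255, true)
        .AddInt("Year", "Release year")
        .AddIndex("Year")                  // plain index over one column
        .AddUniqueIndex("Name", "Year")    // the Name+Year combination must be unique
    ;
}
```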

 

The ITableRemover interface contains only one method, DropTable, which accepts the name of the table to be dropped. This permanently removes the table, so be careful: this is a drastic action that breaks backwards compatibility, and you need to make sure no old code runs against this database.

Enumerations

Using the AddEnum method, a field in the database can be connected to an enumeration type in the code. Here it is important to understand that such an enum will exist in two versions.

To define the enum so that it can be used in a dictionary step, it needs to be declared in the same assembly as the dictionary step. The enum itself is decorated using the DbEnumAttribute, and each member is decorated with DbEnumMember. For instance:

[DbEnum("Value for field 'private' in table 'appointment'.", Layer.Core)]
public enum AppointmentPrivate : short
{
    [DbEnumMember("This appointment can be read by anyone")]
    Public = 0,

    [DbEnumMember("This appointment can only be read/seen by the owner")]
    PrivateUser = 1,

    [DbEnumMember("This appointment can only be read by members of the owners group")]
    PrivateGroup = 2,
}

 

This enum is what we call the model enum.

Once the enum xyz is declared, it can be used in an AddEnum<xyz>() method call within the Structure() method of a dictionary step. This will cause the creation of a 16-bit integer field in the database, and code generated for the TableInfo and the Row object will have strongly-typed references to it.

However, that generated NetServer code does not reference the enum declared in the dictionary step – it references a generated version of the same enum in the NetServer generated code! The reason for this is simple: the enum in NetServer must be known and available to all NetServer code, whether the dictionary step class is present or not.

In a customer installation, the assembly containing the dictionary step is only present in the DevNet Toolbox while the database is being upgraded and is never required to be in the NetServer installations. Thus, a new NetServer-native enum is generated together with the TableInfo and Row code. This one is the NetServer enum.

While the model-enum lives in the same assembly as the dictionary steps, it is not a member of any particular step and is generally declared directly in the namespace. If it needs to be changed (for instance, adding new values), then just change it, generate new NetServer code from it and release that.

Changing an enum does not need a dictionary step, and is in fact not connected to a step. The only connection between steps and enums happens when an AddEnum<> method call declares a new database field to be of an enum kind.

Structure Implementation

Under the covers, a call to CreateTable, DropTable or ModifyTable generates a BuildCommand, which is an objectified representation of the actions they define. For example, CreateTable creates a BuilderAddTableCommand instance, DropTable creates a BuilderRemoveTableCommand instance and ModifyTable creates a BuilderModifyTableCommand instance.

BuildCommands are applied towards the database after the Structure method is called.

ImpFileNames Method

IMP files are a means to populate a table with priming data once the DictionaryStep has completed performing any schema changes defined in the Structure method. The sole purpose of ImpFileNames() is to return a list of file names containing data that Continuous Database will discover and import.

public override List<string> ImpFileNames()
{
    return new List<string>
     {
         @"abc.imp"
     };
}

 

Dictionary steps can provide IMP files in one of two ways:

  1. As a physical file located in the same location as the executing assembly.
  2. As an embedded resource in the same assembly as the DictionaryStep.

If using physical files, understand that the resource resolver will begin in the executing directory and then look for a folder called ImpFiles. It expects to find your IMP files inside this folder. That said, embedded resource files are the preferred method, and must be structured in the following way.

 

To ensure your IMP files are embedded as a resource, select the file to display the IMP file properties, then set the Build Action property to Embedded Resource.

 

When compiled, the embedded resource file is structured in the following format:

ProjectName.ImpFiles.DictionaryName._StepNumber.Files.Filename.imp

In a de-compiler the embedded resource will appear in the assembly Resources folder.
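One way to verify what names your IMP files were actually embedded under, without a de-compiler, is to list the assembly's manifest resources using standard .NET reflection. The assembly file name below is a placeholder for your own dictionary-step assembly:

```csharp
using System;
using System.Reflection;

class ListEmbeddedResources
{
    static void Main()
    {
        // Load the assembly containing your dictionary steps and print every
        // embedded resource name, which should follow the pattern
        // ProjectName.ImpFiles.DictionaryName._StepNumber.Files.Filename.imp
        var assembly = Assembly.LoadFrom("MyDictionarySteps.dll");
        foreach (string name in assembly.GetManifestResourceNames())
            Console.WriteLine(name);
    }
}
```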

 

While embedded resource files are the preferred way to supply priming data, it’s also possible to discover IMP files that exist in the directory where the application applying the step resides. The advantage of embedded resources is easy deployment: only one file, the compiled assembly, is needed, and thus the chance of deployment failures due to missing IMP files is eliminated.

Warning:

If you are using Visual Basic, or some .NET language other than C#, be aware that the compiler might not embed resources the same way as shown above. We know that the Visual Basic compiler ignores the folders when naming the embedded resource.

Below is an image of a Visual Basic project with an embedded resource with the same folder structure as shown above.

Visual Basic Project folder structure:

 

When compiled, the Visual Basic compiler writes the resources differently than the C# compiler.

Compiled Visual Basic Resources:

 

To solve this, if you are using Visual Basic projects to manage your dictionary steps, you must name the file in such a way that Continuous Database can discover it.

By changing the name of the file, regardless of the folder structure, the embedded resource will be named accordingly.

 

As seen in the following image, the folder structure didn’t matter, and the filename matches what the Continuous Database resource resolver is able to find.

 

As stated earlier, we believe embedded resources are the best way to manage your import files.

Internationalization and Localization

Tables can be populated with language specific data from IMP files that are in language specific folders.

The folder structure must be the same as before, where IMP files are placed in a folder called Files, only now there must be a language-code named subfolder where the preferred IMP files reside.

The language specific file structure becomes:

ProjectName.ImpFiles.DictionaryName._StepNumber.Files.LangCode.Filename.imp

 

Looking at the image with language folders, you can see two language specific folders with two IMP files in each, one for Danish and one for US English.

During execution, the pipeline determines which file is used based on which language was selected during the installation of SuperOffice. During an installation, or an upgrade to SuperOffice 8.1, SuperOffice inserts the preferred language as a record in the ProductVersion table.

ProductVersion        ownerName       codeName             version
productVersion_id     SuperOffice     databaselanguage     US

With this information, the underlying database management routine knows which folder to read the correct language-specific data from.

When ImpFileNames returns a list of IMP filenames, the preferred language is used to first search for a folder with a name that matches, and contains a file with that name. Files with a matching name in the language folder take precedence. Therefore, if an IMP file with the same name resides in both the language folder and root Files folder, only the one that resides in the language folder is used.
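This precedence rule can be sketched as a small resolver: given the preferred language and the set of embedded resource names, prefer the language folder and fall back to the root Files folder. The helper itself is hypothetical; only the naming pattern follows the format shown above:

```csharp
using System.Linq;

static class ImpResolver
{
    // Returns the resource name to import for a given IMP file name, preferring
    // the language-specific folder over the root Files folder.
    // prefix is e.g. "ProjectName.ImpFiles.DictionaryName._1".
    public static string Resolve(string[] resources, string prefix, string langCode, string fileName)
    {
        string langName = prefix + ".Files." + langCode + "." + fileName;
        string rootName = prefix + ".Files." + fileName;

        if (resources.Contains(langName)) return langName;  // language folder wins
        if (resources.Contains(rootName)) return rootName;  // fall back to root Files
        return null;                                        // not found
    }
}
```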

Imp File Format

Imp files are tab-delimited data files used to populate tables with priming data. There are several configuration options that partners can leverage to control priming data in their applications. These files are conceptually broken into two components: a header section and a data section.

Line 1: ;This is a comment line, describe your table, intentions, etc.
Line 2: [UnoRoom]
Line 3: Truncate_Table
Line 4: ;room_id nam creatr Registered regby updated updatedby updated count
Line 5:	0	Room1	0	0	0	0	0	0	0
Line 6:	0	Room2	0	0	0	0	0	0	0
Line 7:	0	Room3
Line 8:	0	Room4	0	0	0	0	0	0	0
Line 9:	0	Room5

 

Header Section

The header section contains the table element and optional functions that perform operations such as truncate a table or field. Header elements you are likely to see in IMP files are:

  • [TABLENAME]: table declaration, before all functions and data lines.
  • TRUNCATE_TABLE: removes all rows from a table.
  • TRUNCATE_FIELD: removes specific rows from a table.
  • SET_AUTODATEUPDATE_OFF: turns off setting date-time values on fields named ‘registered’.
  • SET_BUILTIN: turns on setting fields named “builtin” to 1.
 

Except for a comment line, which is a line that begins with a semi-colon, the table name must always be declared first.

Header elements you are likely to use in your own IMP files are:

  • [TABLENAME]
  • TRUNCATE_TABLE
  • TRUNCATE_FIELD

Other functions in the table above are primarily reserved for default priming data used during SuperOffice installations and upgrades.

Truncate Functions

In cases where you need to truncate an entire table or just certain rows, IMP files support:

  1. TRUNCATE_TABLE
  2. TRUNCATE_FIELD

Both functions will apply to the previously declared table. For example:

[abc]
Truncate_Table

Truncate_Table will delete all rows in the table named abc. This directly translates into the SQL statement:

   TRUNCATE TABLE abc;

When Truncate_Field is used, it only deletes the rows where the criteria match. The format is a tab-delimited line in the IMP file that defines the table on a line, followed by the function and then the parameters.

[Table]
Truncate_Field \t columnName \t ColumnValue

A demonstration of how that looks is seen in this example:

[abc]
truncate_field xyz    2

This translates into the following SQL statement:

   DELETE FROM abc WHERE abc.xyz = 2;

Multiple calls to the same function must be specified on a new line.

[Table]
truncate_field columnName  2
truncate_field columnName  10
truncate_field columnName  12

Other truncate functions are used exclusively by SuperOffice:

TRUNCATE_BUILTIN

        DELETE FROM TableName WHERE TableName.isBuiltIn = 1

TRUNCATE_BUILTIN_FIELD

        DELETE FROM TableName
        WHERE TableName.FieldName = ColumnValue AND TableName.isBuiltIn = 1

Data Section

Below is a simple IMP file that contains a table named UnoRoom. The first line is a comment, followed by the table declaration in square brackets. The truncate_table function on Line 3 is an instruction to the priming engine to truncate the existing table prior to importing the following data. Line 4 is another comment line that describes the table structure. Lines 5 through 9 are the actual row data that is imported into the UnoRoom table.

A 0 in the first column of a data row instructs the priming engine to automatically generate the sequence id value and insert it.

Columns called registered are by default auto-populated with the current datetime, however this can be switched off with the SET_AUTODATEUPDATE_OFF function.

Line 1: ;This is a comment line, describe your table, intentions, etc.
Line 2: [UnoRoom]
Line 3: Truncate_Table
Line 4: ;room_id nam creatr Registered regby updated updatedby updated count
Line 5:	0	Room1	0	0	0	0	0	0	0
Line 6:	0	Room2	0	0	0	0	0	0	0
Line 7:	0	Room3
Line 8:	0	Room4	0	0	0	0	0	0	0
Line 9:	0	Room5

While you can let the priming engine automatically assign id values, there may be cases where it’s preferred to hardcode the id values instead. In that case, you could simply type the desired id values directly inline. The id values do not have to be in an ordered sequence.

Also, notice how Lines 7 and 9 contain tab-delimited null values. This is completely legal, and the priming engine will insert default values based on the field data type.

Line 1: ;This is a comment line, describe your table, intentions, etc.
Line 2: [UnoRoom]
Line 3: ;room_id nm creatr Registered regby updated  updatedby updated count
Line 4:	1	Room1	0	0	0	0	0	0	0
Line 5:	2	Room2	0	0	0	0	0	0	0
Line 6:	3	Room3	0	0	0	0	0	0	0
Line 7:	4	Room4	0	0	0	0	0	0	0
Line 8:	5	Room5	0	0	0	0	0	0	0

In the simple example above, the column data will be imported into the UnoRoom table, and the rows are assigned the id values defined inline. This is useful when you need to reference these rows by id in other Imp files. Below is an example that does just that: it hardcodes the id values defined in the room Imp file above.

Line 1: [UnoGroup]
Line 2: ;group_id nm rm_id Registered regby updated updatedby updated count
Line 3:	0	Grp1	1	0	0	0	0	0	0
Line 4:	0	Grp2	1	0	0	0	0	0	0
Line 5:	0	Grp3	2	0	0	0	0	0	0
Line 6:	0	Grp4	2	0	0	0	0	0	0
Line 7:	0	Grp5	3	0	0	0	0	0	0

Variables

There are additional options for handling id referencing cases using variables declared with the pound symbol (#).

The following IMP files declare three tables: unogroup, unoroom and a relations table called unogrouprooms. Both the group and room tables use variables in place of assigned id values, and the grouprooms table then uses the variables to populate the relations table with the actual values.

Line 1: [UnoGroup]
Line 2: ;group_id name Registered regby updated updatedby updated count
Line 3:	#GRP1	Grp1	0	0	0	0	0	0
Line 4:	#GRP2	Grp2	0	0	0	0	0	0
Line 5:	#GRP3	Grp3	0	0	0	0	0	0
Line 6:	#GRP4	Grp4	0	0	0	0	0	0
Line 7:	#GRP5	Grp5	0	0	0	0	0	0

Line 1: [UnoRoom]
Line 2: ;rm_id nm creatr Registered regby updated updatedby updated count
Line 3:	#RM1	Room1	0	0	0	0	0	0	0
Line 4:	#RM2	Room2	0	0	0	0	0	0	0
Line 5:	#RM3	Room3	0	0	0	0	0	0	0
Line 6:	#RM4	Room4	0	0	0	0	0	0	0
Line 7:	#RM5	Room5	0	0	0	0	0	0	0

Line 1: [UnoGroupRooms]
Line 2: ;grouproom_id group_id room_id
Line 3:	0	#GRP1	#RM1
Line 4:	0	#GRP2	#RM2
Line 5:	0	#GRP3	#RM3
Line 6:	0	#GRP4	#RM4
Line 7:	0	#GRP5	#RM5

It’s important to note that variables must be declared and resolved before they can be referenced. While primarily for referencing primary keys, they can also be used to reference foreign key columns of type int, short and long.

CustomPriming Method

CustomPriming is the third and final method executed during the DictionaryStep pipeline, and is used to make data transformations that are not otherwise supported. In order to support complex data transformation, the base class exposes two helper methods to perform SQL actions towards the database: ExecuteSql and ExecuteSelect.

ExecuteSql Method

While you really should use Imp files for the bulk of priming data, CustomPriming is used for circumstances when you need to execute raw SQL against existing tables. In those cases, ExecuteSql is there to help execute non-select SQL statements.

Let’s begin with a simple Insert statement example.

In the following code, two DateTime variables are declared for use as parameters to the INSERT statement. Then ExecuteSql is invoked with two arguments: the SQL statement performing the INSERT, and an anonymous type containing the parameter values.

public override void CustomPriming()
{
    var utcNow = DateTime.UtcNow;
    var never = DateTime.MinValue;

    ExecuteSql(@"INSERT INTO {abc} (
        {abc.abc_id},
        {abc.xyz},
        {abc.registered},
        {abc.registered_associate_id},
        {abc.updated},
        {abc.updated_associate_id},
        {abc.updatedCount}
        ) VALUES (
        {@abc_id},
        {@xyz},
        {@registered},
        {@registered_associate_id},
        {@updated},
        {@updated_associate_id},
        {@updatedCount}
        )",
        new
        {
            abc_id = 1,
            xyz = "A String",
            registered = utcNow,
            registered_associate_id = 0,
            updated = never,
            updated_associate_id = 0,
            updatedCount = 0
        });
}

 

There are three representations of curly braces used to define the table, fields and values:

  1. {abc} means “table abc”;
  2. {abc.xyz} means “field xyz of table abc”;
  3. {@foobar} means “parameterized value foobar”, named the same as the corresponding property in the anonymous parameter class.

Continuous Database looks up the table and field names, applies quoting, case conversion, and anything else needed to produce valid SQL, and then properly encodes the parameters.
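For illustration only (the exact quoting and casing depend on the target database dialect, and parameter syntax depends on the driver), the template above might resolve to SQL along these lines, with each {@…} token becoming a driver-level parameter:

```sql
INSERT INTO abc (
    abc_id, xyz, registered, registered_associate_id,
    updated, updated_associate_id, updatedCount
) VALUES (
    @abc_id, @xyz, @registered, @registered_associate_id,
    @updated, @updated_associate_id, @updatedCount
)
```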

Please always use parameterized values. Doing so avoids two major hazards: formatting problems (particularly dates!) and the possibility of SQL injection. As a rule, a DictionaryStep should never depend on externally supplied values, but even an update from one field to another that does not use parameterization could still expose the system to SQL injection from values latent in the database. There is no excuse for SQL injection caused by lack of parameterization.

The handy helper method GetNextId("tableName") is useful when you want to get a table's next ID value from the sequence table.

public override void CustomPriming()
{
    var utcNow = DateTime.UtcNow;
    var never = DateTime.MinValue;

    var nextIdValue = GetNextId("abc");

    ExecuteSql(@"INSERT INTO {abc} (
        {abc.abc_id},
        {abc.xyz},
        {abc.registered},
        {abc.registered_associate_id},
        {abc.updated},
        {abc.updated_associate_id},
        {abc.updatedCount}
        ) VALUES (
        {@id},
        {@xyz},
        {@registered},
        0,
        {@updated},
        0,
        0
        )",
        new
        {
            id = nextIdValue,
            xyz = "A String",
            registered = utcNow,
            updated = never
        });
}

 

Another useful trick is $nextId, which automatically obtains the next ID value from the sequence table.

public override void CustomPriming()
{
    var utcNow = DateTime.UtcNow;
    var never = DateTime.MinValue;

    ExecuteSql(@"INSERT INTO {abc} (
        {abc.abc_id},
        {abc.xyz},
        {abc.registered},
        {abc.registered_associate_id},
        {abc.updated},
        {abc.updated_associate_id},
        {abc.updatedCount}
        ) VALUES (
        {abc.$nextId},
        {@xyz},
        {@registered},
        0,
        {@updated},
        0,
        0
        )",
        new
        {
            xyz = "A String",
            registered = utcNow,
            updated = never
        });
}

 

The following example demonstrates how to use explicit types as parameters. This is a convenient way to bundle all the parameters in a predefined way that can be used by multiple DictionarySteps.

The explicit type can use public or private fields or properties that map to parameters. Field and property names are matched case-sensitively.

ExplicitParams is a class that contains five fields:

private class ExplicitParams
{
    internal int id;
    string xyz = "A string";
    int zero = 0;
    DateTime utcNow = DateTime.UtcNow;
    DateTime never = DateTime.MinValue;
}

 

All fields in this example are pre-populated except id, which is set via an object initializer, as seen in the following example.

public override void CustomPriming()
{
    var nextIdValue = GetNextId("abc");

    ExecuteSql(@"INSERT INTO {abc} (
        {abc.abc_id},
        {abc.xyz},
        {abc.registered},
        {abc.registered_associate_id},
        {abc.updated},
        {abc.updated_associate_id},
        {abc.updatedCount}
        ) VALUES (
        {@id},
        {@xyz},
        {@utcNow},
        0,
        {@never},
        0,
        0
        )",
        new ExplicitParams
        {
            id = nextIdValue
        });
}

 

Remember that a DictionaryStep assembly should be self-contained, with no externally referenced dependencies. Therefore, do not place explicit types for SQL parameters in external libraries.

One more common scenario is when new columns are introduced and the pre-existing data must be migrated or transformed in some way.

The following code snippet is an example of a new field added to an existing table. Then the CustomPriming method executes an UPDATE statement that transfers the data from the old field into the new field and sets the updated field.

public override void Structure()
{
    ModifyTable("abc")
        .AddString("def", "Description", 100, notNull: false);
}

public override void CustomPriming()
{
    ExecuteSql(@"UPDATE {abc}
        SET {abc.def} = {abc.xyz},
            {abc.updated} = {@now}",
        new { now = DateTime.Now });
}

 

ExecuteSql is a great way to migrate data when the data is known. However, for scenarios where you don’t know the data in advance, or the needed data resides in the database, use ExecuteSelect.

ExecuteSelect Method

When there is data in the database that needs to be obtained during the DictionaryStep pipeline, ExecuteSelect is there to execute a query and return the result in a DataTable. ExecuteSelect accepts two parameters: the SQL statement to execute, and optional parameters. It returns a standard DataTable object that is disconnected from the database.

A useful place for ExecuteSelect is in the ImpFileNames method: first check whether priming data already exists, and only return the name of the IMP file if the table doesn’t contain any data.

public override List<string> ImpFileNames()
{
    var abcData = ExecuteSelect(@"SELECT {abc.abc_id} FROM {abc}");

    if (abcData.Rows.Count == 0)
    {
        return new List<string>
        {
            @"abc.imp"
        };
    }
    else
        return new List<string>();
}

 

The formatting of the SQL statements must use the same structure as the ExecuteSql method. When query criteria parameters are needed, use the second parameter to pass in an explicit or anonymous type with the fields or properties that contain the values.

var sql = @"SELECT {abc.abc_id} FROM {abc} WHERE {abc.abc_id} = {@criteria}";
var abcData = ExecuteSelect(sql, new { criteria = 123 });
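As a sketch of consuming the result (using the standard System.Data.DataTable API; the column name here matches the query above, and abcData is the DataTable returned by ExecuteSelect):

```csharp
foreach (System.Data.DataRow row in abcData.Rows)
{
    // Access a field by column name; convert to the column's CLR type.
    var abcId = Convert.ToInt32(row["abc_id"]);
    // ... use abcId in subsequent priming logic
}
```

Because the DataTable is disconnected, changes made to it are not written back to the database; use ExecuteSql for any resulting updates.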

 

That’s all there is to it!

Tooling

SuperOffice DevNet provides tooling to help create a DictionaryStep from an existing third-party table. Please refer to the DevNet Toolbox for more information.

Conclusion

We have covered the conceptual overview of what SuperOffice Continuous Database is, what DictionarySteps are and how they are used, as well as diving down into all the methods available in the DictionaryStep pipeline and base class.

Evolutionary database design is the way forward. The implementation, developed, tested, and used by SuperOffice in its own environment, has proven to be a very useful and powerful tool for ensuring database integrity, flexibility, and longevity.

We believe that once you begin to leverage it, you will be impressed and assured that SuperOffice Continuous Database is the correct decision and direction for living in an agile world.
