Change of ticket log system

Michel Krohn-Dale 3 Oct 2022

In Service, changes to a ticket are logged. This design has been there since day one. However, the logging system has evolved, and we are now announcing a new change to it. If you have customizations that use the ticket log for any particular purpose, they may need to be updated.

Logging 1.0

The first logging system uses the table ticket_log. This log contains one row for each changed attribute of a ticket, but it only contains the new value. Also, changes to extra fields on the ticket are not logged. There is no clustering of changes into a single "operation"; they are all just sequential changes, and you must look at the datetime to see which ones belong together.

This system is still written to, but it is not visible anywhere in the user interface. The "View ticket log" screen uses the newer log systems.

Logging 2.0

The second logging system was probably introduced around 2003, so it has been there "since forever". This system improves on the first version by logging both the previous and the new value of a property. Furthermore, multiple property changes that belong to the same "operation" are clustered together. 

This logging system uses the ticket_log_action and ticket_log_change tables. A single operation (such as replying to a ticket) is stored as one row in ticket_log_action, and all property changes that happened in that operation are stored as child rows in ticket_log_change. Each row contains information about which property was changed, the old value, and the new value. All properties on a ticket, including extra fields, are included.
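
The parent/child layout above can be sketched with an in-memory database. Note that the column names here are assumptions for illustration, not the exact Service schema:

```python
import sqlite3

# Sketch of the Logging 2.0 layout; column names are simplified guesses.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE ticket_log_action (
    id INTEGER PRIMARY KEY,
    ticket_id INTEGER,
    log_when TEXT
);
CREATE TABLE ticket_log_change (
    id INTEGER PRIMARY KEY,
    ticket_log_action_id INTEGER REFERENCES ticket_log_action(id),
    log_change INTEGER,   -- which property was changed
    from_value TEXT,      -- the previous value
    to_value TEXT         -- the new value
);
""")

# One operation (e.g. replying to a ticket) is a single action row ...
db.execute("INSERT INTO ticket_log_action VALUES (1, 42, '2022-10-03 12:00')")

# ... and every property change in that operation is a child row.
db.executemany(
    "INSERT INTO ticket_log_change VALUES (?, 1, ?, ?, ?)",
    [(1, 33, "0", ""), (2, 7, "open", "closed")],
)

# All changes belonging to action 1:
rows = db.execute(
    "SELECT log_change, from_value, to_value FROM ticket_log_change "
    "WHERE ticket_log_action_id = 1 ORDER BY id"
).fetchall()
```
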

Logging 3.0

The problem with the second logging system is that it eats database rows for breakfast. A rough estimate is that a single ticket has perhaps 5 messages and then perhaps 50-100 rows in ticket_log_change. Some customers using Service handle e.g. 5 million tickets per year. That becomes 500 million ticket_log_change rows per year, and suddenly the database limit of approximately 2 billion rows (a signed 32-bit integer as primary key) becomes a problem. As a consequence, we designed the third log system, which is essentially the same as v2.0, except that all the ticket_log_change data is stored in a JSON structure in the ticket_log_action row (in a column named "details") instead of in separate child rows. This way, we will not overflow the ticket_log_change table. Since the ticket log is shown grouped around an action anyway, clustering the changes into one single data structure makes sense.

We are never converting old ticket_log_change rows into JSON structures. That means a database will contain a combination of version 2.0 and 3.0 ticket logs, and a single ticket could have a mix of the two. When we show the ticket log, we support both formats. Any integration or customization that accesses the ticket log needs to do the same. Here is what you need to keep in mind if you have such a solution:

  1. Any query against the ticket_log_change table may return an incomplete result, since some ticket_log_action entries store their changes in the JSON instead of in child records.
  2. Any criteria against ticket_log_change (such as searching for a specific change) will likewise miss changes stored as JSON.
  3. Any solution that traverses the log to check e.g. how long a ticket has been in a given status will need to load both the JSON entries and the ticket_log_change rows into memory, and then traverse the combined data there.

The data structure of the JSON is exactly the same as in the table:

{
  "changes": [
    {
      "messageId": 2073,
      "extraFieldId": 193,
      "logChange": 33,
      "to": "",
      "from": "0"
    }
  ]
}
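
From code, the "details" column can be parsed like any other JSON document. A minimal Python sketch, using the structure shown above:

```python
import json

# The "details" column of a v3.0 ticket_log_action row holds a JSON
# document with a "changes" array, one entry per property change.
details = json.loads("""{
  "changes": [
    {
      "messageId": 2073,
      "extraFieldId": 193,
      "logChange": 33,
      "to": "",
      "from": "0"
    }
  ]
}""")

# Each entry mirrors a ticket_log_change row: which property changed,
# the old value ("from") and the new value ("to").
for change in details["changes"]:
    print(change["logChange"], repr(change["from"]), "->", repr(change["to"]))
```
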

There are multiple ways of handling this, but one suggestion is to use a Map of JSON structures with the ticket_log_action.id as the key. All the JSON details can then be inserted directly into the Map, and the ticket_log_change rows can be converted into the same JSON structure and inserted as well. Afterwards, you can easily iterate the Map using .first()/.next() and check the complete log sequentially.
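
The merge strategy can be sketched as follows. This uses a Python dict in place of the Map (in a CRMScript customization you would use a Map with .first()/.next()), and the row shapes are illustrative assumptions:

```python
import json

# v3.0 actions: (ticket_log_action.id, details JSON) -- shapes are assumed.
v3_actions = [
    (11, '{"changes": [{"logChange": 33, "from": "0", "to": ""}]}'),
]

# v2.0 child rows: (ticket_log_action_id, log_change, from_value, to_value)
v2_changes = [
    (10, 7, "open", "closed"),
    (10, 9, "", "urgent"),
]

log = {}  # keyed by ticket_log_action.id

# JSON details can be inserted into the map directly ...
for action_id, details in v3_actions:
    log[action_id] = json.loads(details)

# ... while ticket_log_change rows are converted to the same structure.
for action_id, log_change, from_value, to_value in v2_changes:
    entry = log.setdefault(action_id, {"changes": []})
    entry["changes"].append(
        {"logChange": log_change, "from": from_value, "to": to_value}
    )

# The complete log can now be traversed sequentially, in action order:
for action_id in sorted(log):
    for change in log[action_id]["changes"]:
        print(action_id, change["logChange"], change["from"], "->", change["to"])
```
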

Hopefully this gives you an early warning and a chance to adjust before we make Logging 3.0 the default behavior for everyone. If you wish to start experimenting with this system, it is currently controlled by a registry setting (357); setting this to value = 1 will enable it.

Please comment if you have any incompatible solutions out there.