Using request statuses to measure...
If you want some feedback from SuperOffice Customer Service on how efficient you are as an organization (or how slow your customers or partners are), you can start using request statuses. A request status basically identifies which workflow state a request is in, a bit like a project status (planned, in progress, done, etc.). Out of the box you get the default ones: Open, Postponed and Closed. However, under Settings > Request status you can add your own. Let’s say you are a retailer of some kind of product. Then it might make sense to add a few more statuses: “Waiting for customer” and “Waiting for supplier”. These identify when a request is waiting for the customer (for instance to get back to you with an invoice number) or waiting for a supplier (for instance to give you some detailed product info you don’t have). Both of these new statuses should have system status Open.
Now, when you are working with requests, you need to use these statuses. So, when you reply to a customer and ask for that invoice number, make sure you change the status to “Waiting for customer”. This is useful for your everyday work as well: in the list of requests, you will easily spot that this request is pending customer feedback and can be skipped for now. Similarly, whenever you forward a question to your supplier, change the status to “Waiting for supplier”. The good thing is that whenever the customer or supplier emails you back, the request automatically switches back to the default request status: Open. Like this, the status of the request flips back and forth, at any time indicating where the responsibility for the request lies.
The request log
All of these status changes are of course logged in the request log, so we can use it to report on how long the request has been in the various statuses. This is where it starts to become interesting, because it allows us to find the bottleneck of our support. Why do we spend on average 40 hours answering each request? Is it because we are slow, because our customers are slow to respond with information we need, or because our suppliers are slow?
If you look at the request log for a request where you have changed statuses back and forth a bit, you should be able to find the rows which identify a status change. In fact, the log contains every change to a request, which makes it a lot of data to look through. Let me first explain the data structure of the log. Whenever you work with a request, such as adding a message, we create a row in the table ticket_log_action. Furthermore, we create multiple rows in ticket_log_change, one for each request attribute which changed (with from and to values). The field log_change is an enum indicating what we changed. The structure looks like this:
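As a rough sketch of how the two tables relate (the column names here are my assumptions and should be checked against the database dictionary):

```
ticket_log_action             -- one row per action on a request (carries the timestamp)
  id
  ticket_id
  log_when                    -- when the action happened

ticket_log_change             -- one row per changed attribute, several per action
  id
  ticket_log_action_id        -- points back to the action
  ticket_id
  log_change                  -- enum: which attribute changed (35 = status)
  from_value
  to_value
```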
The timestamp is on ticket_log_action, so in order to do our statistics, we need to look at all rows from ticket_log_change left joined with ticket_log_action, filter on the log_change type we are interested in (35 = change status), and see what values we are changing to and from. This illustration shows a simplified view of how a subset of this data will look:
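For example, with made-up timestamps (status ids as examples: 1 = Open, 6 = Waiting for customer, 7 = Waiting for supplier):

```
log_when     log_change  from_value  to_value
02.01 09:15  35          1           6         -- Open -> Waiting for customer
02.01 14:00  35          6           1         -- customer replied, back to Open
03.01 10:30  35          1           7         -- Open -> Waiting for supplier
```

In this example the request spent 4 hours 45 minutes in “Waiting for customer” (09:15 to 14:00) and 20.5 hours in “Open” (14:00 to 10:30 the next day).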
What we want to measure here is the time difference between the first and the second row, counted as time spent in “Waiting for customer”, the time between the second and the third row, counted as time spent in “Open”, and so on. Note also that we need to count the time between the last line in the log and now: if the request was changed to “Open” 40 hours ago, and it is still there, those 40 hours should be counted. Normally, though, you probably want to do this kind of reporting on requests which are closed.
Using a query and ejScript, it is pretty easy to traverse this log and add up the time spent in the different statuses. However, this is rather time consuming, and requires a separate query per request. If you want to report on this for a large number of requests, you will sit around waiting for that report for quite a while. And of course, drilling down or slicing and dicing those numbers will require recalculating each request every time. Given that we mostly want to report on closed requests, which do not change any more, this seems quite inefficient. So my suggestion is to improve this in a couple of ways.
In the old days of 3D programming, we always precalculated sine and cosine tables, because the functions were too slow. A data warehouse is all about precalculating data for reporting purposes. And that is my suggestion in this case as well. We will first calculate the time spent in the various statuses for a single request, and store those values in a couple of extra fields on the request. Since a closed request rarely changes, we can do this asynchronously whenever a request has changed (for instance, every night). Then we can do reporting by simply using these precalculated values. Reporting the average time spent in the different statuses then becomes quite simple.
Let me suggest another small improvement to this solution. Instead of adding extra fields on the Request table, I will use a related extra table (“y_ticket_statistics”). This saves me from having to “drag around” those extra fields when working on a request, having them updated every time we update a request, and so on. It is a small improvement, but if you keep adding fields to the Request entity, it will eventually slow down the system. So my final table design will look something like this:
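A sketch of that extra table (the x_ field names are my assumptions; use whatever names you pick when creating the extra fields):

```
y_ticket_statistics           -- one row per request
  agent_id                    -- always 1 (a fake value, used only for lookup)
  dbi_key                     -- the ticket id, as a string
  x_time_open                 -- seconds spent in Open
  x_time_waiting_customer     -- seconds spent in Waiting for customer
  x_time_waiting_supplier     -- seconds spent in Waiting for supplier
```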
This is essentially a one-to-one relationship. Every row in my ticket table (Request) will have a single related row in y_ticket_statistics after my script has executed.
The calculation script
Let’s look at the ejScript code for this. I am using a Map to summarize the time spent in the various statuses, because it has a very convenient function: increaseValueForKey(). I am using the status ID as my key.
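A tiny illustration of the idea, assuming increaseValueForKey() takes a key and an amount to add to its current value:

```
Map timePerStatus;
timePerStatus.increaseValueForKey("6", 3600); // one hour in "Waiting for customer"
timePerStatus.increaseValueForKey("6", 1800); // another half hour: key "6" now holds the sum
```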
Void updateStatistics(Map p_timePerStatus, Integer p_ticketId)
{
  ExtraTable ticketStatistics = getExtraTable("y_ticket_statistics");
  if (ticketStatistics.loadFromAgentAndKey(1, p_ticketId.toString()) == false)
  {
    // No row for this request yet: key the new one on the ticket id, with a fake agent_id of 1
    ticketStatistics.setValue("agent_id", "1");
    ticketStatistics.setValue("dbi_key", p_ticketId.toString());
  }
  // ...copy the summed seconds per status from p_timePerStatus into the extra fields...
  ticketStatistics.save(); // inserts a new row or updates the existing one
}

Void calculateStatusForRequest(Integer p_ticketId)
{
  Map timePerStatus;
  Bool firstRow = true;
  DateTime from;
  String fromValue;
  String toValue;
  SearchEngine se;
  // Field names below are my best guesses; check them against the database dictionary
  se.addField("ticket_log_action.log_when");
  se.addField("ticket_log_change.from_value");
  se.addField("ticket_log_change.to_value");
  se.addCriteria("ticket_log_change.log_change", "OperatorEquals", "35"); // 35 = change status
  se.addCriteria("ticket_log_change.ticket_id", "OperatorEquals", p_ticketId.toString());
  se.addOrder("ticket_log_action.log_when", true); // oldest change first
  for (se.execute(); !se.eof(); se.next())
  {
    DateTime to = DateTime(se.getField(0));
    fromValue = se.getField(1);
    toValue = se.getField(2);
    if (firstRow == false) // the time since the previous change was spent in this row's from-status
      timePerStatus.increaseValueForKey(fromValue, to.diff(from)); // diff() assumed to give seconds
    from = to;
    firstRow = false;
  }
  if (firstRow == false) // count the time from the last change until now, in the final status
    timePerStatus.increaseValueForKey(toValue, getCurrentDateTime().diff(from));
  updateStatistics(timePerStatus, p_ticketId);
}

// Loop over requests which were changed since yesterday morning
SearchEngine se;
se.addField("ticket.id");
se.addField("ticket.title");
se.addCriteria("ticket.last_changed", "OperatorGte", getCurrentDateTime().moveToDayStart().addDay(-1).toString());
for (se.execute(); !se.eof(); se.next())
{
  Integer ticketId = se.getField(0).toInteger();
  print("Processing request " + ticketId.toString() + ": '" + se.getField(1) + "'\r\n");
  calculateStatusForRequest(ticketId);
}
This script finds all requests which were changed since yesterday, and calculates the time each of them spent in the different statuses. The values for the various statuses (Open = 1, Waiting for customer = 6, Waiting for supplier = 7) are stored in the table y_ticket_statistics. I am using a small trick here: the ExtraTable class has a function called loadFromAgentAndKey. By saving my entries with the ticket id as the dbi_key and a fake agent_id of 1, I can very easily look up these rows again if they exist. This way, if we recalculate for a request, we update the corresponding row in y_ticket_statistics instead of inserting a new one (which would result in duplicate rows). The function save() on ExtraTable will see whether we are working on an existing row, and choose between calling insert or update accordingly.
If you register this script as a scheduled task, you should start getting content in your y_ticket_statistics table.
Using the standard table view of CS you can already get some statistics, such as the average time spent in each status per category, month, or whatever. It is also quite simple to create charts based on this data (see my earlier blog post on how to chart data). This allows us to do some quite interesting analysis, such as this simple chart showing how time spent has evolved over time. A quick look at it tells us that we are doing quite OK, but our suppliers need to shape up!