Agenda view

Today’s release is an example of the continuous improvement we’ve been making to the agileBase platform over the past ten or so years!

In this case, we’ve added a new view to the built-in calendar – a weekly agenda. You’ll see the new button at the right of the calendar, next to ‘day grid’.

agenda view

It’ll show all events in the selected calendars in the current week. To switch weeks, use the arrows at the top right.

Please let us know what you think and if there are any tweaks you’d like to see.

Remember, calendar events can also be synced to and from an external calendar, like Google, Microsoft Office or Apple – if you’re interested in getting that set up, please get in touch. There’s a small charge per user to cover the necessary third-party costs.


PostgreSQL performance improvements

As we mentioned last week we’ve upgraded some of the server software the agileBase platform uses. The most significant upgrade is a move from PostgreSQL version 9.5 to version 10. That’s the database that stores all of agileBase’s data.

Now that it’s been running for more than a week, we can see whether there are any differences in performance. It’s common to see benchmarks which are designed to replicate real-world scenarios, but not so common to see actual speed changes for live production systems.

So here’s a simple before-and-after comparison for our own system. The method we used was to look at the top 30 queries from before the upgrade which were taking the most time in total over a 7 day period. Those aren’t necessarily the slowest queries: we’re measuring the total cumulative time spent running a particular query over those 7 days, so many of them will be the slowest queries, but some will be faster ones which are just used very frequently. We then compared the average times with those for the same queries in a 7 day period after the upgrade.
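In other words, queries were ranked by calls × mean time, not by mean time alone. A tiny sketch of that ranking in Python (the query names come from the table further down, but the call counts are invented purely for illustration – the real figures came from JavaMelody):

```python
# Rank queries by total cumulative time (calls x mean ms), not by mean time alone.
# Call counts here are made up for illustration.
stats = [
    # (query description, number of calls, mean time per call in ms)
    ("retrieve comments for a field", 500_000, 62),                      # fast but very frequent
    ("refresh the materialized view of confirmed invoices", 2_000, 13_789),  # slow but rare
    ("retrieve the newest customer leads", 50_000, 382),
]

# Sort by cumulative time descending and keep the top 30
by_total = sorted(stats, key=lambda q: q[1] * q[2], reverse=True)
top_30 = by_total[:30]
```

Note how a fast query called half a million times can outrank a 13-second query that only runs a couple of thousand times.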

The takeaway is that, for us, SQL querying has become significantly faster: the top 30 most time-consuming queries are faster by an average of 35%. Of course, this can’t be generalised to anyone other than us, and even then the nature of a live system means multiple factors could be at play – see below!

To find these stats we used the JavaMelody monitoring package.


Please note that this analysis is only meant to inform us internally; it’s not necessarily rigorous enough to draw general conclusions from. For example, the system may be under different usage patterns from one week to the next (different users going on holiday etc.). It’s also possible the data in the system has changed – for example, if someone’s done a large data import. No doubt there are many other possible differences. One advantage of an artificial benchmark is that it’s more repeatable!

Another thought – if we wanted to spend more time on it, we could also compare the top 30 most time-consuming queries on the new version with those on the old, which may be a different set of queries. We’ve done some spot checks to confirm that many of the top 30 queries do overlap though.

With those things in mind, here are some general stats and the results.

Overview stats

Just to give some background, some general statistics from the first 7 day period are:

  • there were 2,769 unique SQL queries (counting the same query with different parameters as one query, i.e. counting the ‘prepared statement’ versions)
  • there were 7,052,363 calls to those queries in total
  • the average time to process was just 5ms (five thousandths of a second) for 98% of those calls
  • the slowest 1% of calls took on average 302ms each

The stats for the second 7 day period were very similar, apart from one big difference: there were about 3 million fewer calls in total. That’s partly because some external apps were previously calling the system APIs in bursts every 5 minutes, 24 hours a day (reduced a couple of days before the upgrade), and partly because the need for another type of internal query was reduced by a cache implementation. It’s possible this could have affected the results, but we think a material change is unlikely: the server isn’t CPU-core-bound (there are plenty of unused cores most of the time), and the other average time stats remained the same. Furthermore, some of the query times were manually spot-checked before the upgrade at times of low activity and remained similar. It’s impossible to know for certain though.


Query no.  Old PG9.5 (ms)  New PG10 (ms)  Speedup*  Query description
        1              62             31       1.0  retrieve comments for a field
        2             898            768       0.2  retrieve a filtered list of sales jobs
        3             382            371       0.0  retrieve the newest customer leads
        4             230            193       0.2  retrieve a filtered list of trade jobs
        5             255            154       0.7  retrieve contact details for a lead
        6             794            555       0.4  search the list of allowed products for an order
        7             324            232       0.4  find duplicate orders for a customer on the same day
        8              53             40       0.3  retrieve sales contacts for a company
        9             223            234       0.0  retrieve the account info for a lead
       10            2234           1694       0.3  total up the gluten values in all recipes
       11              75             48       0.6  retrieve the list of lead activities requiring an email to be sent
       12             204            202       0.0  retrieve a filtered list of active trade jobs
       13             614            545       0.1  retrieve a filtered list of current sales jobs
       14             605            414       0.5  retrieve a filtered list of all leads
       15              56             21       1.7  search the products list
       16             673            557       0.2  retrieve the list of order lines to lock
       17              91             71       0.3  calculate finance costs for a job
       18             127            127       0.0  calculate the message for a product sample
       19             817            667       0.2  retrieve a filtered list of all jobs
       20             119            120       0.0  retrieve a filtered list of all calls
       21              19             15       0.3  retrieve customer account info for a sale
       22             208            347      -0.4  total up the quantities in a recipe
       23             203            172       0.2  create activities for expired memberships
       24             202            173       0.2  create activities for 30 day renewals
       25              24             20       0.2  retrieve a filtered list of accounts
       26            1100            594       0.9  retrieve the newest wholesale customer orders
       27             730            298       1.4  get the audit trail change history for a record
       28           13789          11122       0.2  refresh the materialized view of confirmed invoices
       29             320            267       0.2  retrieve a particular user’s list of filtered leads
       30            1361            959       0.4  retrieve a list of invoices to auto-generate

*The speedup column is calculated as the ratio of the old time over the new, minus 1. Hence a speedup of 0 means the query performs exactly the same after the upgrade, 1 means it’s twice as fast and 2 means it’s three times as fast. A negative figure means it’s slower. As you can see, one or two queries are slower, but the vast majority are faster, often significantly so.
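As a quick sanity check, the speedup figure can be recomputed directly from the two timing columns, e.g. in Python:

```python
def speedup(old_ms, new_ms):
    """Speedup factor as defined above: old time over new time, minus 1."""
    return old_ms / new_ms - 1

# A few rows from the table above:
assert round(speedup(62, 31), 1) == 1.0     # query 1: twice as fast
assert round(speedup(56, 21), 1) == 1.7     # query 15: the biggest winner
assert round(speedup(208, 347), 1) == -0.4  # query 22: the one that got slower
```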

The query descriptions are only vague, massively simplified explanations of each query. Many of these are the slower queries and are typically very complex, with lots of joins to multiple layers of views, aggregate calculations etc. A couple are simple but just called very frequently. Some work on many millions of records, some on a few thousand – there’s a good mixture!

Charting the actual average query times before and after didn’t work very well because of the massive difference in speed between some queries. A log scale might have helped, but that would make before-and-after comparisons less readable.

Instead here’s a chart of the speedup factors (x-axis, see above for definition) for each query after the upgrade. The y-axis is just the query number from the table above.


The average query speedup is 0.35, i.e. queries are 35% faster after the upgrade. That’s a pretty significant speedup, all without a single hardware change. Thanks to the PostgreSQL project team for continuing to work hard on performance enhancements as well as many other areas – it’s a vibrant community. These improvements are certainly appreciated.

Configuration notes

You may be wondering about configuration – as we say, no hardware was changed. This machine has 64GB RAM and 16 CPU cores. The only difference in PostgreSQL configuration was the addition of three new parameters for the newer software to allow the system to take advantage of multiple cores while processing a single query, i.e.

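The parameter values themselves aren’t reproduced here, but for context, the parallel-query settings introduced in this era of PostgreSQL look roughly like the following – these are illustrative values and our best guess at the parameters involved, not our production configuration:

```
# postgresql.conf -- illustrative values only, tune to your own core count
max_worker_processes = 16            # total background workers the server may start
max_parallel_workers = 8             # of those, how many may serve parallel queries (new in PG10)
max_parallel_workers_per_gather = 4  # workers a single query's Gather node may use (new in PG9.6)
```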

It’s possible that future configuration changes, e.g. allowing greater parallelism, could improve things even further.


Upgrades complete

This weekend we completed a second set of upgrades to our systems, which should result in faster performance and the capacity to cope with growing customer data. A couple of weeks ago our hardware was upgraded with more processor cores, memory and storage space, and now the software stack has had a full refresh, most importantly a move to the latest version of the PostgreSQL database. That brings with it a whole host of speedups, so we look forward to performing some before and after comparisons.

But never mind speeding up your data processing, we’ve also produced some gorgeous new summer login screens.

Enjoy a colourful summer!

summer login

Coloured calculations

A quick update on a feature upgrade this week:

A while ago we introduced a traffic-lights feature to colour fields in certain views based on their content, so, for example, you could colour ‘accepted’ green and ‘rejected’ red.

We’ve now built on this to let you do the same with calculations in a view, as well as with standard fields. Here’s an example showing a pass/fail calculation:


Thanks to Lewis Pies for suggesting this update.

Setting up

Setting this up is really simple – when editing a calculation, just tick ‘colour words’. When the calculation results in any of the words listed, they’ll be coloured as shown. If you’d like to add your own words, please contact us.

colour words setup

Hosting speed increases

The nice folks at Linode, our hosting provider, have decided to give us (as well as their other customers) some additional capacity on each of our servers!

Our largest server will get an additional 16GB of RAM (memory) and an additional 4 CPU cores. That’s a significant update that should make the system speedier for all.

We’re keen to take advantage of this, but it will need a short period of downtime to perform the upgrade, possibly a couple of hours. We’re provisionally scheduling that for this Sunday (27th May) at midnight. If there are any issues at all with that though, please email and we’ll delay until a time suitable for all.


Introducing the chaser

We’ve been talking a lot about agileBase’s ‘chaser’ feature recently, giving examples of what it can do, but we’re aware not everyone may know precisely what it is!

It was developed for a particular purpose, but has turned into one of the most promising features we’ve released over the past few years. It has the potential to help people be more effective in a wide range of jobs, wherever they regularly need to ‘chase’ others for information.

Customers who’ve been trying it out say it makes staff more effective at their jobs by ensuring documents and information required from third parties (such as suppliers) are obtained when needed, without lots of manual checking, emailing and phoning.

In a nutshell, the system comprises three parts:

1) checking when something’s due and emailing the person who needs to supply it

2) allowing the recipient to easily provide the data by following a link in the email to an online form. Documents can be uploaded and other information filled in

3) if no response is received, re-sending a number of times, finally escalating to someone else (usually someone internal)
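The three parts above can be sketched in a few lines of hypothetical Python. The ‘count’ and ‘last chased’ fields mirror the fields described in the technical setup below, but everything else here – names, thresholds, intervals – is invented purely for illustration:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Callable, Optional

MAX_CHASES = 3                       # reminders before escalating (illustrative)
CHASE_INTERVAL = timedelta(days=7)   # minimum gap between reminders (illustrative)

@dataclass
class ChaseRecord:
    to: str                              # recipient to chase, e.g. a supplier
    escalate_to: str                     # fallback, usually someone internal
    count: int = 0                       # the 'count' field stored on the table
    last_chased: Optional[date] = None   # the 'last chased' field on the table
    responded: bool = False              # set when the online form is filled in

def run_chaser(record: ChaseRecord, today: date,
               send_email: Callable[[str], None]) -> None:
    """One pass over a single record: part 1 (check and email) and
    part 3 (re-send, then escalate). Part 2, the response form, simply
    sets record.responded when the recipient supplies the information."""
    if record.responded:
        return                           # information received, nothing to chase
    if record.last_chased and today - record.last_chased < CHASE_INTERVAL:
        return                           # chased recently, wait a bit longer
    if record.count < MAX_CHASES:
        send_email(record.to)            # remind the original recipient
        record.count += 1
        record.last_chased = today
    else:
        send_email(record.escalate_to)   # escalate after repeated silence
```

Run weekly against an unresponsive supplier, this sends three reminders and then escalates internally.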



The first uses have been in our agile:SA Supplier Approval software. We have a few chasers, for example

  • non-conformances: whenever a non-conformance is raised against a supplier, whether as a result of an audit or ad hoc, the supplier will be chased if they don’t respond in a timely manner. Likewise, your staff can be chased if an internal non-conformance is not resolved
  • ingredient risk assessment: a member of staff must risk assess new ingredients
  • ingredient specification: a supplier will be chased for any missing specification documents
  • supplier audit questionnaires (SAQs): suppliers must respond to these and will be chased if they don’t

There are many more but hopefully you get the idea. The same principles can be applied whenever you need a third party to supply you with information or documents of a known type.

Technical setup

As an agileBase administrator, if you want to set up your own chaser, here’s how to do so. The process has been made as simple as possible so setup should be a breeze.

1) Create a view with a list of people (email addresses) to chase. In the view’s manage tab, under Send, choose ‘send email chasers’

2) The screen will show a list of fields you need, e.g. ‘to’ which is the email address of the recipient, ‘subject’ etc. Create all of these as calculations

3) The list will also include a couple of fields you’ll need to add to the table: count (a number) and last chased (a date). These will be used internally by the system

That will set up the system to send emails. Now make sure that only things you want to chase are chased, by adding any relevant filters to the view, e.g. to show anything out of date.

That will be enough to get emails sending out. What about receiving the information back?

Simply add any fields you want the recipient to supply to the view, from the parent table. The system will automatically create a form containing these fields (excluding any of the chaser fields above such as ‘to’ and ‘subject’). When the recipient receives an email, it will have a link in it that takes them to this form.

If you want the content that the chaser sends out to be visible to users, add the chaser view into a form as a cross referenced field. It will show information such as the email to be sent, how many times the record’s been chased, when the last time was etc. If the record doesn’t need to be chased it will instead show the reasons why (the filters from the view).

Finally tick ‘enable this chaser’ at the bottom of the view’s Send section, in the manage tab. That’s it!

Please let us know what uses you find for the chaser, or of course if you come across any issues or can think of any further enhancements.