For those customers who make heavy use of the API, we’ve just released some updates to the enhanced audit trail feature which will make keeping track of API usage a lot easier.
These can help with things like:
- Seeing trends in the volume of usage over time
- Identifying any performance bottlenecks
- Checking that any caching layer you have is operating correctly
In short, they help you keep things running smoothly, troubleshoot easily and control costs when you have complex sets of interacting software systems, many API requests per second and/or large volumes of data being transferred.
First, here’s a quick recap of the existing benefits of enabling Enhanced Audit Logging (£15/month).
- Extended retention periods (upgradable to any length of time required)
- The ability to create views and charts from the log data, so you can report on it in the same way as any other data in agileBase
For API calls, the data logged includes:
- Date and time of the request
- IP address of the external client making the request
- The API view the request was made against
- Any data filters and row limit applied to the request
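To make the fields above concrete, here's a hypothetical sketch of what one logged API request might look like. The field names and values are purely illustrative assumptions, not agileBase's actual log schema.

```python
# Hypothetical audit log entry for one API call.
# Field names and values are illustrative only, not agileBase's real schema.
log_entry = {
    "timestamp": "2023-05-04T09:15:27Z",  # date and time of the request
    "client_ip": "203.0.113.42",          # IP address of the external client
    "api_view": "sales_orders",           # the API view the request was made against
    "filters": {"status": "open"},        # any data filters applied to the request
    "row_limit": 1000,                    # row limit applied to the request
}
```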
Additionally, the following new data fields will now be logged:
- Count – if a number of similar requests are made in quick succession to the same API view, they will be merged into one log line. The count field will then show the number of requests this line refers to
- Processing time – the total time in ms (thousandths of a second) taken to serve the request. If count is greater than one, this will be the total time for all the requests the log line pertains to
- Of which Q time – to achieve a fair level of load balancing, agileBase operates a separate API request queue for each customer. If a request arrives and the system is still busy processing a previous request, the new one gets held in a queue. This field shows how much of the total processing time (ms) was spent waiting for previous requests to complete
- Rows – the total number of database rows returned by the request(s)
- Bytes – the total size of the response(s), i.e. the amount of data returned
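The merging behaviour described above can be sketched as follows: similar requests made in quick succession collapse into one log line whose numeric fields are totals. This is an illustrative sketch under assumed names and structure, not agileBase's actual implementation.

```python
# Illustrative sketch: merging similar, rapid requests to the same API view
# into a single log line with aggregate fields. Names are assumptions.

def merge_requests(requests):
    """Collapse a batch of similar requests into one log line of totals."""
    return {
        "count": len(requests),  # number of requests this line refers to
        "processing_ms": sum(r["processing_ms"] for r in requests),  # total serve time
        "queue_ms": sum(r["queue_ms"] for r in requests),  # time spent queued behind earlier requests
        "rows": sum(r["rows"] for r in requests),    # total database rows returned
        "bytes": sum(r["bytes"] for r in requests),  # total response size
    }

# Three hypothetical requests arriving in quick succession:
requests = [
    {"processing_ms": 120, "queue_ms": 40, "rows": 250, "bytes": 18_000},
    {"processing_ms": 95,  "queue_ms": 0,  "rows": 250, "bytes": 18_000},
    {"processing_ms": 110, "queue_ms": 25, "rows": 250, "bytes": 18_000},
]
line = merge_requests(requests)
# line["count"] == 3 and line["processing_ms"] == 325
```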
These can all be used in charts and reporting. Please note that if you want to find the average processing time, queuing time, rows or bytes per request for a log line, you need to divide by the count.
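That divide-by-count rule looks like this in practice. The log line values here are illustrative sample figures, not real data:

```python
# Per-request averages for a merged log line: divide each total by count.
# The values below are illustrative sample data only.
line = {"count": 3, "processing_ms": 325, "queue_ms": 65, "rows": 750, "bytes": 54_000}

averages = {
    field: line[field] / line["count"]
    for field in ("processing_ms", "queue_ms", "rows", "bytes")
}
# e.g. averages["rows"] == 250.0 rows per request
```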
Hopefully this data will prove useful to customers, particularly those interacting with many third-party systems or sending large volumes of data to external systems via the API, perhaps for transaction processing.
Please let us know if you’d like to see any further metrics.