Application controls refer to the transactions and data relating to each computer-based application system and are, therefore, specific to each such application. The objectives of application controls, which may be manual or programmed, are to ensure the completeness and accuracy of the records and the validity of the entries made therein.
Application controls are controls over the input, processing, and output functions. From the 30,000-foot view, they include things like:
- Ensure the input data is complete, accurate and valid
- Ensure the internal processing produces the expected results
- Ensure the processing accomplishes the desired tasks
- Ensure output reports are protected from disclosure
From the close inspection view, they include such things as:
- Edit tests
- Control totals/batch balancing
- Reconciliation of accounts
- Exception handling
Both automated controls and manual procedures should be used to ensure proper coverage. These controls help ensure data accuracy, completeness, validity, verifiability, and consistency and thus ensure the confidentiality, integrity, and availability of the application and its associated data.
So what is an application? As we’ve said before, it is a computer-based system that processes data for a specific business purpose. Let’s give a few examples of application systems:
- General Ledger
- Fixed Assets
- Inventory Control
- Manufacturing Resource Planning (MRP)
- Distribution Requirements Planning (DRP) and no that’s not Disaster Recovery Plan
- Human Resources
- And, everyone’s favorite – Payroll…
Business applications have the same three basic risks as any other system that handles data: confidentiality, integrity, and availability.
Confidentiality is at risk from a data breach or a release of data in violation of legal regulations such as the Federal Privacy Act, FERPA, or HIPAA.
Integrity means the data can be relied upon for accuracy, and availability means the data is available when it is needed.
When we talk about input controls for applications we must look at:
- Input Authorization
- Batch Controls and Balancing
- Error Reporting and Handling
- Batch Integrity in Online or Database systems
Authorization of input is just that: the data has been properly authorized to be input into the application system. There are a number of different things to look for here, primarily things like signatures on batch forms, online access controls, unique passwords, workstation identification, and source documents. In batch controls and balancing we might look at total monetary amount, total items, total documents, and hash totals. Specifically with batch balancing, when some of the input might be entered manually, we want to make sure the manual totals agree with the computer totals. In error reporting and handling, we want to look for controls that determine what happens to a batch that has an error: do we reject only the transaction, reject the whole batch, hold the batch in suspense pending correction, or just process the batch and flag the error? Some of the input control techniques include a transaction log, reconciliation of data, documentation, error correction procedures, anticipation, a transmittal log, and cancellation of source documents.
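The batch balancing described above can be sketched in a few lines. This is a minimal illustration, not any real system's logic: the field names, sample batch, and control-slip values are all made up for the example.

```python
# Batch balancing sketch: compare computer-generated control totals against
# the totals recorded manually on the batch control slip.

def batch_totals(transactions):
    """Compute the four classic batch control totals."""
    return {
        "total_amount": sum(t["amount"] for t in transactions),   # total monetary amount
        "total_items": sum(t["items"] for t in transactions),     # total items
        "total_documents": len(transactions),                     # total documents
        "hash_total": sum(t["account"] for t in transactions),    # hash total: a sum with no
                                                                  # business meaning (account numbers)
    }

def batch_balances(transactions, control_slip):
    """Return True only if every computed total matches the manual slip."""
    return batch_totals(transactions) == control_slip

# Illustrative batch and manually prepared control slip
batch = [
    {"account": 1001, "amount": 250.00, "items": 3},
    {"account": 1002, "amount": 125.50, "items": 1},
]
slip = {"total_amount": 375.50, "total_items": 4,
        "total_documents": 2, "hash_total": 2003}
print(batch_balances(batch, slip))  # True: manual and computer totals agree
```

If any document were dropped or keyed twice, at least one of the four totals would disagree with the slip and the batch would fall into the error-handling path discussed above.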
In processing controls we look at:
- Data Validation and Editing Procedures
- Processing Controls
- Data File Control Procedures
For data validation, think SQL injection, and now you have a very clear picture of just one of the many data validation edits. Data validation is meant to identify data errors, incomplete or missing data, and inconsistencies among related data items. Editing procedures are preventive controls designed to keep bad data out of your database. ISACA lists several data validation edits and controls, among them:
- Sequence check
- Limit check
- Range check
- Validity check
- Reasonableness check
- Table lookups
- Existence check
- Key verification
- Check digit
- Completeness check
- Duplicate check
- Logical Relationship check
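A few of the edits above are easy to show in code. This is a hedged sketch: the department codes and field limits are invented for illustration, and the check-digit edit uses the well-known Luhn (mod 10) algorithm as one common example of a check-digit scheme.

```python
# Sketches of a handful of the data validation edits listed above.

VALID_DEPT_CODES = {"HR", "FIN", "OPS"}   # table lookup / validity check data

def limit_check(value, maximum):
    """Limit check: value must not exceed a predefined maximum."""
    return value <= maximum

def range_check(value, low, high):
    """Range check: value must fall inside a predefined band."""
    return low <= value <= high

def completeness_check(record, required_fields):
    """Completeness check: every required field must be present and non-blank."""
    return all(record.get(f) not in (None, "") for f in required_fields)

def check_digit_ok(number: str) -> bool:
    """Check digit edit using the Luhn (mod 10) algorithm."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(limit_check(100, 500))            # True
print("HR" in VALID_DEPT_CODES)         # True (table lookup)
print(check_digit_ok("79927398713"))    # True: standard Luhn test number
```

A transposed or mistyped digit in the account number would fail the check-digit edit at input time, which is exactly the point of a preventive control: the bad data never reaches the database.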
Processing controls are there to ensure that the incoming data is processed according to “Hoyle.” No, I’m not being facetious, as Hoyle established rules for playing cards and other games, so too, do business process owners establish rules for how particular data is to be processed through the application. Some of these processing controls include run-to-run totals; limit checks; and reasonableness verification of calculated amounts.
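A run-to-run total, one of the processing controls just mentioned, can be sketched as follows. The two "runs" and the toy edit rule are illustrative stand-ins for whatever jobs the application actually chains together.

```python
# Run-to-run total sketch: the control total emitted by one processing run is
# carried forward and verified by the next, so records cannot be silently
# lost or added between runs.

def run_edit(transactions):
    """Run 1: edit/validate, emitting accepted records plus a control total."""
    accepted = [t for t in transactions if t > 0]   # toy edit rule: positives only
    return accepted, sum(accepted)

def run_post(records, expected_total):
    """Run 2: verify the carried-forward total before posting."""
    if sum(records) != expected_total:
        raise ValueError("run-to-run total mismatch: records lost or altered")
    return sum(records)

records, total = run_edit([100, -5, 250, 75])
posted = run_post(records, total)
print(posted)  # 425
```

If a record disappeared between the edit run and the posting run, `run_post` would raise instead of posting, surfacing the break immediately.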
In data file control procedures we look at such questions as, “Are you sure the master file was updated correctly?” To this, you would respond, “We made a before-image copy of the database, ran the update, and then made an after-image copy. We compared the two images and, yes, the update performed as expected.” You will also run into the following other types of data file controls:
- Parity checking
- Transaction logs
- Version Usage
- File updating and maintenance authorization
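The before-image/after-image comparison described above can be sketched like this. The master-file records and transactions are toy in-memory data, not any real file layout.

```python
# Before/after-image sketch: snapshot the master file before and after the
# update, then confirm every difference between the two images is explained
# by an applied transaction.

import copy

def apply_update(master, txns):
    before = copy.deepcopy(master)          # before-image copy
    for acct, delta in txns:
        master[acct] = master.get(acct, 0) + delta
    after = copy.deepcopy(master)           # after-image copy
    return before, after

def update_verified(before, after, txns):
    """Re-derive the expected after-image from the before-image and transactions."""
    expected = dict(before)
    for acct, delta in txns:
        expected[acct] = expected.get(acct, 0) + delta
    return expected == after

master = {"A-100": 1000, "A-200": 500}
txns = [("A-100", -250), ("A-300", 75)]
before, after = apply_update(master, txns)
print(update_verified(before, after, txns))  # True: the update performed as expected
```

Any unexplained change between the two images, say a balance altered with no corresponding transaction, would make the comparison fail and flag the update for investigation.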
In output controls, the biggest concern is: did the information distributed get to the appropriate recipient? So as an auditor you will need to ask: Where was the sensitive report printed? Was distribution controlled? How long are sensitive reports retained, and are they stored in a protected environment? And by that I mean, are they protected from disclosure? (That’s another name for confidentiality.)
The online world of transactions and databases presents another and slightly different challenge for applications. Since a database consists of many interrelated tables, an update touches not just a single table but several. Think commit and rollback, think failure midstream, think “I need to recover.” So how do we do that? We first write the transaction to a transaction log file and then start updating all the different tables. Once all the tables are updated successfully (atomicity), we set a flag in the transaction log to say that the transaction has been successfully applied. The question becomes: how long do we keep the transaction log file, and where should it be backed up? These questions can best be answered by looking at the business impact analysis for the business process, finding the supporting applications, and then finding the recovery point objective (RPO) and recovery time objective (RTO). For example, if the RPO shows that the business process owner has indicated zero tolerance for data loss, you can be assured that transaction logging will be taking place and that the log will most probably be mirrored to a hot site. As an IT auditor, it is your responsibility to determine whether the application controls in place satisfy the RPO and RTO requirements in the business impact analysis.
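The log-then-update-then-flag sequence just described can be sketched as follows. Everything here is a toy in-memory stand-in for real database machinery; the table names and transaction format are invented for the example.

```python
# Transaction-log sketch: write the transaction to the log first, update every
# table, then flag the log entry as committed. On recovery, any entry that
# never reached the committed flag is rolled back.

log = []                                    # the transaction log file
tables = {"orders": {}, "inventory": {}}    # the interrelated tables

def apply_transaction(txn_id, updates):
    entry = {"id": txn_id, "updates": updates, "committed": False}
    log.append(entry)                       # 1. write to the log first
    for table, key, value in updates:       # 2. update all the tables
        tables[table][key] = value
    entry["committed"] = True               # 3. all tables updated: flag as committed

def recover():
    """Undo any transaction that never reached the committed flag."""
    for entry in log:
        if not entry["committed"]:
            for table, key, _ in entry["updates"]:
                tables[table].pop(key, None)    # toy rollback: remove partial writes

apply_transaction("T1", [("orders", "O-1", "2 widgets"),
                         ("inventory", "widgets", 98)])
print(log[0]["committed"])  # True: every table was updated, so T1 is committed
```

If the process died between steps 2 and 3, the entry would still read `committed: False`, and `recover()` would back out the partial writes, which is exactly the midstream-failure case the RPO discussion above is worried about.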
A few other areas of concern for application control involve how changes to data are normally controlled. Normally they are made through the application; however, this needs to be checked. Application access control mechanisms and built-in application controls normally prevent unauthorized access to data, but these controls can be circumvented by direct access to the data. For this reason, direct access to data (specifically “write,” “change,” and/or “delete” access) should be restricted and monitored.
So how do you test an application? There are a variety of techniques, and my favorite is to write my own “test data” and then run it through the “production” system. But to accomplish this, you will need to ensure the existence of an ITF (Integrated Test Facility). And let’s not forget the SDLC in our discussion. When should you begin testing an application? As an auditor, you will want to make sure that testing of the application begins as soon as individual units are finished; you can call that pre-integration testing.
Applications are here to stay, some large (SAP, PeopleSoft) and some small (QuickBooks) but there will always be applications and there should always be auditors to check that the controls are in place to ensure CIA.
There are five different “Online Auditing Techniques” for online applications. They are:
- Systems Control Audit Review File and Embedded Audit Modules (SCARF/EAM)
- Snapshots
- Audit Hooks
- Integrated Test Facility (ITF)
- Continuous and Intermittent Simulation (CIS)
You would use SCARF/EAM when the complexity is very high and regular processing cannot be interrupted. An ITF would be used when the complexity is high and it is not beneficial to use test data. Snapshots give you an audit trail, like taking a lot of snapshots and placing them end to end to get a movie. CIS is for medium complexity, when you have transactions meeting certain criteria that need to be examined, and audit hooks are for low-complexity tasks when you only need to look at selected transactions or processes.