How data caching works

Data caching prevents unnecessary rereads of I/O Device data. Unnecessary reads can occur when more than one client asks the I/O Server to read the same data from a PLC or similar I/O Device within a short period of time (typically 300 ms).

Normally, on request from a client, the I/O Server reads status data from the I/O Device and passes it back to the requesting client.

If the server receives requests from other clients for the same data before the original read has been returned to the first client, it optimizes the reads by sending the result of that single read back to every requesting client. (The Blocked Reads count on the General page shows how often this occurs.)
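This blocked-read behaviour can be pictured as request coalescing: while one read of a given address block is in flight, later requests for the same block wait on it instead of triggering another device read. The sketch below illustrates the idea only; the class, method names, and block identifiers are assumptions for illustration, not CitectSCADA APIs.

```python
import threading

class BlockedReadServer:
    """Illustrative sketch: coalesce concurrent reads of the same address block."""

    def __init__(self, read_device):
        self._read_device = read_device   # slow call to the PLC / I/O Device
        self._inflight = {}               # block id -> Event guarding a pending read
        self._results = {}                # block id -> data returned by the device
        self._lock = threading.Lock()

    def read(self, block_id):
        with self._lock:
            pending = self._inflight.get(block_id)
            if pending is None:
                # First client to ask: start a real device read.
                pending = threading.Event()
                self._inflight[block_id] = pending
                owner = True
            else:
                owner = False

        if owner:
            data = self._read_device(block_id)   # one physical read...
            with self._lock:
                self._results[block_id] = data
                del self._inflight[block_id]
            pending.set()                        # ...shared with every waiting client
            return data

        pending.wait()                           # "blocked read": reuse the owner's result
        return self._results[block_id]
```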

However, if a client requests the same data just after the server has returned it to another client, the server rereads the device unnecessarily.

Setting the data cache time to 300 ms (or similar) prevents these repeated identical reads within the cache window. If other clients request the same data from the server within 300 ms of it being sent to an earlier client, the server responds immediately with the cached data instead of reading the device again.
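In effect the server keeps a short-lived, time-stamped copy of each read. A minimal sketch of such a time-based cache follows; the names and the 0.3 s default are illustrative only (in a real project the cache time is configured per I/O Device, not in code like this).

```python
import time

class ReadCache:
    """Illustrative sketch: serve repeated reads from memory within a cache window."""

    def __init__(self, read_device, cache_time_s=0.3):   # ~300 ms, as in the text
        self._read_device = read_device
        self._cache_time = cache_time_s
        self._cache = {}              # block id -> (timestamp, data)

    def read(self, block_id):
        now = time.monotonic()
        entry = self._cache.get(block_id)
        if entry is not None and now - entry[0] < self._cache_time:
            return entry[1]                          # still fresh: no device read
        data = self._read_device(block_id)           # stale or missing: reread the device
        self._cache[block_id] = (time.monotonic(), data)
        return data
```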

Note: The multiple clients do not have to be separate CitectSCADA computers on a network. They may be the alarms and trends clients on the same computer, so this optimization benefits even a single-node system.

CitectSCADA also uses read-ahead caching. When cached data is getting old (its age approaches the cache time), the I/O Server re-requests it from the I/O Device, which optimizes read speed for data that is frequently re-used. To give other read requests higher priority, the I/O Server issues these read-ahead requests only while the communication channel to the I/O Device is idle.
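Read-ahead can be pictured as an extension of the ReadCache sketch above: shortly before an entry expires, and only when the channel is free, the server refreshes it so the next client read is answered from a fresh cache. The 80% ageing threshold and the channel_idle() callback below are assumptions for illustration, not documented CitectSCADA behaviour.

```python
import time

def refresh_if_aging(self, block_id, channel_idle, aging_fraction=0.8):
    """Re-read a cached block shortly before it expires, but only on an idle channel."""
    entry = self._cache.get(block_id)
    if entry is None or not channel_idle():
        return                                      # nothing cached, or other reads have priority
    age = time.monotonic() - entry[0]
    if age >= self._cache_time * aging_fraction:
        data = self._read_device(block_id)          # low-priority read-ahead
        self._cache[block_id] = (time.monotonic(), data)

ReadCache.refresh_if_aging = refresh_if_aging       # attach to the ReadCache sketch above
```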