Applies To:
  • CitectSCADA 3.xx, 4.xx, 5.xx

Summary:
How do I get my Citect system to perform better? 

Solution:
This is an attempt to compile a comprehensive list of information about getting the best possible data transfer performance out of your Citect system.

Citect performance means different things to different people. Some consider good performance to be how fast Citect displays PLC data on a graphics page. To others it may mean how many trends Citect can sample at a certain speed, or how many alarms it can process. Some might consider reliability the ultimate performance indicator. To keep the scope clear, this article is confined to speed-related issues.

Citect as a system is almost entirely client-driven. By this I mean that the users of data in a Citect system determine how fast data should be delivered and what data is delivered. A client displaying a graphics page requires the data needed to animate the page at a rate defined by the scan time. It requests only this data from the associated I/O server(s), which endeavour to produce it. An alarm server or trend server is also a user of Citect data; these machines have a relatively constant requirement so that they can continue with their respective tasks. As it happens, some alarm information is pushed out to clients from the server, but this is the only instance of this kind of data transaction.

So how fast will my Citect run? How much load will it introduce onto the supporting hardware? Well... it depends. It depends on how you are making use of the data the system can provide. Consider a whole system, from client to PLC.

The Client

Typically a data client is one of a number of types. It can be a plain display client, which only requires data for displaying graphics pages and occasionally alarm or trend pages. It can be a Citect server, i.e. an alarm or trend server, which has a constant demand for certain data points. It can also be a combination of these, say an alarm server which is also used as a display, or a machine that is both trend server and alarm server. Each of these configurations has unique data requirements, and the transient load introduced by any display makes the total even harder to estimate.

Display clients are fairly basic. They simply ask for the (usually small) amount of data needed to continually draw the screen in front of the operator. The speed with which this happens is usually defined by the [Page]ScanTime parameter (default 250 ms), but if the I/O server cannot deliver the data within this time then its turnaround time becomes the determining factor. The page refresh routine in the client submits a list of the points it needs to the request manager, which then sends the request to the I/O server. You can influence the speed of the request manager by modifying the delay under [Req]Delay (default 50 ms). By setting [Page]ScanTime and [Req]Delay to zero and one respectively, the screen will update as fast as the I/O server can furnish the necessary data. Each scan of the page generates an associated request to the I/O server, so heavy-handed use of these parameters can cause excessive network load. See Q1890 for more information. Don't forget that the human eye only perceives around 25 updates per second, so refreshing faster than this is of dubious benefit.
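As a simple illustration, a display client tuned for the fastest possible page updates (as described above) would carry entries along these lines in its citect.ini; if the entries are absent the defaults quoted above apply:

    [Page]
    ScanTime=0

    [Req]
    Delay=1

Remember that this trades network and I/O server load for screen update speed, so only use it on clients that genuinely need it.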

At project compile time, Citect prepares a number of requests which are associated with particular pages. As a general rule of thumb, Citect will prepare one request for all the page's digital data from unit 1, another request for all the page's analog data from unit 1, another for all the page's digital data from unit 2 and so on. At runtime this set of requests is issued to supply the page data. The requests are short and more than twenty can fit into one network frame, so a page needs to be quite complex and require data from many different sources before this becomes an issue.

In the same way, the rate at which alarms are scanned can be altered through the [Alarm]ScanTime parameter (default 500 ms). Typically it will make little or no difference to efficient and safe plant operation if this is set as high as 2000 (2 seconds). This setting has the potential to radically affect system performance because of the amount of data required by the alarm server. For example, setting this parameter to 1000 (1 second) will nominally halve the amount of data required by the alarm server. The actual reduction in load on the I/O delivery system will be somewhat less than this, depending on various optimisations, but it will still be significant.
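To continue the example, relaxing the alarm scan on an alarm server is a one-line change in citect.ini (the value shown is illustrative only and should be checked against your plant's requirements):

    [Alarm]
    ScanTime=2000

Compared with the 500 ms default, this nominally quarters the rate at which the alarm server asks the I/O servers for alarm data, although as noted above the real saving depends on how well those requests were already being combined with others.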

An equivalent parameter does not exist for the trend server, because trend sample rates are determined by the project, not the machine's settings. If you feel the trend server is slowing things down because of the amount of data it requires, you will have to change the trend tag definitions, delete the trend files and start again. This is not a setting that can be easily changed, so careful consideration should be given to sample rates when the project is being designed.

There are other ways a client can be a user of data, including report serving, Cicode and events. However, considering the extremely transient and unpredictable nature of these functions, I will not consider them further.

The Network (Client to Server)

A large number of installed Citect systems are networked to provide redundancy and 'data everywhere', so we must consider the network as a possible performance problem. Generally, even large Citect systems rarely suffer from network performance problems, because Citect is an efficient user of network bandwidth. Even a client requesting data to support a refresh rate of 20 Hz will only drive utilisation to between 10 and 15%, and that bandwidth is shared with other users, including the I/O server to device comms. Using default settings for scan time and so on, each client will probably contribute about 0.5% to utilisation as a general rule of thumb. Add to this the options afforded by fast ethernet, FDDI and ATM and you will seldom suffer a bandwidth-related problem. You can, however, apply ordinary network tuning principles to this kind of problem and come up with good results. High performance sites use fast ethernet switches, bridges and hubs to compartmentalise the load and provide dedicated trunks between I/O servers and heavy users of Citect data, such as alarm and trend servers.

Under certain circumstances the protocol in use may be hindering good performance. Microsoft has tuned most of its protocols to deliver good performance in a typical office environment: file transfer and print sharing involve a few large chunks of data, not the hundreds of small packets that Citect generates. NetBEUI, IPX and TCP/IP can sometimes be manually modified to provide better realtime response. This is especially the case when these protocols are used on Windows for Workgroups, since more recent revisions have adaptive algorithms which can modify behaviour on the fly. In general terms, the protocols seek to save bandwidth by joining an acknowledgement packet to another packet which happens to be going in the same direction. If no convenient packet is forthcoming, the protocol eventually tires of waiting and sends an acknowledgement on its own. This wait can be up to 100 milliseconds, which translates into a substantial performance hit to realtime data distribution. There are other extensions to IPX, NetBEUI and TCP/IP, such as windowing and nagling (the Nagle algorithm), which can also affect your network communication speed. Under Windows for Workgroups these extensions are easy to switch off for IPX and NetBEUI; detailed instructions can be found in Knowledge Base articles Q1711 and Q1721. The behaviour of TCP/IP under Workgroups cannot be modified, so compensation must be made in Citect by changing the number of network buffers; you can find more specific information in Q1874. These buffer settings are by default set to their optimum levels after performance testing by CiT, so resist the urge to experiment unless you are really desperate.

The versions of these protocols which ship with Windows 95 and Windows NT 4 have been modified in the same way. However, despite our discussions with Microsoft, we have lost some of the options for manually altering protocol behaviour. In spite of this, these protocols perform quite well in general, especially TCP/IP under NT 4, which we regard as the best performing protocol for Citect at present. Because so many people wish to use this configuration there has been considerable experimentation with it, and we have found that it offers impressive performance without any tweaking on our part.

Recent versions of both Citect and Windows have seen the raising (or complete removal) of many network-related memory limitations. For example, the [Lan]WritePool and [Lan]ReadPool buffers for 32 bit versions of Citect now default to 256, with a maximum of 1024. 16 bit versions currently reserve 64 buffers for reading and writing, to conserve critical memory. Past versions of Citect defaulted these settings to 32 buffers or less. For this reason you are unlikely to have to modify these settings now.

Other settings which may be used to modify network performance are [Lan]SesSendBuf and [Lan]SesRecBuf. These determine the number of working buffers (inside Citect's NetBIOS layer) devoted to handling transmit and receive operations for each network session. For example, if SesSendBuf = 4, the system will allocate 4 buffers to handle transmissions. The buffers are occupied as each send operation is performed, until none remain; Citect will then wait until one returns before continuing. In this way Citect can handle momentary surges in network traffic, although setting this higher may not yield better performance under other circumstances. It also allows a kind of pending command process which minimises the overhead incurred in transmitting network messages. With Citect version 5 under NT, SesSendBuf is 32 by default (in previous versions and under other operating systems it is 2). SesRecBuf has the equivalent function for the receipt of messages and defaults to 2. Raising it has no benefit and may depress performance. Conversely, setting SesRecBuf to 1 doesn't greatly affect performance but can improve network reliability, and we occasionally try it if the network is experiencing problems (such as packets received out of order). Modifying these settings may become necessary to overcome poor performance when using older versions of Citect or the operating system. See Q1711, Q1721 and Q1736 for more on this topic. In general SesSendBuf should rarely need to be increased above 10; if you do, go slowly, as this uses up memory (under WFW or Win95 with 16 bit Citect) which may be needed elsewhere.
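By way of illustration, the following citect.ini fragment spells out the [Lan] settings discussed in the last two paragraphs at their 32 bit Citect 5 / NT defaults. There is normally no need to set them at all; treat anything other than the defaults as experimental:

    [Lan]
    ReadPool=256
    WritePool=256
    SesSendBuf=32
    SesRecBuf=2

If you do experiment, change one setting at a time and measure the result, keeping in mind the memory cost noted above for 16 bit Citect.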

The I/O Server

As the data gathering brains of your Citect system, the I/O server is a critical organ, performance-wise. There are a number of ways to tune your I/O server(s), including the use of driver parameters, server distribution and load splitting.

The protocol driver is the piece of software which manages communication between the Citect I/O server and the I/O device. This relatively small subsystem operates in semi-separation from the rest of Citect, receiving requests from the I/O server, queueing them, sending them and receiving the associated responses. Each driver shipped with Citect has a number of parameters which you can change to affect the way the driver performs each of these tasks. As with the network defaults, the driver settings are tested thoroughly to provide the best possible performance under most circumstances, and in general you should not need to change them. Nevertheless, I shall briefly describe the main driver parameters which could substantially alter performance.

Block defines the basic chunk size which the I/O server can expect the I/O device to deal with. Because of the speeds of some proprietary communications media, protocol overheads can add a fair bit of time to general comms. To get around this, the I/O server will attempt to ask for one big chunk of data which can satisfy several different requests. This parameter is typically a trade-off between the number of requests between the I/O server and the I/O device and the size of those requests. The I/O server is capable of taking requests from several different clients, or even wildly different requests from the same client, and combining them into a few large 'Block' sized requests, thus making the most efficient use of the channel. Unless you are using a protocol in a manner not originally envisaged by our developers, do not modify this parameter.

TransmitDelay (or simply Delay) is a parameter which deliberately slows the driver down so that the I/O device can keep up. It is mainly designed for serial protocols - some yield a timeout if it is set too small. Ordinarily a driver will submit a new request immediately after receiving a reply. This can bog some devices down, so this delay is introduced to keep the I/O device happy: the driver will wait this amount of time before proceeding with the next request. Most drivers have this set to zero (milliseconds), but there are some notable exceptions. Reducing it may give you trouble - it is usually there for a reason.

MaxPending is an abbreviation of Maximum Pending Commands. Some I/O devices maintain an internal queue of commands which are serviced one after another. The driver therefore knows it can send a certain number of requests before expecting any response. This means that while the I/O device is busy building the response to a previous request, the channel can be used to deliver another request for immediate action - it is like sending requests in parallel. This parameter can be especially effective when the channel is a little slower, i.e. when communication time is a meaningful fraction of the total request service time. Note that the ability to queue pending commands tends to distort the channel usage figure for the driver (numbers over 100% are not uncommon).

Typically, drivers fall into one of three categories with respect to MaxPending. Category one contains those that do not support more than one pending command: the device must respond to a request before another is sent. This is true of MODBUS and most others. In this case a MaxPending queue may be implemented within the driver to 'fake it', thereby at least saving some driver overhead; in fact, MODBUS performance will be hindered if MaxPending is set to anything but 2. Category two covers drivers designed to interface to devices which do support a certain number of pending commands; Allen-Bradley devices are an example. Depending on the media over which communication is taking place, it may be justifiable to raise this number, but if no benefit is forthcoming it is preferable to return to the default settings. The third category contains more advanced drivers which allow the packing of multiple individual requests into one communication frame. TITCP/IP can do this, packing up to 14 NITP frames into one CAMP frame. This is the third form of MaxPending.

The Timeout parameter determines how long the driver will wait for a response before declaring it overdue and asking again (if Retries is set - see below). If your comms method is particularly slow you may wish to push this time out somewhat to avoid too many timeout errors. Conversely, if you have adapted a protocol usually used over slower links to a high speed link, you may wish to shorten it to get a truer report of link performance.

Retries is the number of attempts the driver will make to the I/O device before considering that it has not responded. As mentioned above, Citect will wait for [Timeout] before performing a retry. Once this Timeout x Retries period has expired, Citect will raise a hardware alarm notifying you that the device has entered an error state and is not responding. This parameter can be increased if you feel Citect needs to persist longer before raising an alarm, but normally the default setting is appropriate.

PollTime determines how often the driver checks the port for incoming traffic. For the best performance this should be set to zero, which means the driver waits for the port to interrupt it when something comes in. This 'interrupt mode' is the most efficient means of operation, but not all drivers support it. If a driver requires a PollTime, reducing this setting may deliver some benefit, but a low (non-zero) value will be accompanied by an increase in CPU usage. As before, this setting is preset by CiT to deliver the best performance under most circumstances.
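Driver parameters live in a citect.ini section named after the protocol. As a purely illustrative sketch (the parameter set, exact spellings and defaults differ from driver to driver, so check the driver's documentation before copying any of this), a serial MODBUS channel might carry entries along these lines, with the values shown here chosen only to demonstrate where each parameter sits, not as recommendations:

    [MODBUS]
    Block=16
    Delay=10
    MaxPending=2
    Timeout=2000
    Retries=2
    PollTime=0

As stressed throughout this section, the shipped defaults are already tuned and should be your starting point.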

In many smaller networked systems, one machine will be the I/O, alarm, trend and report server and the other machines will be pure display clients. Under these circumstances your network will be very lightly loaded while, depending on the project, the server machine might be struggling. In this case performance may be improved by experimenting with different server locations; that is, try moving a large processing load like the trend server off the I/O server machine and onto one of your client machines. The same can be done for the alarm server. Very large systems sometimes have eight or ten machines dedicated to these tasks: two I/O servers, two alarm servers and so on. This kind of task delegation is easy to do since all it requires is a modification to the .ini file - use the Setup Wizard to do this. If you are moving your trend server to another machine, remember that the trend files (if they are held locally) will have to be moved as well.
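For reference, the Setup Wizard ultimately just writes entries like the following into citect.ini; they are shown here only to illustrate what is being changed (exact entries can vary with version, so let the wizard do the editing):

    [IOServer]
    Server=1

    [Alarm]
    Server=1

    [Trend]
    Server=0

    [Report]
    Server=1

In this hypothetical layout the machine runs the I/O, alarm and report servers, while the trend server has been moved to another node.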

You may find that the system requires a large amount of data from a particular device and this is overloading the I/O server's comms channel, even after you have moved your alarm and trend servers elsewhere. In this situation you may want to consider moving some of the load from one I/O server to another. This option requires a separate channel to the I/O device in question, but with the growth of ethernet communications in process control this is often trivial. You can then define some tags as coming from the same device but via a different server, which forces clients to appeal to I/O server 1 for some of their data and I/O server 2 for the rest. This plan can be extended to practically any number of I/O servers, thereby limiting your communications bottleneck to the port on the I/O device itself.

Another thing you may want to consider is caching. The I/O server can cache data from any particular I/O device and service client requests out of the cache instead of sending a request to the device itself. Typically the cache is set to around 300 ms, but you can configure it to any number you desire, tuned to correspond with how quickly you expect the data to change in the PLC. Using the cache can also be handy if you have data from different I/O devices displayed on the same page. One device might respond quickly and the other less so. To keep page displays fast and avoid having the fast device always waiting for the slow one, set the cache on the slow device(s) to a little less than the device's response time. That way, when the client requests the data for the next display, the I/O server is more likely to hit the cache for the 'slower' data. Consult Q1068, Q1070 and Q1972 for more information on tuning caches.
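As a worked example of that rule of thumb (the figures are purely illustrative): suppose a slow device turns a request around in roughly 800 ms while the page scan is left at the 250 ms default. Setting that device's cache to around 700 ms means that, after each fresh read, roughly two out of every three page scans can be answered straight from the cache, so the page is no longer held up by the slow device on every single update. Measure your own device's turnaround time before picking a cache time.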

The Channel (I/O server to I/O device)

Citect supports many protocols and methods of communicating with I/O devices, and there are many more besides. Even within one manufacturer's offering there may be half a dozen different methods of talking. Given this, your choice of hardware and software to facilitate comms may have a large effect on your system's performance.

Let me just say at this point that I presume you will ensure that your design minimises any possible interference with traffic: good clear serial lines, well terminated fibre, properly segmented networks and so on. After that, go for the fastest arrangement you can afford to ensure the best performance.

Ethernet generally offers superior performance and is quite popular at present. Many PLC manufacturers offer ethernet communications solutions as part of their current product lineup. Using ethernet allows flexibility and, in some cases, economy in comms. It is also fast. Some plants deploy fibre networks with hubs, switches and so on to manage I/O server to PLC comms, and this is probably about as good as you can get. Don't be fooled into thinking that ethernet is a panacea, though. Q1823 explains that dedicated serial lines may be just as effective in ensuring good performance if the processors you are talking to do not make the best use of the available bandwidth. Also think about how your ethernet system is integrated. Consider an Allen-Bradley network consisting of a Pyramid Integrator connected to a set of PLC5s via DH+. The DH+ segments run at 57k while Citect talks to the Integrator via ethernet at 10 Mb. Requests are handled by the Integrator and then passed onto the lower bandwidth channels for processing. The ethernet is therefore running at a fraction of its potential, waiting for the slower comms to deliver the data. This setup might be better served by going direct to the individual PLC5s via ethernet.

Obviously financial considerations and geography play a part in this kind of conjecture but even so, ethernet should be used carefully or expected gains may not materialise.

The I/O Device

Last in the line, but most important, is the PLC itself. I have already mentioned that Citect is always looking to optimise communications, especially by asking for large chunks of data with a view to satisfying multiple client requests in one go. You can significantly improve Citect's chances by grouping similar data in the I/O device memory. By this I mean designating one set of registers specifically for alarms and another set for all your trend tags, so that a large amount of your data resides in these two blocks. Citect can then make one read request for all the digital alarms and another for all the trends. Since these two requests will always be happening, you make things easier by grouping the data conveniently; once Citect has to make multiple requests for these basic requirements you start to lose that best possible speed. This sort of thing must be done at the start of a project, however, and is not usually an option for a retrofit on a legacy system. Remember, the aim is to reduce the number of reads Citect has to do to support typical operation. This is probably the single most important thing you can do to improve the speed of your system.
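As a sketch of the idea (the addresses are hypothetical Modbus-style holding registers, used purely for illustration): if 200 alarm bits are packed into the contiguous registers 40001 to 40013, Citect can cover all of them with a single blocked read on every alarm scan. If those same bits are scattered across 40001, 40250, 41100 and so on, the I/O server may need a dozen or more separate reads to gather exactly the same information, and every one of them costs a full request/response turnaround on the channel.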

In general, Citect can support a very fast update on a page, but the limiting factor will be the speed with which the associated device(s) can turn a request around. In fact typically it is this which limits a system's performance. You should not expect great comms from a PLC which is already struggling to maintain a set scan time. Some PLCs allow you to set aside a certain amount of time per scan to handle communications. You may be able to bias your machine in favour of comms if this really becomes a problem.

Conclusion

In general you will do well if you take a holistic approach to Citect performance. Remember that Citect is client-driven, and eventually all that data has to come from somewhere. Consider which data is frequently used and start with that. You may need segmented networks and extra I/O servers to keep up. Some data loads, like alarms and trends, are constant and you can plan carefully for these. Others, like display page data, are transient and more difficult to optimise, but in general they do not impose a large load on the system. Citect optimises at compile time and run time to get the best possible performance from the I/O server, but you will have to back it up by providing good fast channels to your sources and ensuring they are not too busy to answer.

 

Keywords: