Applies To:
  • CitectSCADA 1.00, 1.01, 1.10, 1.11, 2.00, 2.01

This information assumes that you know how to use the Citect debugging kernel.

CPU overload can be caused by several problems. (1) The CPU may be underpowered for your requirements, in which case it may have to be upgraded to a more powerful model. (2) You may have some poorly designed Cicode that is consuming too much CPU - some redesign will fix this problem. (3) There may be unnecessary PLC communication occurring that is generating excessive CPU loading (especially with DISK I/O Devices) - redesigning the variable usage in your Cicode may fix this problem. (4) Running some tasks faster than required may also cause CPU overload - tuning these tasks will fix this problem.

First you must find the cause of the CPU overload. You can verify CPU overload by watching the CPU level in the debug kernel on Page General. You can also trend the CPU level with the CitectInfo() function for a long term view of the computer's CPU loading. (Note that CitectInfo() returns the CPU loading of the computer calling it. To trend the loading of a computer other than the Trend Server, run a Cicode task on that computer which calls CitectInfo() and writes the value into a disk PLC variable that the trend server will then trend.)

For good system response, the CPU loading should be around 20% to 40%. The CPU will peak when some activity occurs, for example when changing pages, running reports, flushing trends to disk, processing alarms, etc. The size of the peaks will depend on how complex and how big your project is. It will also depend on what the Citect computer is doing. The largest users of CPU tend to be the Alarm Server, the I/O Server (depending on the type of protocol being used), the Citect Client and any complex Cicode being executed.

The following discussion assumes that you have a network with separate alarm, trend, report, I/O and Citect clients. However, it is just as applicable to single node systems. (But you may have to adjust the CPU calculations, because if you are running all servers on one computer then the total CPU loading will be the sum of all the servers plus the Citect client.)

The alarm server will generate a near constant load as it processes all the alarms at the required period. The more alarms (and the faster the scanning) the higher the loading. A Citect system processing 20,000 alarms on a 486/66 CPU at a 1.0 second period will operate at around 30% loading, so this gives you an idea of what your alarm server loading should be. Note that the CPU loading will peak when alarms are tripped, because Citect must execute the On and Off Action Cicode as well as log the alarms to the required devices. Use the Alarm ScanTime parameter (default=500ms) to adjust the scan time of the alarms. By slowing down the alarm scan rate, you can lower the loading on the CPU. The CPU loading is also affected by Citect Clients displaying the Alarm Page, as the alarm server must send the active alarm records over the network to these computers. If you have many clients on the alarm page, you will see the CPU loading on the Alarm Server rise. Scrolling through the alarm page on many clients can cause very high loading on the alarm server, however in normal plant operation this will not occur.
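For example, to halve the alarm scan rate you could set the Alarm ScanTime parameter in the Citect.INI file (the value shown is illustrative):

   [Alarm]
   ScanTime=1000

A 1000 ms scan roughly halves the alarm processing load compared with the 500 ms default, at the cost of alarms being detected up to half a second later.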

The I/O Server CPU loading is highly dependent on the type of protocol being used. Most protocols generate little loading, as Citect must wait for the generally slow PLC to respond before it sends the data back to the computer that requested it. High CPU loading can occur on the I/O server due to fast PLC communication. The requesting clients will ask for the data from the I/O server as soon as they have received their last response, so there may be heavy traffic between clients, the I/O server and the PLCs. This can occur with DISK PLCs, where the I/O server can respond with the data very quickly. It can also occur when you have the Units cache time set too long. As the loading is highly protocol dependent, the only way to check it is with the following procedure.

While the trend server gathers data, it does not generate a lot of CPU loading because it only requests data from the I/O Server and then stores the results in a buffer. When the trend buffers become full, the trend server must write the data to disk. This will cause peak CPU loading at a regular period, which will depend on the sample period of the trends and the size of the buffer. A Citect system processing 2000 trends at a 10 second period will consume around 4% CPU loading (peaking at 10%) on a 486/66 computer, so this gives you an idea of what your trend server loading should be. The trend buffer is controlled by the Trend TrendBufSize parameter and defaults to 1024 bytes. A 2 second trend will take 512 samples (two bytes per sample), or 512 x 2 / 60 = about 17 minutes, to fill up. At this time, the trend server will flush the data to disk and increase the CPU loading. Citect will happily run with several thousand trends with this buffer size. We don't recommend changing the size of the trend buffers. Only reduce the size of the buffer if you are really short of memory (better to buy some more, it's cheap!). Increasing the size will improve performance slightly, however 1024 is the best trade-off between speed and memory usage. For the best performance you should have a correctly configured disk cache (make sure it is large enough), and enable the write behind feature of the cache (this option is disabled by default in DOS 6.0 and later).
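The buffer size is set with the Trend TrendBufSize parameter in the Citect.INI file. The entry below shows the default, for illustration only - as noted above, you should rarely change it:

   [Trend]
   TrendBufSize=1024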

The trend server will also generate CPU loading from Citect clients requesting trend data. For example, when a Citect Client first displays a page with a trend, the trend server must read the data from its buffers (or from the disk) and send it over the network. This will also occur when you scroll a trend, or use the TrnGetTable() or TrnSetTable() Cicode functions. While the Citect client is displaying the trend, very little loading is generated - so normally there is light loading from clients. You can see this effect by displaying the CPU on the trend server while Citect clients scroll their trends back and forth.

The CPU loading generated by the report server is totally dependent on what the report is doing, and how often it is doing it. Large complex reports that access devices, or do many calculations, can consume almost unlimited amounts of CPU. The scheduling of the reports by Citect uses negligible amounts of CPU. See the procedure below to test for report loading.

The CPU loading generated by the Citect client will be high (up to a peak of 100%) when you change pages, because Citect will try to display the page as fast as possible. However, while you are on the same page there should be little loading (typically 0 to 10%).

In the debug kernel, first check the average CPU loading on Page General (typically 20% to 40%). For a long term view, you should trend this value using the CitectInfo("general", "", 0) function. The best way is to create a Cicode task on each computer that calls this function in a loop and writes the value into a disk unit that the trend server will then trend. For example:

while TRUE do
   ! unique variable for each computer
   ComputerX = CitectInfo("General", "", 0);
   ! don't forget to sleep or this will consume heaps of CPU
   Sleep(10);
end

If the loading is high then check the following.

Display the Page Table Stats page. This page shows the cycle and execution times of the Alarms, Trends, Cicode and Page animation. You can estimate the amount of CPU each task is using by checking how fast the count is incrementing, and what the average execution time is. For example, if the average execution time is 100 ms, and the count is incrementing 2 times a second, then this task is generating a 20% CPU load. Look for any unusually high values (say greater than 20%). The tasks are associated with the following processes.

Code(n) The Cicode task where (n) is the task number. Task 0 is the task used for processing animation, and the stats for this task are invalid.
Citect(n) The animation task for window number (n). This task displays all the animation for the page in that window. (In Citect versions before 2.0 this is shown as Page, and all windows are shown on a single line.)
xxxx Alm The alarm processing task, where xxxx is the type of alarm, eg digital, analog etc.
Trend Log The trend logging task. This task writes the trend data to disk.
Tnd.Acq.(n) The trend acquisition task. This task requests the data from the I/O Server and places it in the buffer. Trends of the same period are serviced by the same task; each new task services trends at a different period.

Watch out for high usage of the Code(n) type tasks. (These are Cicode tasks or reports you have written.) You can find out what Code(n) is associated with by displaying Page Table Cicode. This table shows each Cicode task, its current and last user functions, and the last Cicode function it executed. From this you can usually recognise which task it is. You can also call TaskHnd() in your Cicode to get the correct task number. (This is shown in the hnd field of this display.) Cicode that generates a large amount of CPU usage is usually running in a loop without any (or enough) Sleep(). If you have a piece of Cicode that loops continuously, you should always call at least Sleep(1) - the bigger the sleep, the lower the CPU load.
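For example, a minimal sketch of a well behaved continuous loop (the function name and the work done are illustrative):

   FUNCTION PollLoop()
      WHILE TRUE DO
         ! do the periodic work here
         ! yield the CPU - a longer sleep lowers the load further
         Sleep(1);
      END
   END

Without the Sleep() call, this loop would consume all available CPU between the work it does.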

The Task Control Block Display, accessed by 'Page Task', will display a list of all the Citect kernel threads running and how much relative CPU each one is using. Note that these figures always add up to 100%, so if the CPU loading on Page General is 20% and one task on Page Task is at 10%, then that task is using 10% of 20%, ie 2% of the total CPU. You should page down through all the tasks looking for the ones that are using the most CPU. The following tasks are the ones to look out for.
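The conversion from a relative Page Task figure to a share of total CPU can be written as a small Cicode helper (the function name is illustrative):

   REAL FUNCTION TaskTotalCpu(REAL TotalPct, REAL RelPct)
      ! TotalPct is the loading from Page General,
      ! RelPct is the task's relative figure from Page Task
      RETURN TotalPct * RelPct / 100.0;
   END

For example, TaskTotalCpu(20, 10) gives 2, the task's share of the total CPU.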

CodeExec This kernel thread is the one that runs all Cicode tasks. High usage means that your Cicode or reports are using up the CPU. Note that normally this will be quite high (20% to 60%).
Msg.Task This kernel thread takes messages off the network. This may show high loading on a busy network.
xxx These kernel threads are used by the I/O server to service the protocol drivers, where xxx is the name of the port associated with the protocol.
Req.Post.Task.Re This kernel thread is used to offload PLC data from the I/O server to the clients. As it offloads the data, it will also call the associated Cicode that requested the data. This includes the animation of the page and the starting of reports and other Cicode threads. This task is normally a high user of CPU.
Show.xxxx These kernel threads are used to display the data in the debug kernel - for example, the data you are looking at now. These tasks may be high users of CPU, however you should ignore them as you only display these pages while doing diagnostic work.
Trend.Log.Write The kernel thread used for writing trend data to disk.
Trend.Acq The kernel thread used for getting trend data from the I/O server.
Dev.Spool The kernel thread used for spooling logged data to the device system, for example from alarm logging.