If you suspect that the JetPatch connector on your Linux machine is causing high CPU usage, you can use simple tools to monitor JetPatch's main processes on the endpoint and share the data with the JetPatch support team.
Step 1:
To find the PID of each JetPatch process, run:
ps -ef | grep intigua
The output of this command will look like this example:
[root@linux home]# ps -ef | grep intigua
root 800 1 0 Dec27 ? 00:00:12 /usr/local/intigua/vAgentManager/PackageManager/vlink/vlink/bin/connector64 process
root 32254 2049 0 13:39 pts/0 00:00:00 grep --color=auto intigua --> Not relevant
The PIDs of the processes whose command line includes the text 'intigua' (except for the grep command itself, which usually appears as grep --color=auto intigua and is not relevant) are used in the next step as part of the -p<pid> flag.
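If pgrep is available on the machine (it is on most modern Linux distributions), you can list these PIDs directly and skip the grep line altogether; this is only a convenience, and the ps output above is equally valid:
# list the PIDs of processes whose command line contains 'intigua'
# (pgrep -f matches the full command line and never matches itself)
pgrep -f intigua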
Step 2:
For each PID from step 1, run a top command that collects CPU, memory, I/O and other statistics on the JetPatch process. To do that, run:
top -b -p<pid> -n 10 > topresults.txt
The -b flag makes the command run in batch mode rather than interactively on screen. You can change the number of samples with the -n flag (in the example above, 10 iterations were chosen). Each iteration captures the average of the last 3 seconds, so we recommend using at least 20 samples, capturing about 1 minute of process behavior overall. The -p<pid> flag should be set to -p800 according to the example above.
Once the top command completes, the prompt returns.
You can then view the output file (in the example above topresults.txt, but you can choose a different file name) with a text editor or with simple commands such as cat or vi.
You can either give a full path for the output file or just a file name, in which case it is written to the current directory, so be mindful of where you run the top command.
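If step 1 returned several JetPatch PIDs, a small loop along the following lines can collect the samples for all of them in one run. This is only a sketch: it assumes pgrep is available and writes one output file per PID (topresults_<pid>.txt), which you can adjust as needed:
# collect 20 samples (about 1 minute) for every 'intigua' process, one file per PID
for pid in $(pgrep -f intigua); do
    top -b -p "$pid" -n 20 > "topresults_${pid}.txt"
done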
Here is an example of the file output:
[root@linux home]# cat topresults.txt
top - 13:02:59 up 4 days, 11:38, 1 user, load average: 0.01, 0.02, 0.00
Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 6.2 sy, 0.0 ni, 93.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 1989.5 total, 1195.8 free, 242.1 used, 551.5 buff/cache
MiB Swap: 1640.0 total, 1640.0 free, 0.0 used. 1485.2 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
800 root 20 0 104188 7876 7488 S 0.0 0.4 0:12.03 connector64
top - 13:03:02 up 4 days, 11:38, 1 user, load average: 0.01, 0.02, 0.00
Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 1989.5 total, 1195.8 free, 242.2 used, 551.5 buff/cache
MiB Swap: 1640.0 total, 1640.0 free, 0.0 used. 1485.2 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
800 root 20 0 104188 7876 7488 S 0.0 0.4 0:12.03 connector64
top - 13:03:05 up 4 days, 11:38, 1 user, load average: 0.01, 0.02, 0.00
Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.3 us, 0.0 sy, 0.0 ni, 99.3 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
MiB Mem : 1989.5 total, 1195.8 free, 242.2 used, 551.5 buff/cache
MiB Swap: 1640.0 total, 1640.0 free, 0.0 used. 1485.2 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
800 root 20 0 104188 7876 7488 S 0.0 0.4 0:12.03 connector64
.
.
.
Analyzing this file:
If the average CPU over time is above 80%, meaning in all or most of the samples (10, 20, ...), repeat this operation 4-5 times (changing the output file name each time: topresults1.txt, topresults2.txt, ...). If the high-CPU behaviour is consistent across all runs, open a case with JetPatch support and include these findings.
However, if the CPU behaviour is bursty in nature but on average does not cross the 80% threshold, your system is performing normally and there is no cause for alarm.
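Instead of reading the samples by eye, you can average the %CPU column with a quick awk one-liner. This is only a sketch: it assumes the default top column layout (where %CPU is the 9th field of the process line) and the PID 800 from the example above; substitute your own PID and output file name:
# average the %CPU field (column 9) over every sample line for PID 800
awk '$1 == 800 { sum += $9; n++ } END { if (n) printf "average %%CPU over %d samples: %.1f\n", n, sum/n }' topresults.txt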