Overview
This article provides step-by-step instructions for monitoring and logging JetPatch connector processes on Linux endpoints when high CPU usage is suspected. It includes methods for identifying process IDs, collecting performance data, and analyzing results to determine if intervention is needed.
Identifying JetPatch Process IDs
To find the PIDs of JetPatch processes, run the following command:
ps -ef | grep intigua
The output will look similar to this example:
[root@linux home]# ps -ef | grep intigua
root       800     1  0 Dec27 ?        00:00:12 /usr/local/intigua/vAgentManager/PackageManager/vlink/vlink/bin/connector64 process
root     32254  2049  0 13:39 pts/0    00:00:00 grep --color=auto intigua   --> Not relevant
Tip: The PIDs of the processes whose command line includes 'intigua' (excluding the grep command itself, which usually appears as "grep --color=auto intigua") are used in the next step with the -p<pid> flag.
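If you prefer to list only the PIDs without the full ps output, most Linux distributions also ship pgrep, which matches against the full command line when given the -f flag and, unlike grep, never matches itself. This is an optional shortcut, assuming pgrep is installed:

# print one PID per line for every process whose command line contains "intigua"
pgrep -f intigua

Each line of the output is a PID you can use with the -p<pid> flag in the next step.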
Collecting Performance Data
For each PID identified in step 1, run a top command to collect CPU, memory, I/O, and other statistics on the JetPatch processes:
top -b -p<pid> -n 10 > topresults.txt
The -b flag runs top in batch mode, so the output is written to the file instead of the interactive screen.
You can increase the number of samples by changing the value of the -n flag (10 iterations in the example above).
Each iteration reflects activity over top's default 3-second refresh interval, so we recommend at least 20 samples (-n 20), capturing about one minute of process behavior.
The -p<pid> flag should be set to the actual PID of the process you want to monitor.
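For example, to capture 20 samples of the connector64 process with PID 800 from the ps output above:

top -b -p800 -n 20 > topresults.txt

If step 1 returned several JetPatch PIDs, top also accepts a comma-separated list (for example, -p800,801), so all of them can be sampled in a single run.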
Once the top command is completed, the prompt will return.
You can view the output file (topresults.txt in the example above; you can choose a different file name) in a text editor or with commands such as cat or vi.
Note: If you provide only a file name, the file is saved in the current working directory; specify a full path if you want it stored elsewhere, so be mindful of where you run the top command.
Example Output
Here is an example of the file output:
[root@linux home]# cat topresults.txt
top - 13:02:59 up 4 days, 11:38,  1 user,  load average: 0.01, 0.02, 0.00
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  6.2 sy,  0.0 ni, 93.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   1989.5 total,   1195.8 free,    242.1 used,    551.5 buff/cache
MiB Swap:   1640.0 total,   1640.0 free,      0.0 used.   1485.2 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
  800 root      20   0  104188   7876   7488 S   0.0   0.4   0:12.03 connector64

top - 13:03:02 up 4 days, 11:38,  1 user,  load average: 0.01, 0.02, 0.00
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   1989.5 total,   1195.8 free,    242.2 used,    551.5 buff/cache
MiB Swap:   1640.0 total,   1640.0 free,      0.0 used.   1485.2 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
  800 root      20   0  104188   7876   7488 S   0.0   0.4   0:12.03 connector64

top - 13:03:05 up 4 days, 11:38,  1 user,  load average: 0.01, 0.02, 0.00
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.3 us,  0.0 sy,  0.0 ni, 99.3 id,  0.0 wa,  0.3 hi,  0.0 si,  0.0 st
MiB Mem :   1989.5 total,   1195.8 free,    242.2 used,    551.5 buff/cache
MiB Swap:   1640.0 total,   1640.0 free,      0.0 used.   1485.2 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
  800 root      20   0  104188   7876   7488 S   0.0   0.4   0:12.03 connector64
Analyzing the Results
If CPU usage is above 80% in all or most of the sample instances (10, 20, ...), repeat this collection 4-5 times, changing the output file name each time (topresults1.txt, topresults2.txt, ...).
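To save typing, the repeated runs can be scripted with a small shell loop. This is a minimal sketch, assuming PID 800 from the example above; substitute your own PID and sample count:

# collect 5 rounds of 20 samples each, writing one output file per round
for i in 1 2 3 4 5; do
    top -b -p800 -n 20 > topresults$i.txt
done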
If the high CPU usage is consistent across all runs, open a case with JetPatch Support and include these findings.
However, if the CPU behavior is bursty but the average stays below the 80% threshold, your system is performing normally and there is no cause for alarm.
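Rather than scanning the samples by eye, you can average the %CPU values across all samples with a short awk one-liner. This is a minimal sketch, assuming the process name is connector64 and that %CPU is the ninth column of top's batch output (as in the example above; verify the column position with your top version):

# average the %CPU field (column 9) over every connector64 sample line
grep connector64 topresults.txt | awk '{sum += $9; n++} END { if (n) printf "average CPU over %d samples: %.1f%%\n", n, sum/n }'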