
High Swap Memory Usage on MGR Server

Overview

You are experiencing swap usage above 50%, and many Out of Memory errors are observed on the MGR server console.


Solution

High swap usage can have multiple causes, one of them being memory leaks at the OS level.

Swap space in Linux is used when the physical memory (RAM) is full. If the system needs more memory resources while RAM is exhausted, inactive pages in memory are moved to the swap space. For applications with high memory utilization, swap space allows memory to be paged out to disk, which delays or prevents the termination of applications by the OS. When the MGR runs out of memory, it kills idle web access (httpd) processes, which can hang and occupy memory without serving any purpose.
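
To confirm the symptom, you can check the overall swap usage and identify which processes are holding swap (for example, idle httpd processes). This is a minimal sketch using standard Linux tools; run it as root so that all processes are visible:

  free -h                 # overall RAM and swap usage
  cat /proc/swaps         # configured swap areas and how much of each is used
  # Per-process swap usage (VmSwap), largest consumers first
  grep VmSwap /proc/[0-9]*/status 2>/dev/null | sort -k2 -nr | head -20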

In order to resolve this:

  • It is recommended to reboot the systems once every 6 months.
  • Increase the RAM on the server and then increase the swap space accordingly (see the sketch after this list).
  • As a workaround, you can reduce the swap memory consumption by restarting the MGR application as the root user with the below commands:
    • /usr/TextPass/bin/tp_mgr_stop
    • /usr/TextPass/bin/tp_mgr_start
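
If RAM is increased, the swap space can be grown with it by adding a swap file, without repartitioning the disk. This is a minimal sketch, assuming a 4 GB swap file at /swapfile (size and path are illustrative; run as root):

  dd if=/dev/zero of=/swapfile bs=1M count=4096   # create a 4 GB file
  chmod 600 /swapfile                             # restrict permissions
  mkswap /swapfile                                # format it as swap
  swapon /swapfile                                # enable it immediately
  cat /proc/swaps                                 # verify the new swap area is active
  # Add "/swapfile swap swap defaults 0 0" to /etc/fstab to keep it across reboots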

If the issue still occurs after the reboots and the higher memory allocation, and swap usage climbs again within a short period of time, open a Support ticket with the information below (a script that collects the command output in one pass is sketched after the list):

  • uname -a > /tmp/support_information.txt
  • date >> /tmp/support_information.txt
  • uptime >> /tmp/support_information.txt
  • ps auxwww >> /tmp/support_information.txt
  • mount >> /tmp/support_information.txt
  • dmesg >> /tmp/support_information.txt
  • top -b -n 5 >> /tmp/support_information.txt
  • vmstat 1 5 >> /tmp/support_information.txt
  • iostat -x -m 1 5 >> /tmp/support_information.txt
  • mpstat -A 1 5 >> /tmp/support_information.txt
  • df -kh >> /tmp/support_information.txt
  • getenforce >> /tmp/support_information.txt
  • rpm -qa | grep -i mysql >> /tmp/support_information.txt
  • find / -name mysql >> /tmp/support_information.txt
  • pidstat -dhru >> /tmp/support_information.txt
  • cat /proc/meminfo >> /tmp/support_information.txt
  • free -h >> /tmp/support_information.txt
  • cat /proc/sys/vm/swappiness >> /tmp/support_information.txt
  • sysctl -a >> /tmp/support_information.txt
  • cat /proc/<MYSQL PID>/smaps >> /tmp/support_information.txt
  • cat /proc/swaps >> /tmp/support_information.txt
  • mysql --login-path=mysql_root -e "show variables;" >> /tmp/support_information.txt
  • A copy of the /etc/my.cnf file from the MGR node.
  • Execute a memory monitoring script for 24 hours to identify processes that consume high memory or eventually get killed by the OS. The following script can be executed as the textpass user:
    Create a new file with the vi mem_top.sh command and include the following lines:
    #!/bin/bash
    while true; do
    date +"%D %H:%M:%S"
    top -o %MEM -b -n1
    sleep 5
    done

    Save it with the :wq! command, make it executable with chmod +x mem_top.sh, and run it with: nohup ./mem_top.sh > <hostname>_mem_top.txt &
    To stop its execution, kill the PID reported by the previous command using kill -9 <PID>.
    Note: Write the output file to a disk partition with enough free space; it can reach around 2 GB for 24 hours of monitoring.

  • The below files from the MGR node:
    • /var/TextPass/MGR/tp_mgr_change_log.txt
    • /var/TextPass/MGR/tp_mgr_error_log.txt
  • Logs under /var/TextPass/MGR/logs
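
To gather the command output listed above in a single pass, the commands can be wrapped in a small script. This is a minimal sketch, not a supplied tool: the script name is illustrative, it assumes it is run as root, and the /proc/<MYSQL PID>/smaps step still has to be run manually with the actual MySQL PID:

  #!/bin/bash
  # collect_support_info.sh - gather the diagnostics listed above into one file
  OUT=/tmp/support_information.txt
  : > "$OUT"                                      # start with an empty file
  for CMD in "uname -a" "date" "uptime" "ps auxwww" "mount" "dmesg" \
             "top -b -n 5" "vmstat 1 5" "iostat -x -m 1 5" "mpstat -A 1 5" \
             "df -kh" "getenforce" "pidstat -dhru" "cat /proc/meminfo" \
             "free -h" "cat /proc/sys/vm/swappiness" "cat /proc/swaps"; do
      echo "===== $CMD =====" >> "$OUT"
      $CMD >> "$OUT" 2>&1
  done
  rpm -qa | grep -i mysql >> "$OUT"
  find / -name mysql >> "$OUT" 2>/dev/null
  sysctl -a >> "$OUT" 2>/dev/null
  mysql --login-path=mysql_root -e "show variables;" >> "$OUT"
  # Still needed manually: cat /proc/<MYSQL PID>/smaps >> /tmp/support_information.txt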

<supportagent>

If all of the above information does not provide anything conclusive, the ticket should be escalated to the PS team.

</supportagent>
