The containers start normally and serve traffic correctly for a while.
After some time, requests hang and the services become unreachable.
Restarting the containers fixes the issue temporarily.
No errors appear in the application logs.
This happens because the host machine is running out of memory and the Linux OOM killer is silently terminating container processes.
In cloud VMs, Docker containers share the host’s memory unless limits are explicitly set. When memory pressure rises, the kernel’s OOM killer terminates whichever process has the highest OOM score, and that is often a memory-hungry containerized app. If it kills a worker process rather than the container’s main process, the container itself keeps running but stops serving requests, so from the outside the service just looks frozen and nothing shows up in the application logs.
You can confirm this by checking the VM’s system logs:
dmesg | grep -i kill
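On systemd-based distributions, the same kernel messages are also available through journalctl; this is a roughly equivalent check:

journalctl -k | grep -i oom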
If you see lines like "Out of memory: Killed process ..." naming your app’s processes, that’s the cause. The fix is to set explicit memory limits per container and make sure the VM has enough RAM for peak load:
docker run -m 1g --memory-swap 1g myapp
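To verify the limit actually took effect, and to see whether Docker recorded an OOM kill under that limit, you can inspect the container (myapp here is just the container from the command above):

docker inspect -f '{{.HostConfig.Memory}} {{.State.OOMKilled}}' myapp

HostConfig.Memory is the configured limit in bytes, and State.OOMKilled indicates the container was killed for exceeding its own limit rather than by host-level memory pressure.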
In Kubernetes, this is done through resource requests and limits. Without them, nodes can overcommit memory, and pods get OOM-killed or evicted by the kubelet unpredictably.
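As a rough sketch, you can add them to an existing workload with kubectl (the Deployment name myapp and the values are placeholders; size them for your real peak usage):

kubectl set resources deployment myapp --requests=memory=512Mi --limits=memory=1Gi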
A less obvious variation is memory leaks inside the container, which slowly push the host into OOM even if the initial footprint looks fine.
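One low-effort way to catch such a leak is to sample container memory periodically and see whether usage only ever grows, for example:

while true; do docker stats --no-stream --format '{{.Name}}: {{.MemUsage}}'; sleep 60; done

If a container’s usage climbs steadily toward the host’s capacity between restarts, the leak is the real problem and the limits above only contain the blast radius.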