How to Check System File Descriptor Limits on a Linux Server

A step-by-step guide to checking system file descriptor limits: monitor file descriptor usage, detect limit exhaustion, and prevent "too many open files" errors.

Last updated: 2026-01-11

Monitor system file descriptor limits to track file descriptor usage, detect limit exhaustion, and prevent "too many open files" errors. This guide shows you how to check file descriptor limits and set up automated monitoring.

For monitoring file descriptor usage, see Monitor File Descriptor Usage. For checking process file descriptors, see Check Open File Descriptors.

Why Checking File Descriptor Limits Matters

File descriptor limits prevent processes from opening unlimited files. When limits are exhausted, applications can fail with "too many open files" errors. Monitoring limits helps prevent these failures.
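
When a process exhausts its limit, the failure usually shows up in its logs. A quick way to confirm that a recent error was descriptor-related is to search those logs; the log path below is an example and varies by distribution, while journalctl covers systemd-based systems.

# Search system logs for descriptor exhaustion errors (log path varies by distribution)
grep -i "too many open files" /var/log/syslog
journalctl --since "1 hour ago" | grep -i "too many open files"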

Method 1: Check File Descriptor Limits

Check System Limits

# Check the current (soft) file descriptor limit for this shell
ulimit -n

# Check the hard limit
ulimit -Hn

# Check the soft limit explicitly (same value as ulimit -n)
ulimit -Sn

# Check limits for a specific process
cat /proc/<pid>/limits | grep "open files"
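
The <pid> placeholder must be replaced with a real process ID. If you only know the process name, pgrep can resolve it; "nginx" below is just an example name.

# Look up a process by name and show its open-files limits ("nginx" is an example)
PID=$(pgrep -o nginx)
cat /proc/$PID/limits | grep "open files"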

Check System-Wide Limits

# Check system-wide limit
cat /proc/sys/fs/file-max

# Check current file descriptor usage (three fields: allocated handles, unused handles, maximum)
cat /proc/sys/fs/file-nr

# Calculate usage percentage
MAX=$(cat /proc/sys/fs/file-max)
USED=$(cat /proc/sys/fs/file-nr | awk '{print $1}')
PERCENT=$(echo "scale=2; $USED * 100 / $MAX" | bc)
echo "File descriptor usage: $PERCENT% ($USED/$MAX)"

Method 2: Check Process File Descriptor Usage

Monitor Process File Descriptors

# Count open file descriptors for a process
lsof -p <pid> | wc -l

# Find processes with many open files
for pid in $(pgrep -f <process_name>); do
  COUNT=$(lsof -p $pid 2>/dev/null | wc -l)
  echo "PID $pid: $COUNT file descriptors"
done

# Find the processes with the most open files (the PID is the second lsof column)
lsof | awk 'NR > 1 {print $2}' | sort | uniq -c | sort -rn | head -10
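
Keep in mind that lsof output includes a header line and entries such as memory-mapped files, so the counts above can overstate the real number of descriptors. Counting the entries in /proc/<pid>/fd gives an exact figure (you need permission to read other users' processes).

# Exact descriptor count: each entry in /proc/<pid>/fd is one open descriptor
ls /proc/<pid>/fd | wc -l

# Exact counts for every process matching a name ("myapp" is an example)
for pid in $(pgrep -f myapp); do
  echo "PID $pid: $(ls /proc/$pid/fd 2>/dev/null | wc -l) file descriptors"
done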

Method 3: Automated File Descriptor Monitoring with Zuzia.app

Set up automated monitoring to track file descriptor usage continuously and receive alerts when usage approaches limits.

Step 1: Add File Descriptor Monitoring Command

  1. Log in to Zuzia.app Dashboard

    • Access your Zuzia.app account
    • Navigate to your server
    • Click "Add Scheduled Task"
  2. Configure File Descriptor Check Command

    MAX=$(cat /proc/sys/fs/file-max)
    USED=$(cat /proc/sys/fs/file-nr | awk '{print $1}')
    echo "scale=2; $USED * 100 / $MAX" | bc
    
    • Set execution frequency (every 30-60 minutes)
    • Configure alerts when usage exceeds thresholds

Step 2: Configure Alert Thresholds

  • Warning: File descriptor usage > 70%
  • Critical: File descriptor usage > 85%
  • Emergency: File descriptor usage > 95%
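
The check command can also classify current usage against these thresholds so the alert condition is visible directly in the task output. The following is a minimal sketch; how the output triggers notifications depends on how you configure the task in Zuzia.app.

# Classify file descriptor usage against the 70/85/95% thresholds
MAX=$(cat /proc/sys/fs/file-max)
USED=$(awk '{print $1}' /proc/sys/fs/file-nr)
PERCENT=$((USED * 100 / MAX))
if   [ "$PERCENT" -ge 95 ]; then echo "EMERGENCY: file descriptor usage ${PERCENT}%"
elif [ "$PERCENT" -ge 85 ]; then echo "CRITICAL: file descriptor usage ${PERCENT}%"
elif [ "$PERCENT" -ge 70 ]; then echo "WARNING: file descriptor usage ${PERCENT}%"
else echo "OK: file descriptor usage ${PERCENT}%"
fi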

Step 3: Monitor Process Limits

Add a command to check process file descriptor limits:

# Check the file descriptor limit for a specific process (field 4 is the soft limit, field 5 the hard limit)
cat /proc/<pid>/limits | grep "open files" | awk '{print $4}'
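
To see how close a single process is to its own limit, compare its current descriptor count with the soft limit. The snippet below assumes you can read the process's /proc entries.

# Compare a process's open descriptors with its soft limit
PID=<pid>
LIMIT=$(grep "open files" /proc/$PID/limits | awk '{print $4}')
OPEN=$(ls /proc/$PID/fd 2>/dev/null | wc -l)
echo "PID $PID: $OPEN of $LIMIT descriptors in use"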

Best Practices for File Descriptor Monitoring

1. Monitor File Descriptor Usage Continuously

  • Track file descriptor usage regularly
  • Alert when usage approaches limits
  • Monitor usage trends over time (a logging sketch follows this list)
  • Plan capacity upgrades based on data
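
A simple way to build that trend data is to append a timestamped sample on every run; /var/log/fd-usage.log is an example path.

# Append a timestamped usage sample for later trend analysis (example log path)
echo "$(date -Is) $(cat /proc/sys/fs/file-nr)" >> /var/log/fd-usage.log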

2. Monitor Process File Descriptors

  • Track file descriptor usage per process
  • Identify processes with many open files
  • Monitor file descriptor leaks
  • Optimize process file descriptor usage

3. Set Appropriate Limits

  • Configure limits based on application requirements
  • Monitor usage vs limits
  • Adjust limits based on actual usage
  • Plan upgrades before limits are reached

4. Optimize File Descriptor Usage

  • Close file descriptors properly
  • Monitor for file descriptor leaks (see the sampling sketch after this list)
  • Optimize application file handling
  • Implement file descriptor pooling
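
A leak typically shows up as a descriptor count that only grows. The sketch below samples one process twice and reports the difference; the process name and the 60-second interval are examples.

# Sample a process's descriptor count twice to spot a leak ("myapp" and 60 s are examples)
PID=$(pgrep -o myapp)
BEFORE=$(ls /proc/$PID/fd | wc -l)
sleep 60
AFTER=$(ls /proc/$PID/fd | wc -l)
echo "PID $PID: $BEFORE -> $AFTER open descriptors"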

Troubleshooting File Descriptor Issues

Step 1: Identify File Descriptor Problems

When file descriptor usage approaches the limit:

# Check current usage
cat /proc/sys/fs/file-nr

# Check system limit
cat /proc/sys/fs/file-max

# Find processes with many open files (the PID is the second lsof column)
lsof | awk 'NR > 1 {print $2}' | sort | uniq -c | sort -rn | head -10
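
Once a suspect PID is known, grouping its lsof entries by type (regular files, sockets, pipes, and so on) often shows what is leaking; the type is the fifth column of lsof output.

# Group a process's lsof entries by type to see what dominates
lsof -p <pid> | awk 'NR > 1 {print $5}' | sort | uniq -c | sort -rn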

Step 2: Resolve File Descriptor Issues

Based on investigation:

  1. Increase File Descriptor Limits:

    # Increase the system-wide limit (requires root; the sysctl.conf entry makes it persistent)
    echo "fs.file-max = 1000000" >> /etc/sysctl.conf
    sysctl -p
    
    # Increase the limit for the current shell (non-root users cannot exceed the hard limit)
    ulimit -n 65536
    
  2. Fix File Descriptor Leaks:

    • Review application code
    • Fix unclosed file descriptors
    • Optimize file handling
  3. Optimize File Descriptor Usage:

    • Close file descriptors properly
    • Implement file descriptor pooling
    • Optimize application file handling

FAQ: Common Questions About File Descriptor Limits

How often should I check file descriptor limits?

For production servers, continuous automated monitoring is essential. Zuzia.app can check file descriptor usage every 30-60 minutes, alerting you when usage approaches limits.

What is considered high file descriptor usage?

High file descriptor usage depends on your system limit. Generally, usage above 70% of the limit indicates potential issues and should be investigated.

How do I increase file descriptor limits?

Increase file descriptor limits by editing /etc/sysctl.conf for the system-wide limit (applied with sysctl -p), or /etc/security/limits.conf for per-user limits (applied at the next login). Services managed by systemd use the LimitNOFILE directive in their unit files instead.
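
For example, per-user limits go in /etc/security/limits.conf, while a systemd service reads LimitNOFILE from its unit file or a drop-in. The user name "www-data", the service name "myapp", and the values below are examples; all commands require root.

# Per-user limits (example user "www-data"; takes effect at the next login)
echo "www-data soft nofile 65536" >> /etc/security/limits.conf
echo "www-data hard nofile 65536" >> /etc/security/limits.conf

# systemd service limit (example service "myapp")
mkdir -p /etc/systemd/system/myapp.service.d
printf "[Service]\nLimitNOFILE=65536\n" > /etc/systemd/system/myapp.service.d/limits.conf
systemctl daemon-reload && systemctl restart myapp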

Can file descriptor monitoring impact server performance?

The commands used here mostly read small files under /proc, which has negligible overhead. A full system-wide lsof scan is heavier on busy servers, so prefer per-process checks (lsof -p or /proc/<pid>/fd) and a moderate frequency such as every 30-60 minutes.

Note: The content above is part of our brainstorming and planning process. Not all described features are yet available in the current version of Zuzia.

If you'd like to achieve what's described in this article, please contact us – we'd be happy to work on it and tailor the solution to your needs.

In the meantime, we invite you to try out Zuzia's current features – server monitoring, SSL checks, task management, and many more.
