Complete Guide to Bash tail Command

Introduction to tail

The tail command is a fundamental Unix/Linux utility used to display the last part of files. It's essential for log file monitoring, real-time file watching, and extracting data from the end of files. Understanding tail thoroughly is crucial for system administration, debugging, and log analysis.

Key Concepts

  • Default Behavior: Shows last 10 lines of a file
  • Real-time Monitoring: Follow mode for live updates
  • Multiple Files: Can display tails of several files simultaneously
  • Byte-based Output: Can work with bytes instead of lines
  • Integration: Often used in pipelines with other commands
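The default behavior is easy to verify from the shell (a quick sketch; nothing assumed beyond coreutils):

```shell
# With no options, tail prints the last 10 lines of its input
seq 1 20 | tail        # prints 11 through 20
seq 1 20 | tail -n 10  # identical: -n 10 is the default
```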

1. Basic Usage

Displaying Last Lines

#!/bin/bash
# Create a sample file
seq 1 100 > numbers.txt
# Basic tail - last 10 lines (default)
tail numbers.txt
# Specify number of lines
tail -n 20 numbers.txt
tail --lines=20 numbers.txt
# Shorthand for 20 lines
tail -20 numbers.txt  # Legacy shorthand; equivalent to -n 20
# Start from a specific line
tail -n +50 numbers.txt  # Show lines 50 to end
# Show last N bytes
tail -c 100 numbers.txt  # Last 100 bytes
tail --bytes=100 numbers.txt
# Show from a specific byte
tail -c +500 numbers.txt  # From byte 500 to end
# Multiple files
tail file1.txt file2.txt file3.txt
# With headers
tail -v numbers.txt  # Always show header
tail -q numbers.txt  # Never show header (quiet)
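The `-n +K` form is easy to misread: it means "start output at line K", not "show the last K lines". A quick sketch with a temporary file:

```shell
# -n +K starts AT line K, so lines 50 through 100 are printed (51 lines)
tmp=$(mktemp)
seq 1 100 > "$tmp"
tail -n +50 "$tmp" | head -n 1   # first line printed is 50
tail -n +50 "$tmp" | wc -l       # 51
rm -f "$tmp"
```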

Common Examples

#!/bin/bash
# View last 20 lines of a log file
tail -20 /var/log/syslog
# Combine with other commands
tail -n 50 app.log | grep ERROR
# Monitor multiple log files
tail -f /var/log/nginx/access.log /var/log/nginx/error.log
# Extract last 1000 lines and save
tail -n 1000 large_log.txt > last_1000_lines.txt
# View last 5 lines of multiple files with headers
tail -v -n 5 file1.txt file2.txt file3.txt
# Show last 1KB of a file
tail -c 1024 bigfile.dat
# Display last 10 lines and follow (legacy form; same as tail -n 10 -f)
tail -10f app.log
# Skip first 100 lines, show the rest
tail -n +101 data.txt

2. Essential Options

Common Options Reference

#!/bin/bash
# -n, --lines: Number of lines
tail -n 50 file.txt
tail --lines=50 file.txt
# -c, --bytes: Number of bytes
tail -c 1024 file.txt
tail --bytes=1K file.txt  # GNU extension (1 Kilobyte)
# -f, --follow: Follow file as it grows
tail -f logfile.log
# -F: Follow with retry (for log rotation)
tail -F /var/log/syslog
# -q, --quiet: Never print headers
tail -q file1.txt file2.txt
# -v, --verbose: Always print headers
tail -v file.txt
# --pid: Stop following when process dies
tail -f --pid=1234 logfile.log
# -s, --sleep-interval: Sleep between checks
tail -f -s 2 logfile.log  # Check every 2 seconds
# --retry: Keep trying if file becomes inaccessible
tail --retry -f logfile.log
# -z, --zero-terminated: Line delimiter is NUL, not newline
tail -z file.txt
# Combine options
tail -fn 50 app.log
tail -F -s 5 /var/log/system.log
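The GNU size suffixes accepted by `-c` are 1024-based, which is worth confirming before relying on them in scripts:

```shell
# 1K means 1024 bytes, not 1000 (GNU coreutils)
printf 'x%.0s' $(seq 1 2048) | tail -c 1K | wc -c   # 1024
```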

Option Combinations

#!/bin/bash
# Real-time monitoring with line limit
tail -f -n 1000 app.log
# Follow with custom sleep interval
tail -f -s 0.5 logfile.log  # Check every 0.5 seconds
# Follow and truncate output
tail -f app.log | while read line; do
echo "[$(date)] $line"
done
# Multiple files with headers
tail -v -f access.log error.log
# Follow until process ends
long_running_process &
tail -f --pid=$! output.log
# Follow with byte limit
tail -f -c 1024 logfile.log
# Retry on log rotation
tail -F -n 0 /var/log/syslog  # Start from end, retry
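`-n 0` is what makes that last combination work: it suppresses all existing content, so with `-f`/`-F` only newly appended lines appear. Verifiable on its own:

```shell
# -n 0 prints none of the existing lines
seq 1 100 | tail -n 0 | wc -c   # 0
```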

3. Real-time Monitoring with -f

Basic Following

#!/bin/bash
# Simple follow mode
tail -f application.log
# Follow multiple files
tail -f /var/log/{syslog,auth.log,mail.log}
# Follow with timestamps
tail -f logfile.log | while read line; do
echo "$(date '+%Y-%m-%d %H:%M:%S') - $line"
done
# Colorize output while following
tail -f app.log | awk '
/ERROR/ {print "\033[31m" $0 "\033[0m"; next}
/WARN/  {print "\033[33m" $0 "\033[0m"; next}
/INFO/  {print "\033[32m" $0 "\033[0m"; next}
{print}
'
# Follow and filter
tail -f app.log | grep --line-buffered ERROR
# Follow with notification on pattern
tail -f logfile.log | while read line; do
if echo "$line" | grep -q "ERROR"; then
notify-send "Error detected" "$line"
fi
done
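Follow mode can also be exercised non-interactively, by appending from a background job and bounding the follow with `timeout` (a sketch; the 1-second timeout exists only for the demo):

```shell
# Prove that -f picks up lines appended after tail has started
tmp=$(mktemp)
echo "first" > "$tmp"
( sleep 0.2; echo "second" >> "$tmp" ) &   # writer appends later
timeout 1 tail -f "$tmp" || true           # shows both lines; timeout exits 124
wait
rm -f "$tmp"
```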

Advanced Following Techniques

#!/bin/bash
# Monitor log and trigger actions
monitor_log() {
local logfile="$1"
local pattern="$2"
local command="$3"
tail -F "$logfile" | while read line; do
if echo "$line" | grep -q "$pattern"; then
echo "Pattern detected: $line"
eval "$command"
fi
done
}
# Usage
monitor_log "/var/log/auth.log" "Failed password" "echo 'Security alert!' | mail -s 'Alert' admin"
# Follow with aggregation
tail -f access.log | awk '
{
ip[$1]++
if (ip[$1] > 100) {
print "High traffic from:", $1
}
}'
# Follow multiple logs with labels
tail -f /var/log/nginx/access.log /var/log/nginx/error.log | awk '
FILENAME ~ /access/ {print "[ACCESS]", $0}
FILENAME ~ /error/  {print "[ERROR] ", $0}
'
# Follow with rate limiting
tail -f app.log | awk '
{
current_time = systime()
if (current_time - last_time > 60) {
print "--- Minute summary ---"
print "Lines:", count
count = 0
last_time = current_time
}
count++
}
'

4. Working with Multiple Files

Displaying Multiple Files

#!/bin/bash
# Basic multiple file display
tail file1.txt file2.txt file3.txt
# With custom line count for each
tail -n 20 file1.txt file2.txt
# Headers automatically included
tail -v file1.txt file2.txt  # Force headers
tail -q file1.txt file2.txt  # Suppress headers
# Follow multiple files
tail -f log1.log log2.log log3.log
# Format output
tail -q -n 5 file1.txt file2.txt | awk '
BEGIN {file=""}
FILENAME != file {file=FILENAME; print "\n==> " file " <=="}
{print}
'
# Process each file separately
for file in *.log; do
echo "=== $file ==="
tail -n 5 "$file"
done
# Combine tails into one
tail -q -n 100 *.log > combined_last_100_lines.txt
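The header rules in one place: headers appear automatically when multiple files are given, `-q` suppresses them, and `-v` forces one even for a single file. A check:

```shell
a=$(mktemp); b=$(mktemp)
seq 1 3 > "$a"; seq 4 6 > "$b"
tail -n 1 "$a" "$b"      # "==> ... <==" header before each file
tail -q -n 1 "$a" "$b"   # just the two lines: 3 and 6
rm -f "$a" "$b"
```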

Comparing Multiple Files

#!/bin/bash
# Compare tails of files
compare_tails() {
local files=("$@")
local lines=10
for file in "${files[@]}"; do
echo "=== $file ==="
tail -n "$lines" "$file"
echo
done
}
# Show differences in file ends
diff <(tail -n 50 file1.txt) <(tail -n 50 file2.txt)
# Monitor multiple logs with context
watch_logs() {
local files=("$@")
while true; do
clear
date
for file in "${files[@]}"; do
echo "--- $file ---"
tail -n 20 "$file"
echo
done
sleep 2
done
}
# Usage
watch_logs /var/log/syslog /var/log/auth.log

5. Byte-based Operations

Using -c Option

#!/bin/bash
# Last N bytes
tail -c 100 file.txt
tail --bytes=100 file.txt
# Human-readable sizes (GNU extension)
tail -c 1K file.txt   # 1024 bytes
tail -c 1M file.txt   # 1048576 bytes
tail -c 1G file.txt   # 1073741824 bytes
# From specific byte position
tail -c +1000 file.txt  # From byte 1000 to end
# Extract binary data (seek with tail, trim with head; repeating -c
# does not combine -- the last -c simply wins)
tail -c +513 image.jpg | head -c 512  # 512 bytes starting at byte 513
# Check file size and tail appropriately
file_size=$(stat -c%s largefile.dat)
bytes_to_show=$((file_size / 10))  # Show last 10%
tail -c "$bytes_to_show" largefile.dat
# Working with non-text files
tail -c 1000 binary_file.bin | hexdump -C
tail -c +1000 binary_file.bin | head -c 500 > chunk.bin
# Combine with head to extract a middle portion
head -c 2000 file.bin | tail -c 500  # Bytes 1501-2000
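The seek-then-trim pattern generalizes to any byte range: `tail -c +K` positions the start, `head -c N` limits the length. A byte-exact check:

```shell
# Bytes 3..6 of a 10-byte string
printf 'abcdefghij' | tail -c +3 | head -c 4   # cdef
```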

Practical Byte Examples

#!/bin/bash
# Extract footer from file
extract_footer() {
local file="$1"
local footer_size="${2:-512}"
tail -c "$footer_size" "$file" > footer.bin
echo "Extracted $footer_size bytes footer"
}
# Check file signature
check_signature() {
local file="$1"
local expected_sig="$2"
local sig=$(tail -c 4 "$file" | hexdump -v -e '4/1 "%02x"')
if [ "$sig" = "$expected_sig" ]; then
echo "Signature matches"
else
echo "Signature mismatch: $sig != $expected_sig"
fi
}
# Split file into header and footer
split_file() {
local file="$1"
local header_size="${2:-1024}"
local total_size=$(stat -c%s "$file")
local footer_size=$((total_size - header_size))
head -c "$header_size" "$file" > "${file}.header"
tail -c "$footer_size" "$file" > "${file}.footer"
echo "Split $file into header and footer"
}
# Show file encoding detection from end
tail -c 1000 file.txt | file -

6. Integration with Other Commands

Piping with tail

#!/bin/bash
# Filter tail output
tail -f app.log | grep ERROR
# Process tail output line by line
tail -n 1000 access.log | awk '{print $1}' | sort | uniq -c | sort -rn
# Count occurrences in tail
tail -n 1000 app.log | grep -c ERROR
# Transform tail output
tail -f logfile.log | sed 's/ERROR/\x1b[31m&\x1b[0m/g'
# Multiple pipes
tail -n 5000 bigfile.txt | sort | uniq | head -20
# Use with xargs
tail -n 100 filelist.txt | xargs ls -la
# Create pipeline for real-time analysis
tail -f access.log | awk '
{requests[$7]++}
{print "\033[2J\033[H"; for (r in requests) printf "%s: %d\n", r, requests[r]}
'
# Monitor and alert
tail -f logfile.log | while read line; do
if echo "$line" | grep -q "CRITICAL"; then
echo "$line" | mail -s "Critical Alert" [email protected]
fi
done
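The access-log pipeline above is worth tracing on synthetic data so each stage's contribution is visible (hypothetical log lines; the first field is the client IP):

```shell
tmp=$(mktemp)
printf '1.1.1.1 GET /\n2.2.2.2 GET /\n1.1.1.1 GET /x\n' > "$tmp"
# last N lines -> extract IP -> sort for uniq -> count -> busiest first
tail -n 1000 "$tmp" | awk '{print $1}' | sort | uniq -c | sort -rn
rm -f "$tmp"
```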

Using with Other Commands

#!/bin/bash
# Find most recent files and show their tails
find . -name "*.log" -type f -mmin -60 | xargs tail -n 50
# Watch directory for new files and tail them
inotifywait -m /var/log -e create | while read path action file; do
if [[ "$file" =~ \.log$ ]]; then
tail -f "$path/$file" &
fi
done
# Combine with watch
watch -n 5 'tail -n 20 /var/log/syslog'
# Use with ssh to monitor remote logs
ssh user@server "tail -f /var/log/app.log"
# Monitor multiple remote logs
for host in server1 server2 server3; do
ssh "$host" "tail -f /var/log/app.log" | sed "s/^/$host: /" &
done
# Aggregate logs from multiple sources
tail -f /var/log/*.log | awk '{print strftime("%Y-%m-%d %H:%M:%S"), $0}'

7. Working with Compressed Files

Using with Compression Tools

#!/bin/bash
# View end of compressed log
gzcat access.log.gz | tail -n 50  # BSD/macOS
zcat access.log.gz | tail -n 50   # Linux (GNU gzip)
# Follow compressed file (uncompress on the fly)
while true; do
gzcat logfile.gz | tail -n 1
sleep 60
done
# Tail multiple compressed files
for file in *.gz; do
echo "=== $file ==="
zcat "$file" | tail -n 10
done
# Monitor compressed log as it grows
tail_compressed() {
local file="$1"
local lines="${2:-10}"
if [[ "$file" =~ \.gz$ ]]; then
zcat "$file" | tail -n "$lines"
else
tail -n "$lines" "$file"
fi
}
# Tail rotated logs with compression
tail_rotated() {
local base="$1"
local lines="${2:-50}"
# Get current log
if [ -f "$base" ]; then
tail -n "$lines" "$base"
fi
# Check compressed rotated logs
for log in "$base".[0-9]*.gz; do
if [ -f "$log" ]; then
echo "=== $log ==="
zcat "$log" | tail -n "$lines"
fi
done
}
# Usage
tail_rotated "/var/log/syslog" 20
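Because `zcat` (equivalent to `gzip -dc`) streams the decompressed bytes to stdout, `tail` never needs to know the file was compressed. A round-trip check:

```shell
tmp=$(mktemp)
seq 1 100 | gzip > "$tmp.gz"
zcat "$tmp.gz" | tail -n 3   # 98 99 100
rm -f "$tmp" "$tmp.gz"
```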

8. Performance and Large Files

Efficient Large File Handling

#!/bin/bash
# For very large files, tail is efficient (doesn't read whole file)
tail -n 1000000 hugefile.txt > last_million.txt
# Time comparison
time tail -n 1000 largefile.txt > /dev/null
time head -n -1000 largefile.txt > /dev/null  # All but the last 1000 lines (GNU-only; reads the whole file)
# Use with dd for binary files (compute the size in MB first)
total_mb=$(( $(stat -c%s largefile.dat) / 1024 / 1024 ))
dd if=largefile.dat bs=1M skip=$((total_mb - 10)) 2>/dev/null | tail -c 1M
# Check file size before tailing
file_size=$(stat -c%s largefile.txt)
if [ $file_size -gt 100000000 ]; then
echo "File is very large, tailing last 1000 lines only"
tail -n 1000 largefile.txt
else
cat largefile.txt
fi
# Memory-efficient processing
tail -f hugefile.log | while read line; do
# Process one line at a time
process_line "$line"
done
# Limit memory usage with buffer
tail -n 100000 hugefile.txt | split -l 10000 - chunk_

Performance Optimization

#!/bin/bash
# Benchmark different approaches
benchmark_tail() {
local file="testfile.dat"
# Create a text test file (dd from /dev/zero emits no newlines,
# which would make the line-based timings meaningless)
seq 1 2000000 > "$file"
echo "=== Tail Performance ==="
time tail -n 1000 "$file" > /dev/null
time tail -c 1M "$file" > /dev/null
# Alternatives must scan the file from the start
time sed -n '1999001,$p' "$file" > /dev/null
time awk 'NR > 1999000' "$file" > /dev/null
rm "$file"
}
# Put tail first when you only need the end of the file:
# tail seeks backwards from the end, while grep must read everything.
# Note the two orderings also mean different things.
tail -n 1000000 large.log | grep ERROR   # errors within the last million lines (fast)
grep ERROR large.log | tail -n 1000000   # last million matching lines (full scan)
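The two orderings answer different questions, so they are not interchangeable even apart from performance. A small check makes the distinction concrete:

```shell
tmp=$(mktemp)
printf 'X1\nY\nX2\nY\nX3\n' > "$tmp"
tail -n 2 "$tmp" | grep -c X   # 1: only X3 falls in the last two lines
grep -c X "$tmp"               # 3: matches anywhere in the file
rm -f "$tmp"
```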

9. Script Examples

Log Monitoring Script

#!/bin/bash
# Comprehensive log monitor
LOG_DIR="/var/log"
ALERT_EMAIL="[email protected]"
TMP_FILE="/tmp/log_monitor.$$"
# Configuration
declare -A LOG_PATTERNS=(
["error"]="ERROR|Error|error|Failed|failed"
["auth"]="Failed password|authentication failure|Invalid user"
["conn"]="Connection refused|timeout|unreachable"
["disk"]="No space left|I/O error|disk full"
)
declare -A LOG_ACTIONS=(
["error"]="send_alert"
["auth"]="log_security"
["disk"]="check_disk"
)
monitor_logs() {
local log_files=("$@")
declare -gA pids  # associative array: log paths contain slashes
for log in "${log_files[@]}"; do
if [ ! -f "$log" ]; then
echo "Warning: $log not found"
continue
fi
echo "Monitoring: $log"
tail -F "$log" | while read line; do
timestamp=$(date '+%Y-%m-%d %H:%M:%S')
# Check each pattern
for pattern_name in "${!LOG_PATTERNS[@]}"; do
pattern="${LOG_PATTERNS[$pattern_name]}"
if echo "$line" | grep -qE "$pattern"; then
handle_match "$pattern_name" "$log" "$line" "$timestamp"
fi
done
done &
pids[$log]=$!
done
# Wait for all tail processes
for pid in "${pids[@]}"; do
wait "$pid"
done
}
handle_match() {
local pattern="$1"
local log="$2"
local line="$3"
local timestamp="$4"
echo "[$timestamp] $pattern detected in $log: $line"
# Execute action if defined
if [ -n "${LOG_ACTIONS[$pattern]}" ]; then
"${LOG_ACTIONS[$pattern]}" "$pattern" "$log" "$line" "$timestamp"
fi
}
send_alert() {
local pattern="$1"
local log="$2"
local line="$3"
local timestamp="$4"
cat << EOF | mail -s "Alert: $pattern in $log" "$ALERT_EMAIL"
Time: $timestamp
File: $log
Pattern: $pattern
Line: $line
EOF
}
log_security() {
local line="$3"
echo "$line" >> /var/log/security_alerts.log
}
check_disk() {
df -h | mail -s "Disk Alert" "$ALERT_EMAIL"
}
# Main
log_files=(
"/var/log/syslog"
"/var/log/auth.log"
"/var/log/nginx/error.log"
"/var/log/mysql/error.log"
)
# Clean up background tails on exit (install the trap before starting them)
trap 'kill "${pids[@]}" 2>/dev/null' EXIT
monitor_logs "${log_files[@]}"

Log Rotation Helper

#!/bin/bash
# Helper script for managing log rotation with tail
ROTATE_DIR="/var/log/archive"
CURRENT_LOG="/var/log/application.log"
monitor_with_rotation() {
local log="$1"
local rotate_after="${2:-86400}"  # 24 hours default
echo "Monitoring $log with rotation every $rotate_after seconds"
while true; do
# Start tail in background
tail -F "$log" &
tail_pid=$!
# Wait for rotation interval
sleep "$rotate_after"
# Rotate log
if [ -f "$log" ]; then
archive_name="$ROTATE_DIR/$(basename "$log").$(date +%Y%m%d-%H%M%S)"
mv "$log" "$archive_name"
touch "$log"
echo "Rotated $log to $archive_name"
fi
# Kill old tail and restart
kill "$tail_pid" 2>/dev/null
wait "$tail_pid" 2>/dev/null
done
}
# Monitor with automatic reconnection
tail_with_retry() {
local file="$1"
local retry_interval="${2:-5}"
while true; do
if [ -f "$file" ]; then
echo "Tailing $file (PID: $$)"
tail -F "$file"
else
echo "Waiting for $file to appear..."
fi
sleep "$retry_interval"
done
}
# Usage
# monitor_with_rotation "$CURRENT_LOG" 3600
# tail_with_retry "/var/log/intermittent.log"

10. Edge Cases and Error Handling

Common Issues and Solutions

#!/bin/bash
# Issue: File doesn't exist yet
# Solution: -F waits for the file to appear, then follows it
if [ -f "app.log" ]; then
tail -f app.log
else
echo "Log file not found; tail -F will wait for it"
tail -F app.log
fi
# Issue: Permission denied
# Solution: Check permissions and use sudo if needed
if [ -r "/var/log/syslog" ]; then
tail -f /var/log/syslog
else
sudo tail -f /var/log/syslog
fi
# Issue: Raw binary output garbling the terminal
# Solution: Pipe through hexdump or cat -v instead of printing raw bytes
tail -c 256 binary.dat | hexdump -C
# Issue: Very long lines flooding the display
# Solution: Truncate each line with cut
tail -f app.log | cut -c1-200
# Issue: File truncated while following
# Solution: Use -F which handles truncation
tail -F /var/log/application.log
# Issue: No newline at end of file
# Solution: tail handles this gracefully
tail -n 1 file_without_newline.txt
# Issue: Race conditions in log rotation
# Solution: Use -F and handle signals
tail_with_rotation() {
local file="$1"
while true; do
tail -F "$file" &
TAIL_PID=$!
inotifywait -e move_self "$file" 2>/dev/null
kill "$TAIL_PID"
wait "$TAIL_PID" 2>/dev/null
done
}
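One of the edge cases above, the missing final newline, is easy to confirm: tail counts the unterminated fragment as a line and emits it byte-for-byte:

```shell
printf 'one\ntwo' | tail -n 1   # two (no trailing newline is added)
```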

Error Handling in Scripts

#!/bin/bash
# Robust tail wrapper
safe_tail() {
local file="$1"
local lines="${2:-10}"
local follow="${3:-false}"
# Validate input
if [ -z "$file" ]; then
echo "Error: No file specified" >&2
return 1
fi
# Check if file exists
if [ ! -e "$file" ]; then
echo "Error: File $file does not exist" >&2
return 1
fi
# Check if readable
if [ ! -r "$file" ]; then
echo "Error: Cannot read $file (permission denied)" >&2
return 1
fi
# Check if it's a regular file
if [ ! -f "$file" ] && [ ! -p "$file" ]; then
echo "Error: $file is not a regular file or pipe" >&2
return 1
fi
# Execute tail with appropriate options
if [ "$follow" = "true" ]; then
tail -F -n "$lines" "$file"
else
tail -n "$lines" "$file"
fi
}
# Usage with error handling
if ! safe_tail "/var/log/app.log" 50 true; then
echo "Failed to tail log file" >&2
exit 1
fi
# Monitor with timeout
timeout_tail() {
local file="$1"
local timeout="${2:-30}"
local lines="${3:-10}"
timeout "$timeout" tail -f -n "$lines" "$file" 2>/dev/null
local exit_code=$?
if [ $exit_code -eq 124 ]; then
echo "Tail timed out after $timeout seconds" >&2
fi
return $exit_code
}
# Retry logic
tail_with_retry() {
local file="$1"
local max_retries="${2:-5}"
local retry_count=0
while [ $retry_count -lt $max_retries ]; do
if [ -f "$file" ] && [ -r "$file" ]; then
tail -F "$file"
return 0
fi
retry_count=$((retry_count + 1))
echo "Attempt $retry_count: $file not accessible, retrying in 5s..." >&2
sleep 5
done
echo "Max retries reached, giving up" >&2
return 1
}

11. Advanced Techniques

Custom Tail Implementations

#!/bin/bash
# Pure Bash tail implementation
bash_tail() {
local file="$1"
local nlines="${2:-10}"
local lines=()
if [ ! -f "$file" ]; then
echo "File not found: $file" >&2
return 1
fi
while IFS= read -r line; do
lines+=("$line")
if [ ${#lines[@]} -gt $nlines ]; then
lines=("${lines[@]:1}")
fi
done < "$file"
printf '%s\n' "${lines[@]}"
}
# Efficient tail for large files: seek backwards in blocks until
# enough newlines have been found, then trim the excess
seek_tail() {
local file="$1"
local lines="${2:-10}"
local block_size=4096
local file_size=$(stat -c%s "$file" 2>/dev/null)
local position=$file_size
local chunk=""
while [ "$position" -gt 0 ]; do
position=$((position - block_size))
[ "$position" -lt 0 ] && position=0
chunk=$(tail -c +"$((position + 1))" "$file")
[ "$(printf '%s\n' "$chunk" | wc -l)" -ge "$lines" ] && break
done
printf '%s\n' "$chunk" | tail -n "$lines"
}
# Follow implementation in pure Bash
bash_follow() {
local file="$1"
local last_size=0
while true; do
if [ -f "$file" ]; then
current_size=$(stat -c%s "$file" 2>/dev/null)
if [ $current_size -gt $last_size ]; then
dd if="$file" bs=1 skip="$last_size" 2>/dev/null
last_size=$current_size
fi
fi
sleep 1
done
}
# Usage
# bash_tail "/var/log/syslog" 20
# bash_follow "/var/log/app.log"
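A quick way to gain confidence in a hand-rolled implementation like `bash_tail` is to diff it against the real command (the function is re-defined here so the check is self-contained):

```shell
#!/bin/bash
# Ring-buffer tail in pure Bash, checked against tail(1)
bash_tail() {
  local file="$1" nlines="${2:-10}"
  local lines=()
  while IFS= read -r line; do
    lines+=("$line")
    if (( ${#lines[@]} > nlines )); then
      lines=("${lines[@]:1}")   # drop the oldest line
    fi
  done < "$file"
  printf '%s\n' "${lines[@]}"
}
tmp=$(mktemp)
seq 1 100 > "$tmp"
[ "$(bash_tail "$tmp" 5)" = "$(tail -n 5 "$tmp")" ] && echo "match"
rm -f "$tmp"
```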

Pattern-based Tail

#!/bin/bash
# Tail until pattern (sed quits at the first match; the lingering
# tail receives SIGPIPE on its next write and exits)
tail_until_pattern() {
local file="$1"
local pattern="$2"
tail -f "$file" | sed "/$pattern/q"
echo "Pattern found, stopping."
}
# Tail from pattern
tail_from_pattern() {
local file="$1"
local pattern="$2"
local found=0
tail -f "$file" | while read line; do
if [ $found -eq 1 ]; then
echo "$line"
elif echo "$line" | grep -q "$pattern"; then
found=1
echo "Found pattern, showing from here:"
echo "$line"
fi
done
}
# Context-aware tail
tail_with_context() {
local file="$1"
local pattern="$2"
local before="${3:-5}"
local after="${4:-5}"
# Get line numbers of matches
matches=$(grep -n "$pattern" "$file" | cut -d: -f1)
for line_num in $matches; do
start=$((line_num - before))
[ $start -lt 1 ] && start=1
end=$((line_num + after))
echo "--- Context around line $line_num ---"
sed -n "${start},${end}p" "$file"
echo
done
}
# Usage
# tail_until_pattern "logfile.log" "Shutdown complete"
# tail_from_pattern "logfile.log" "Initialization complete"
# tail_with_context "app.log" "ERROR" 3 3

12. Best Practices and Tips

Shell Configuration

# ~/.bashrc additions
# Aliases for common tail operations
alias tl='tail -n 50'              # Last 50 lines
alias tlf='tail -f'                  # Follow
alias tlf100='tail -f -n 100'        # Follow last 100
alias tlF='tail -F'                   # Follow with retry
alias tla='tail -f /var/log/*.log'    # Follow all logs
# Function to tail with grep
tlg() {
tail -f "$1" | grep --line-buffered "${2:-ERROR}"
}
# Function to tail multiple logs with labels
tlmulti() {
for file in "$@"; do
tail -f "$file" | sed "s/^/[$file] /" &
done
wait
}
# Function to tail and timestamp
tldate() {
tail -f "$1" | while read line; do
echo "$(date '+%Y-%m-%d %H:%M:%S') - $line"
done
}
# Function to tail and highlight
tlcolor() {
tail -f "$1" | awk '
/ERROR/ {print "\033[31m" $0 "\033[0m"; next}
/WARN/  {print "\033[33m" $0 "\033[0m"; next}
/INFO/  {print "\033[32m" $0 "\033[0m"; next}
{print}
'
}
# Function to tail last N lines of all files in directory
tldir() {
local dir="${1:-.}"
local lines="${2:-20}"
find "$dir" -type f -name "*.log" -print0 | while IFS= read -r -d '' file; do
echo "=== $file ==="
tail -n "$lines" "$file"
echo
done
}
# Complete with file names
complete -f tlg tlmulti tldate tlcolor

Productivity Tips

#!/bin/bash
# 1. Monitor specific keywords in real-time
tail -f app.log | grep --line-buffered -E 'ERROR|WARN|CRITICAL'
# 2. Watch log with statistics
tail -f access.log | awk '
{requests[$1]++}
NR % 10 == 0 {system("clear"); for (ip in requests) print ip, requests[ip]}
'
# 3. Monitor log and send email on pattern
tail -f app.log | while read line; do
if echo "$line" | grep -q "FATAL"; then
echo "$line" | mail -s "Fatal error" [email protected]
fi
done
# 4. Combine tail with other monitoring tools
tail -f app.log | tee -a /var/log/app.log | logger -t app
# 5. Monitor multiple servers
for server in web{1..5}; do
ssh "$server" "tail -f /var/log/nginx/access.log" | sed "s/^/$server: /" &
done
# 6. Create log summary every minute
while true; do
echo "--- $(date) ---"
tail -n 100 app.log | grep ERROR | wc -l
sleep 60
done
# 7. Monitor disk usage while tailing
tail -f app.log &
watch -n 10 df -h
# 8. Check if file is being written
last_size=0
while true; do
size=$(stat -c%s app.log 2>/dev/null)
if [ "$size" != "$last_size" ]; then
echo "File size changed: $size bytes"
last_size=$size
fi
sleep 5
done

Common Patterns and Recipes

#!/bin/bash
# 1. Last N lines with line numbers
cat -n file.txt | tail -n 20
# 2. Show last lines of multiple files with filename
find . -name "*.log" -exec tail -n 20 {} \; -print
# 3. Monitor new files in directory
inotifywait -m /var/log -e create | while read dir action file; do
echo "New log file: $file"
tail -f "$dir/$file" &
done
# 4. Rotate log and continue tailing
rotate_and_tail() {
local log="$1"
local backup="$log.$(date +%Y%m%d-%H%M%S)"
cp "$log" "$backup"
> "$log"
echo "Rotated to $backup"
tail -f "$log"
}
# 5. Tail with buffer flush for pipes
tail -f app.log | stdbuf -oL grep ERROR | while read line; do
echo "Found error: $line"
done
# 6. Check if file is growing
check_growth() {
local file="$1"
local interval="${2:-60}"
local threshold="${3:-1024}"
size1=$(stat -c%s "$file" 2>/dev/null)
sleep "$interval"
size2=$(stat -c%s "$file" 2>/dev/null)
growth=$((size2 - size1))
if [ $growth -lt $threshold ]; then
echo "Warning: Slow log growth ($growth bytes/$interval sec)"
fi
}
# 7. Tail and rotate based on size
tail_and_rotate() {
local file="$1"
local max_size="${2:-100M}"
tail -F "$file" | while read line; do
echo "$line"
size=$(stat -c%s "$file")
if [ $size -gt $(numfmt --from=iec "$max_size") ]; then
mv "$file" "$file.old"
touch "$file"
echo "Rotated log file (size exceeded $max_size)"
fi
done
}
# 8. Monitor specific time window
monitor_time_window() {
local file="$1"
local start_time="$2"
local end_time="$3"
awk -v start="$start_time" -v end="$end_time" '
$0 ~ "^" start, $0 ~ "^" end {print}
' "$file" | tail -n 50
}

Conclusion

The tail command is an essential tool for any system administrator or developer working with log files and text data:

Key Takeaways

  1. Default Behavior: Shows last 10 lines of files
  2. Line Control: -n for custom line counts
  3. Byte Control: -c for byte-based operations
  4. Follow Mode: -f for real-time monitoring
  5. Multiple Files: Can display tails of several files
  6. Error Handling: -F for robust log rotation handling
  7. Performance: Efficient for large files
  8. Integration: Excellent for pipelines
  9. Scripting: Essential for log monitoring scripts
  10. Flexibility: Works with files, pipes, and special files

Best Practices Summary

  1. Use -F instead of -f for production log monitoring (handles rotation)
  2. Limit output with -n to avoid overwhelming displays
  3. Combine with grep for pattern filtering
  4. Use in pipelines for complex data processing
  5. Handle errors in scripts (permissions, missing files)
  6. Consider performance for very large files
  7. Use appropriate buffer sizes for real-time processing
  8. Clean up background processes in scripts
  9. Test with sample data before production
  10. Document complex pipelines for maintainability

The tail command's simplicity belies its power. When combined with other Unix tools and used in scripts, it becomes an indispensable part of system monitoring, log analysis, and data extraction workflows.
