Complete Guide to Bash head Command (Display Beginning of Files)

Introduction to head

The head command is a fundamental Unix/Linux utility used to display the first few lines of text files. By default, it shows the first 10 lines of each specified file, making it invaluable for quickly previewing file contents, checking file formats, and examining log files without loading entire files into memory.

Key Concepts

  • Default Behavior: Shows first 10 lines of each file
  • Multiple Files: Can display headers for multiple files
  • Byte Counting: Can display based on bytes instead of lines
  • Pipeline Integration: Commonly used in command pipelines
  • Efficient: Only reads necessary parts of files
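These defaults are easy to verify directly. A quick sanity check, using `seq` to generate a 25-line sample file (the temp-file path is arbitrary):

```shell
#!/bin/sh
# Generate 25 numbered lines, then confirm head's default behavior.
tmp=$(mktemp)
seq 1 25 > "$tmp"

# With no options, head prints exactly the first 10 lines
shown=$(head "$tmp" | wc -l | tr -d ' ')
last=$(head "$tmp" | tail -n 1)
echo "lines shown: $shown, last line: $last"

rm -f "$tmp"
```

Running this prints `lines shown: 10, last line: 10`.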

1. Basic Usage

Simple head Commands

# Display first 10 lines (default)
head file.txt
head -n 10 file.txt  # Explicitly specify 10 lines
# Display first N lines
head -n 5 file.txt   # First 5 lines
head -n 20 file.txt  # First 20 lines
# Using short option
head -5 file.txt     # First 5 lines (historic syntax; works in GNU and BSD head, not POSIX)
# Display multiple files
head file1.txt file2.txt
head -n 5 file1.txt file2.txt  # First 5 lines of each
# Examples
echo -e "Line 1\nLine 2\nLine 3\nLine 4\nLine 5" | head -3
# Output:
# Line 1
# Line 2
# Line 3

Common Use Cases

# Check log file header
head /var/log/syslog
# Preview CSV file before processing
head -2 data.csv
# Check script shebang
head -1 script.sh
# View configuration file beginning
head /etc/passwd
# Quick file inspection
head *.txt  # View start of all text files
# Combined with other commands
ls -la | head -5  # First 5 lines of the directory listing

2. Command Options

-n (Lines) Option

# Specify number of lines
head -n 15 file.txt          # First 15 lines
head -n 100 file.txt         # First 100 lines
# Negative numbers (skip lines at end) - GNU extension
head -n -5 file.txt          # Show all except last 5 lines
head -n -0 file.txt          # Show entire file
# Using long option
head --lines=20 file.txt
# Dynamic line count
head -n $(wc -l < file.txt) file.txt  # Show entire file
# Examples
# Show all except last 3 lines
seq 1 10 | head -n -3
# Output: 1 through 7, one per line
# Preview first and last parts of file
echo "First 5 lines:"
head -5 file.txt
echo "Last 5 lines:"
tail -5 file.txt
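Negative counts (`head -n -K`) are a GNU coreutils extension; BSD and macOS `head` reject them. Where portability matters, the same effect can be sketched with a small awk ring buffer (the `drop_last` name is ours):

```shell
#!/bin/sh
# Portable stand-in for GNU `head -n -K`: print all but the last K lines.
# awk keeps a K-line ring buffer and prints each line only once it is
# guaranteed not to be among the last K.
drop_last() {
    awk -v k="$1" 'NR > k { print buf[NR % k] } { buf[NR % k] = $0 }'
}

# seq 1 10 minus the last 3 lines -> 1..7
kept=$(seq 1 10 | drop_last 3 | wc -l | tr -d ' ')
final=$(seq 1 10 | drop_last 3 | tail -n 1)
echo "kept=$kept final=$final"
```

This prints `kept=7 final=7`, matching `seq 1 10 | head -n -3` on GNU systems.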

-c (Bytes) Option

# Display based on bytes
head -c 100 file.txt         # First 100 bytes
head -c 1K file.txt          # First 1 kilobyte (K/M/G suffixes are a GNU extension)
head -c 1M file.txt          # First 1 megabyte
# Negative bytes (skip at end) - GNU extension
head -c -50 file.txt         # All except last 50 bytes
# Using long option
head --bytes=200 file.txt
# Practical examples
# Preview binary file header
head -c 16 binary.bin | hexdump -C
# Extract file signature
SIGNATURE=$(head -c 8 file.bin)
# Check if file starts with specific pattern
if [ "$(head -c 4 file.bin)" = "$(printf '\177ELF')" ]; then
    echo "This is an ELF executable"
fi
# Get exact number of bytes
head -c 512 file.dat > header.dat
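Raw-byte comparisons like the ELF check above are fragile in shell; comparing a hex dump of the leading bytes is more robust. A sketch using a generated temp file carrying a PNG signature (octal escapes are the POSIX-portable form for `printf`):

```shell
#!/bin/sh
# Write the 8-byte PNG signature, then identify the file from its
# first 4 bytes via od's hex output.
tmp=$(mktemp)
printf '\211PNG\r\n\032\n' > "$tmp"

sig=$(head -c 4 "$tmp" | od -An -tx1 | tr -d ' \n')
case "$sig" in
    89504e47) verdict="PNG image" ;;
    7f454c46) verdict="ELF executable" ;;
    *)        verdict="unknown ($sig)" ;;
esac
echo "$verdict"
rm -f "$tmp"
```

This prints `PNG image`.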

Multiple Files with Headers

# Headers are automatically added for multiple files
head file1.txt file2.txt file3.txt
# Output:
# ==> file1.txt <==
# line1
# ...
# ==> file2.txt <==
# line1
# ...
# Suppress headers with -q
head -q file1.txt file2.txt  # No headers
# Always show headers with -v
head -v singlefile.txt        # Show header even for single file
# Custom header format
for file in *.txt; do
    echo "=== $file ==="
    head -3 "$file"
    echo
done

-q and -v Options

# -q (quiet) - never print headers
head -q file1.txt file2.txt > combined.txt
# -v (verbose) - always print headers
head -v single.txt
# Output:
# ==> single.txt <==
# line1
# line2
# Useful in scripts
for file in "$@"; do
    head -v "$file"
    echo
done
# Quiet mode for processing
process_files() {
    head -q *.log | grep ERROR | sort | uniq -c
}

3. Practical Examples

Log File Inspection

#!/bin/bash
# Quick log check
check_logs() {
    local logfile="$1"
    echo "=== Recent Log Entries (first 20 lines) ==="
    head -20 "$logfile"
    echo "=== Critical Errors ==="
    head -100 "$logfile" | grep -i "ERROR\|CRITICAL" | head -10
}
# Monitor new logs with head
watch_log_start() {
    local logfile="$1"
    local lines="${2:-50}"
    echo "Last run: $(date)"
    echo "First $lines lines of $logfile:"
    echo "------------------------"
    head -"$lines" "$logfile"
}
# Check log rotation
check_log_rotation() {
    local logbase="$1"
    for log in "$logbase"*; do
        if [ -f "$log" ]; then
            echo "=== $log (first 5 lines) ==="
            head -5 "$log"
            echo
        fi
    done
}
# Usage examples
# check_logs /var/log/syslog
# watch_log_start /var/log/apache2/access.log 100

File Format Verification

#!/bin/bash
# Check file signatures
check_file_signature() {
    local file="$1"
    local signature=$(head -c 4 "$file" | od -An -tx1)
    case "$signature" in
        *"7f 45 4c 46"*) echo "ELF executable" ;;
        *"89 50 4e 47"*) echo "PNG image" ;;
        *"ff d8 ff e0"*) echo "JPEG image" ;;
        *"25 50 44 46"*) echo "PDF document" ;;
        *"1f 8b 08"*)    echo "GZIP compressed" ;;
        *)               echo "Unknown format: $signature" ;;
    esac
}
# Verify CSV header
verify_csv() {
    local file="$1"
    local expected_header="$2"
    local actual_header=$(head -1 "$file")
    if [ "$actual_header" = "$expected_header" ]; then
        echo "CSV header matches expected"
    else
        echo "Warning: CSV header mismatch"
        echo "Expected: $expected_header"
        echo "Actual: $actual_header"
    fi
}
# Check text file encoding
check_encoding() {
    local file="$1"
    # Read 4 bytes (UTF-32 BOMs are 4 bytes); test longer patterns first
    local bom=$(head -c 4 "$file" | od -An -tx1)
    case "$bom" in
        *"ef bb bf"*)    echo "UTF-8 with BOM" ;;
        *"00 00 fe ff"*) echo "UTF-32 BE" ;;
        *"ff fe 00 00"*) echo "UTF-32 LE" ;;
        *"fe ff"*)       echo "UTF-16 BE" ;;
        *"ff fe"*)       echo "UTF-16 LE" ;;
        *)               echo "No BOM detected (likely UTF-8)" ;;
    esac
}

Data Sampling

#!/bin/bash
# Sample data for analysis
sample_data() {
    local file="$1"
    local sample_size="${2:-10}"
    {
        echo "=== Data Sample (first $sample_size records) ==="
        head -n "$sample_size" "$file"
        echo
        echo "=== Summary ==="
        echo "Total lines: $(wc -l < "$file")"
        echo "Sample shows: $(( 100 * sample_size / $(wc -l < "$file") ))% of data"
    }
}
# Preview CSV with formatting
preview_csv() {
    local file="$1"
    local lines="${2:-5}"
    head -n "$lines" "$file" | column -t -s ','
}
# Sample log entries by severity
sample_by_severity() {
    local file="$1"
    echo "=== Log Sample ==="
    echo "ERROR entries:"
    grep -c "ERROR" "$file"
    head -5 "$file" | grep "ERROR" || echo "  No errors in first 5 lines"
    echo
    echo "First 5 lines:"
    head -5 "$file"
}

4. Scripting with head

Input Validation

#!/bin/bash
# Validate file before processing
validate_file() {
    local file="$1"
    # Check if file exists
    if [ ! -f "$file" ]; then
        echo "Error: File not found: $file"
        return 1
    fi
    # Check file is not empty
    if [ ! -s "$file" ]; then
        echo "Warning: File is empty: $file"
        return 2
    fi
    # Preview file
    echo "File: $file"
    echo "Size: $(du -h "$file" | cut -f1)"
    echo "First 3 lines:"
    head -3 "$file"
    return 0
}
# Check if file has minimum lines
has_minimum_lines() {
    local file="$1"
    local min_lines="${2:-1}"
    local actual_lines=$(head -n "$min_lines" "$file" 2>/dev/null | wc -l)
    [ "$actual_lines" -eq "$min_lines" ]
}
# Safe head function
safe_head() {
    local file="$1"
    local lines="${2:-10}"
    if [ -r "$file" ]; then
        head -n "$lines" "$file"
    else
        echo "Error: Cannot read $file" >&2
        return 1
    fi
}

Data Extraction

#!/bin/bash
# Extract header and first data rows
extract_sample() {
    local input="$1"
    local output="$2"
    local lines="${3:-100}"
    {
        echo "# Sampled from $input at $(date)"
        echo "# Total lines: $(wc -l < "$input")"
        head -n "$lines" "$input"
    } > "$output"
}
# Extract column names from CSV
get_csv_headers() {
    local file="$1"
    head -1 "$file" | tr ',' '\n' | nl -w2 -s': '
}
# Extract configuration sections
extract_config_section() {
    local config="$1"
    local section="$2"
    local context="${3:-5}"
    # Find section and show context
    grep -n "^\[$section\]" "$config" | while IFS=: read -r line_num _; do
        start=$((line_num - context))
        [ $start -lt 1 ] && start=1
        end=$((line_num + context))
        sed -n "${start},${end}p" "$config" | head -20
    done
}

Pipeline Operations

#!/bin/bash
# Process first N records
process_first_n() {
    local n="$1"
    shift
    # process_command is a placeholder for your own filter
    head -n "$n" | process_command "$@"
}
# Example: Analyze first 1000 lines of log
analyze_log_start() {
    head -1000 /var/log/syslog | \
        awk '{print $1, $2, $3}' | \
        sort | uniq -c | \
        sort -rn
}
# Create sample file for testing
create_test_sample() {
    local source="$1"
    local target="$2"
    local lines="${3:-100}"
    echo "Creating test sample from $source"
    echo "First $lines lines:"
    head -n "$lines" "$source" | tee "$target"
    echo "Sample saved to $target"
}
# Monitor log start
monitor_log_start() {
    local logfile="$1"
    local interval="${2:-5}"
    while true; do
        clear
        echo "=== Log Monitor ($(date)) ==="
        head -20 "$logfile"
        sleep "$interval"
    done
}

5. Advanced Techniques

Combining with Other Commands

#!/bin/bash
# Find patterns in first part of files
find_in_start() {
    local pattern="$1"
    shift
    for file in "$@"; do
        if head -50 "$file" | grep -q "$pattern"; then
            echo "Found in $file (first 50 lines)"
        fi
    done
}
# Compare beginnings of files
compare_starts() {
    local file1="$1"
    local file2="$2"
    local lines="${3:-10}"
    diff -u <(head -n "$lines" "$file1") <(head -n "$lines" "$file2")
}
# Check file headers
check_headers() {
    for file in *.txt; do
        echo -n "$file: "
        head -1 "$file" | cut -c1-50
    done | column -t
}
# Generate statistics
first_lines_stats() {
    local file="$1"
    local lines="${2:-100}"
    echo "=== Statistics for first $lines lines ==="
    echo "Total characters: $(head -n "$lines" "$file" | wc -c)"
    echo "Total words: $(head -n "$lines" "$file" | wc -w)"
    echo "Average line length: $(( $(head -n "$lines" "$file" | wc -c) / lines ))"
}

Dynamic Head

#!/bin/bash
# Dynamic head based on file size
smart_head() {
    local file="$1"
    # Get file size in KB
    local size=$(du -k "$file" | cut -f1)
    if [ "$size" -lt 10 ]; then
        # Small file: show all
        cat "$file"
    elif [ "$size" -lt 100 ]; then
        # Medium file: show 20 lines
        head -20 "$file"
    else
        # Large file: show 10 lines
        head -10 "$file"
    fi
}
# Percentage-based head
head_percent() {
    local file="$1"
    local percent="${2:-10}"
    local total_lines=$(wc -l < "$file")
    local show_lines=$((total_lines * percent / 100))
    head -n "$show_lines" "$file"
}
# Head with context
head_with_context() {
    local file="$1"
    local lines="${2:-10}"
    local context="${3:-2}"
    head -n "$lines" "$file" | {
        cat
        echo "... (showing first $lines lines)"
        tail -n "$context" "$file" | sed 's/^/... /'
    }
}

Multiple File Processing

#!/bin/bash
# Process first lines of multiple files
process_multiple_heads() {
    local pattern="$1"
    shift
    for file in "$@"; do
        if [ -f "$file" ]; then
            echo "=== $file ==="
            head -5 "$file" | grep --color=auto "$pattern" || echo "  No matches"
            echo
        fi
    done
}
# Create summary of multiple files
summary_of_starts() {
    local outfile="summary_$(date +%Y%m%d).txt"
    {
        echo "File Summary - $(date)"
        echo "======================="
        echo
        for file in "$@"; do
            if [ -f "$file" ]; then
                echo "File: $file"
                echo "Size: $(du -h "$file" | cut -f1)"
                echo "First 3 lines:"
                head -3 "$file" | sed 's/^/  /'
                echo
            fi
        done
    } > "$outfile"
    echo "Summary written to $outfile"
}
# Parallel head for large number of files
parallel_head() {
    local files=(*.log)
    local max_jobs=4
    local job_count=0
    for file in "${files[@]}"; do
        (
            echo "=== $file ==="
            head -5 "$file"
            echo
        ) &
        job_count=$((job_count + 1))
        if [ $job_count -ge $max_jobs ]; then
            wait
            job_count=0
        fi
    done
    wait
}

6. Error Handling and Edge Cases

Robust Error Handling

#!/bin/bash
# Safe head with error handling
robust_head() {
    local file="$1"
    local lines="${2:-10}"
    # Check if file exists
    if [ ! -e "$file" ]; then
        echo "Error: File '$file' does not exist" >&2
        return 1
    fi
    # Check if file is readable
    if [ ! -r "$file" ]; then
        echo "Error: Cannot read file '$file' (permission denied)" >&2
        return 1
    fi
    # Check if file is a regular file
    if [ ! -f "$file" ]; then
        echo "Error: '$file' is not a regular file" >&2
        return 1
    fi
    # Try to read the file
    if ! head -n "$lines" "$file" 2>/dev/null; then
        echo "Error: Failed to read '$file'" >&2
        return 1
    fi
}
# Handle empty files
head_nonempty() {
    local file="$1"
    local lines="${2:-10}"
    if [ ! -s "$file" ]; then
        echo "File is empty: $file"
        return 0
    fi
    head -n "$lines" "$file"
}
# Handle binary files
safe_head_binary() {
    local file="$1"
    local lines="${2:-5}"
    if file "$file" | grep -q "text"; then
        head -n "$lines" "$file"
    else
        echo "Binary file detected. Showing hexdump of first 64 bytes:"
        head -c 64 "$file" | hexdump -C
    fi
}

Edge Cases

#!/bin/bash
# Handle files with no newline at end
head_no_newline() {
    local file="$1"
    # Add newline if missing
    if [ -n "$(tail -c1 "$file")" ]; then
        (cat "$file"; echo)
    else
        cat "$file"
    fi | head
}
# Handle very large line lengths
head_safe() {
    local file="$1"
    local lines="${2:-10}"
    # head handles long lines fine; dd just caps the amount of I/O
    # for files with extremely long lines
    dd if="$file" bs=8192 count=$((lines * 10)) 2>/dev/null | head -n "$lines"
}
# Handle special filenames
head_special() {
    local file="$1"
    # Handle files with spaces, newlines, etc.
    head -n 10 -- "$file" 2>/dev/null || echo "Error reading: $file"
}
# Handle FIFOs and special files
head_fifo() {
    local file="$1"
    if [ -p "$file" ]; then
        # FIFO - read with timeout
        timeout 5 head "$file" || echo "No data available"
    else
        head "$file"
    fi
}

7. Performance Considerations

Efficient Usage

#!/bin/bash
# head only reads as much of the file as it needs
time head -1000 hugefile.txt > /dev/null
time cat hugefile.txt | head -1000 > /dev/null  # Nearly as fast: head exits, cat dies on SIGPIPE
# Avoid reading the entire file when only the start is needed
# Reads the whole file (exact total line count):
wc -l hugefile.txt | cut -d' ' -f1
# Reads only the start (counts at most 1000 lines - not equivalent!):
head -1000 hugefile.txt | wc -l
# Compare performance
time (
    head -1000 largefile.txt > /dev/null
)  # Fast
time (
    cat largefile.txt | head -1000 > /dev/null
)  # Slightly slower (extra copy through the pipe)
time (
    tail -1000 largefile.txt > /dev/null
)  # Also fast on regular files (tail seeks near the end); slow on pipes
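The efficiency claim is easy to demonstrate: once head has printed its quota it exits, and the upstream writer is killed by SIGPIPE. Even an infinite producer like `yes` terminates immediately:

```shell
#!/bin/sh
# `yes` emits "y" forever, yet the pipeline finishes instantly:
# head exits after 3 lines and yes receives SIGPIPE.
count=$(yes | head -n 3 | wc -l | tr -d ' ')
echo "lines consumed: $count"
```

This prints `lines consumed: 3`.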

Benchmarking

#!/bin/bash
# Benchmark head performance
benchmark_head() {
    local file="$1"
    local iterations="${2:-10}"
    echo "Benchmarking head on $file"
    echo "=========================="
    for lines in 10 100 1000 10000; do
        echo -n "head -$lines: "
        time (
            for ((i=0; i<iterations; i++)); do
                head -$lines "$file" > /dev/null
            done
        ) 2>&1 | grep real
    done
}
# Compare with other commands
compare_read_tools() {
    local file="$1"
    echo "Reading first 100 lines:"
    echo "head -100:"
    time head -100 "$file" > /dev/null
    echo "sed -n '1,100p':"
    time sed -n '1,100p' "$file" > /dev/null
    echo "awk 'NR<=100':"
    time awk 'NR<=100' "$file" > /dev/null
    echo "cat | head:"
    time cat "$file" | head -100 > /dev/null
}

8. Integration with Other Commands

With grep and awk

#!/bin/bash
# Search only in first part of file
quick_grep() {
    local pattern="$1"
    local file="$2"
    local lines="${3:-100}"
    head -n "$lines" "$file" | grep --color=auto "$pattern"
}
# Awk processing of file header
process_header() {
    local file="$1"
    head -1 "$file" | awk -F, '{
        print "Column count: " NF
        for(i=1; i<=NF; i++) {
            print "Column " i ": " $i
        }
    }'
}
# Extract and process metadata
extract_metadata() {
    local file="$1"
    head -50 "$file" | awk '
        /^#/ {print "Comment: " $0}
        /^[^#]/ {print "First data line: " $0; exit}
    '
}
}

With sort and uniq

#!/bin/bash
# Analyze patterns in file start
analyze_start() {
    local file="$1"
    local lines="${2:-100}"
    echo "=== Analysis of first $lines lines ==="
    head -n "$lines" "$file" | sort | uniq -c | sort -rn | head -20
}
# Find common starting patterns
common_starts() {
    local file="$1"
    head -1000 "$file" | cut -c1-20 | sort | uniq -c | sort -rn | head -10
}
# Check header consistency
check_header_consistency() {
    local file="$1"
    head -100 "$file" | awk '{
        if (NR==1) header = $0
        else if ($0 !~ header && length($0) > 0) {
            print "Line " NR " differs from header pattern"
        }
    }'
}

With sed and cut

#!/bin/bash
# Extract formatted preview
preview_formatted() {
    local file="$1"
    head -5 "$file" | sed 's/^/  /' | nl
}
# Extract specific columns from start
extract_columns() {
    local file="$1"
    local cols="$2"
    local lines="${3:-10}"
    head -n "$lines" "$file" | cut -d',' -f"$cols"
}
# Preview with line numbers
preview_numbered() {
    local file="$1"
    local lines="${2:-10}"
    head -n "$lines" "$file" | cat -n
}

9. Real-World Applications

Log Monitoring Script

#!/bin/bash
# Comprehensive log monitoring
monitor_system_logs() {
    local log_dir="${1:-/var/log}"
    local temp_file="/tmp/log_summary.$$"
    {
        echo "System Log Summary - $(date)"
        echo "=============================="
        echo
        # System logs
        for log in syslog auth.log kern.log messages; do
            if [ -f "$log_dir/$log" ]; then
                echo "=== $log (first 5 lines) ==="
                head -5 "$log_dir/$log"
                echo
            fi
        done
        # Application logs
        echo "=== Recent Application Logs ==="
        find "$log_dir" -name "*.log" -type f -size +0 2>/dev/null | head -10 | while IFS= read -r log; do
            echo "--- $(basename "$log"): first 3 lines ---"
            head -3 "$log" 2>/dev/null || echo "  Cannot read"
            echo
        done
        # Critical errors in last day
        echo "=== Critical Errors (first occurrences) ==="
        find "$log_dir" -name "*.log" -mtime -1 | xargs grep -l "ERROR\|CRITICAL" 2>/dev/null | head -5 | while IFS= read -r log; do
            echo "From $(basename "$log"):"
            grep "ERROR\|CRITICAL" "$log" | head -3
            echo
        done
    } > "$temp_file"
    # Display summary
    less "$temp_file"
    rm "$temp_file"
}
# Usage
# monitor_system_logs /var/log

Data Quality Check Script

#!/bin/bash
# Check data file quality
check_data_quality() {
    local file="$1"
    local sample_size="${2:-100}"
    echo "Data Quality Report - $(date)"
    echo "=============================="
    echo
    # Basic info
    echo "File: $file"
    echo "Size: $(du -h "$file" | cut -f1)"
    echo "Total lines: $(wc -l < "$file")"
    echo
    # Sample preview
    echo "First $sample_size lines preview:"
    echo "--------------------------------"
    head -n "$sample_size" "$file" | head -5
    echo "..."
    tail -3 <(head -n "$sample_size" "$file")
    echo
    # Column analysis (for CSV)
    if [[ "$file" == *.csv ]]; then
        echo "CSV Analysis:"
        local header=$(head -1 "$file")
        local columns=$(echo "$header" | tr ',' '\n' | wc -l)
        echo "Columns detected: $columns"
        echo "Headers:"
        head -1 "$file" | tr ',' '\n' | nl -w2 -s'. '
        # Check consistency
        echo
        echo "Column count consistency:"
        head -100 "$file" | awk -F, '{print NF}' | sort -nu |
            while IFS= read -r n; do
                count=$(head -100 "$file" | awk -F, "NF==$n" | wc -l)
                echo "  $n columns: $count rows"
            done
    fi
    # Character set check
    echo
    echo "Character set check (first 1000 bytes):"
    if head -c 1000 "$file" | LC_ALL=C grep -q '[^[:print:][:space:]]'; then
        echo "  Warning: Non-printable characters detected"
        head -c 1000 "$file" | LC_ALL=C sed -n 's/[^[:print:][:space:]]/?/gp' | head -3
    else
        echo "  OK: All characters printable"
    fi
}

File Format Converter

#!/bin/bash
# Convert file formats with preview
convert_with_preview() {
    local input="$1"
    local output="$2"
    local format="${3:-txt}"
    echo "Converting $input to $format"
    echo "=============================="
    echo
    # Show sample of input
    echo "Input preview (first 5 lines):"
    head -5 "$input"
    echo
    # Perform conversion based on format
    case "$format" in
        txt)
            cp "$input" "$output"
            ;;
        csv)
            # Simple conversion - replace spaces with commas
            head -10 "$input" | sed 's/ /,/g' > "$output"
            ;;
        json)
            # Convert to JSON (simplified)
            {
                echo "{"
                echo '  "data": ['
                head -5 "$input" | sed 's/.*/    "&",/' | sed '$ s/,$//'
                echo '  ]'
                echo "}"
            } > "$output"
            ;;
        *)
            echo "Unknown format: $format"
            return 1
            ;;
    esac
    # Show result preview
    echo
    echo "Output preview (first 5 lines):"
    head -5 "$output"
}

10. Interactive Head

Interactive File Browser

#!/bin/bash
# Interactive file browser
browse_files() {
    local dir="${1:-.}"
    local files=()
    # Build file list
    while IFS= read -r file; do
        files+=("$file")
    done < <(find "$dir" -maxdepth 1 -type f | sort)
    PS3="Select file to preview (or 0 to quit): "
    select file in "${files[@]}"; do
        if [ -n "$file" ]; then
            clear
            echo "=== $file ==="
            echo "Size: $(du -h "$file" | cut -f1)"
            echo "Lines: $(wc -l < "$file")"
            echo "================================"
            echo
            # Interactive preview
            preview_file "$file"
            break
        elif [ "$REPLY" = "0" ]; then
            break
        else
            echo "Invalid selection"
        fi
    done
}
# Interactive file preview
preview_file() {
    local file="$1"
    local lines=10
    while true; do
        clear
        echo "=== Preview: $file (showing first $lines lines) ==="
        echo "Commands: +/next, -/prev, q/quit"
        echo "----------------------------------------"
        head -n "$lines" "$file"
        echo "----------------------------------------"
        echo -n "Command: "
        read -r cmd
        case "$cmd" in
            +|next) lines=$((lines + 5)) ;;
            -|prev) lines=$((lines > 5 ? lines - 5 : 5)) ;;
            q|quit) break ;;
        esac
    done
}

Dynamic Head Viewer

#!/bin/bash
# Dynamic head viewer with refresh
dynamic_head() {
    local file="$1"
    local interval="${2:-2}"
    local lines="${3:-20}"
    while true; do
        clear
        echo "=== Dynamic Head Viewer: $file ==="
        echo "Refreshing every $interval seconds (Press Ctrl+C to quit)"
        echo "=========================================="
        echo
        if [ -f "$file" ]; then
            head -n "$lines" "$file"
        else
            echo "File not found: $file"
        fi
        sleep "$interval"
    done
}
# Head with follow mode (like tail -f but for start)
head_follow() {
    local file="$1"
    local lines="${2:-10}"
    local last_size=0
    while true; do
        if [ -f "$file" ]; then
            local current_size=$(stat -c%s "$file" 2>/dev/null || stat -f%z "$file" 2>/dev/null)
            if [ "$current_size" -ne "$last_size" ]; then
                clear
                echo "=== Following start of: $file ==="
                echo "File size: $current_size bytes"
                echo "=================================="
                echo
                head -n "$lines" "$file"
                last_size=$current_size
            fi
        fi
        sleep 1
    done
}

11. Testing and Debugging

Unit Tests for head Operations

#!/bin/bash
# Test framework for head-related functions
test_head_functions() {
    local test_dir="/tmp/head_test_$$"
    mkdir -p "$test_dir"
    # Create test files
    seq 1 100 > "$test_dir/numbers.txt"
    printf "Line %d\n" {1..50} > "$test_dir/lines.txt"
    echo -n "No newline at end" > "$test_dir/no_newline.txt"
    # Test 1: Basic head
    echo "Test 1: Basic head -n 5"
    local result=$(head -n 5 "$test_dir/numbers.txt" | wc -l)
    assert_equals 5 "$result"
    # Test 2: Head with negative count (GNU extension)
    echo "Test 2: Head -n -5"
    local total=$(wc -l < "$test_dir/numbers.txt")
    local shown=$(head -n -5 "$test_dir/numbers.txt" | wc -l)
    assert_equals $((total - 5)) "$shown"
    # Test 3: Head with bytes
    echo "Test 3: Head -c 10"
    local bytes=$(head -c 10 "$test_dir/numbers.txt" | wc -c)
    assert_equals 10 "$bytes"
    # Test 4: Multiple files
    echo "Test 4: Multiple files"
    local files_count=$(head -q "$test_dir/numbers.txt" "$test_dir/lines.txt" 2>/dev/null | wc -l)
    assert_equals 20 "$files_count"  # default of 10 lines from each file
    # Cleanup
    rm -rf "$test_dir"
}
# Assertion helper
assert_equals() {
    local expected="$1"
    local actual="$2"
    if [ "$expected" = "$actual" ]; then
        echo "  ✓ PASS"
    else
        echo "  ✗ FAIL (expected: $expected, got: $actual)"
    fi
}
# Run tests
test_head_functions

Debugging Head Operations

#!/bin/bash
# Debug head operations
debug_head() {
    local file="$1"
    local lines="${2:-10}"
    echo "=== Head Debug Information ==="
    echo "File: $file"
    echo "Lines requested: $lines"
    echo
    # File information
    if [ -f "$file" ]; then
        echo "File exists: yes"
        echo "File size: $(du -h "$file" | cut -f1)"
        echo "Total lines: $(wc -l < "$file")"
        echo "Permissions: $(ls -l "$file" | cut -d' ' -f1)"
        echo "File type: $(file -b "$file")"
    else
        echo "File exists: no"
        return 1
    fi
    echo
    echo "=== Actual head output (first $lines lines) ==="
    time (head -n "$lines" "$file") 2>&1
    echo
    echo "=== Head command trace ==="
    # Note: on modern glibc, open(2) is issued as openat(2)
    strace -e trace=openat,read head -n "$lines" "$file" 2>&1 | head -20
}
# Trace head execution
trace_head() {
    local file="$1"
    local lines="${2:-10}"
    # Use strace to see system calls
    strace -e trace=file,desc head -n "$lines" "$file" 2>&1 |
        grep -E 'open|read|close' |
        sed 's/^/  /'
}

12. Tips and Tricks

Useful One-Liners

# Quick file preview with formatting
head -5 file.txt | cat -n                          # With line numbers
head -5 file.txt | sed 's/^/  /'                   # Indented preview
head -5 file.txt | awk '{printf "%2d: %s\n", NR, $0}'  # Custom numbering
# Multiple files with custom separator
head -5 *.txt | sed '/^$/d' | grep -v '^==>'       # Remove headers
# Extract file signature
SIG=$(head -c 8 file.bin | hexdump -x | head -1)   # First 8 bytes hex
# Check if file starts with pattern
if head -1 file.txt | grep -q "^#!"; then
    echo "Script detected"
fi
# Get file encoding from BOM
BOM=$(head -c 3 file.txt | od -An -tx1)
# Preview compressed file without decompressing
zcat file.gz | head -5
# Remote file preview
ssh user@host "head -20 /var/log/syslog"
# Process substitution with head
diff <(head -10 file1.txt) <(head -10 file2.txt)
# Create header file
head -1 largefile.csv > header.csv
# Sample data for testing
head -100 bigdata.txt > sample.txt
# Check for header row
if head -1 data.csv | grep -q "ID,Name,Age"; then
    echo "Header matches expected"
fi

Shell Aliases and Functions

# Add to .bashrc for convenience
alias h='head'
alias h5='head -5'
alias h10='head -10'
alias h20='head -20'
# Preview with formatting
hnum() { head -n "${2:-10}" "$1" | cat -n; }
hcols() { head -n "${3:-5}" "$1" | column -t -s "${2:-,}"; }
# Head with context
hctx() {
    head -n "${2:-10}" "$1"
    echo "..."
    tail -n 3 "$1"
}
# Multiple files with headers
hfiles() {
    for f in "$@"; do
        echo "=== $f ==="
        head -5 "$f"
        echo
    done
}
# Smart head based on file type
hsmart() {
    local file="$1"
    local lines="${2:-10}"
    case "$file" in
        *.gz)
            zcat "$file" | head -n "$lines"
            ;;
        *.bz2)
            bzcat "$file" | head -n "$lines"
            ;;
        *.csv)
            head -n "$lines" "$file" | column -t -s ','
            ;;
        *)
            head -n "$lines" "$file"
            ;;
    esac
}

Conclusion

The head command is a simple yet powerful utility for examining file contents:

Key Takeaways

  1. Default Behavior: Shows first 10 lines of each file
  2. Line Control: Use -n to specify number of lines
  3. Byte Control: Use -c for byte-based extraction
  4. Multiple Files: Automatically adds headers between files
  5. Efficient: Only reads necessary data from files
  6. Pipeline Friendly: Works well in command pipelines

Command Summary

Option   Description                      Example
-n N     Show first N lines               head -n 20 file.txt
-c N     Show first N bytes               head -c 100 file.bin
-q       Quiet (no headers)               head -q file*.txt
-v       Verbose (always show headers)    head -v file.txt
-n -N    Show all except last N lines     head -n -5 file.txt
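The `-q`/`-v` rows are worth a quick check, since support varies slightly across platforms (GNU coreutils and recent BSDs accept them; older macOS `head` may not):

```shell
#!/bin/sh
# Verify -q (suppress headers) and -v (force headers) from the table,
# using two throwaway one-line files.
a=$(mktemp); b=$(mktemp)
printf 'x\n' > "$a"
printf 'y\n' > "$b"

quiet=$(head -q -n 1 "$a" "$b" | wc -l | tr -d ' ')   # no "==> file <==" lines
verbose=$(head -v -n 1 "$a" | grep -c '==>')          # header even for one file
echo "quiet=$quiet verbose=$verbose"

rm -f "$a" "$b"
```

This prints `quiet=2 verbose=1`: two content lines with no headers under `-q`, and one `==>` header line under `-v`.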

Best Practices

  1. Use for quick previews of file contents
  2. Combine with other commands for powerful pipelines
  3. Use -n for precise line counts
  4. Check file signatures with -c
  5. Validate file formats before full processing
  6. Sample data for testing and development
  7. Monitor log starts for system health

The head command's simplicity and efficiency make it an essential tool for any Unix/Linux user, from beginners checking file contents to advanced users in complex shell scripts.
