AutoRecon is a powerful automated reconnaissance tool designed to simplify and streamline the process of subdomain enumeration, URL discovery, web content analysis, and initial vulnerability scanning. This simplified version focuses on core tool orchestration, removing direct API key management for a leaner setup. It intelligently integrates with essential open-source tools to provide a comprehensive and organized workflow.
- Passive Enumeration: Leverages tools like `amass`, `subfinder`, and `sublist3r`.
- Active Enumeration: Performs DNS brute-forcing with `dnsrecon` and virtual host enumeration with `ffuf`.
- Filters discovered domains to identify live and responsive web servers using `httpx`, also extracting associated IP addresses for further scanning (a sketch of this flow follows this list).
- Discovers URLs from various sources using `waybackurls`, `katana`, `waymore`, and `waybackrobots`.
- Integrates `jslinks`: Automatically extracts JavaScript files and analyzes them for potential endpoints.
- Analyzes JavaScript files for sensitive information (e.g., API keys, credentials) using `SecretFinder`.
- Integrates `crawler`: Optionally performs dynamic, interactive web crawling to discover more endpoints and requests.
- Performs directory and file brute-forcing on live web servers using `gobuster` to uncover hidden paths and resources.
- Conducts fast port scanning with `naabu` and performs detailed service version detection and basic vulnerability scanning with `nmap` on identified open ports.
- Identifies potential URL parameters using `paramspider` to aid in further testing.
- Automatically takes screenshots of all live websites using `httpx` for quick visual assessment.
- Performs initial vulnerability scanning using `nuclei` with community-contributed templates.
- Saves all results in a structured directory for each domain, with sorted and deduplicated files for easy analysis.
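As a rough illustration of how these pieces fit together, the passive-enumeration and live-host-filtering features above correspond to a pipeline like the following. This is a hedged sketch of the kind of commands AutoRecon chains, not its exact invocations; the file names mirror the output layout described later:

```bash
# Illustrative only — approximates the passive-enum -> live-filter flow
subfinder -d example.com -silent > domains.txt
amass enum -passive -d example.com >> domains.txt
sort -u domains.txt -o domains.txt

# Probe for live web servers and record their resolved IPs
httpx -l domains.txt -silent -ip -o domain.live
```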
- Linux-based system (e.g., Ubuntu, Debian, Kali Linux).
- Python 3 and `pip` installed.
- Go (Golang) installed for Go-based tools (version 1.16+ recommended).
- Basic system packages: `git`, `curl`, `wget`, `unzip`, `dnsutils` (an example install command follows this list).
- Browser drivers for `crawler.py`: If using the `--enable-crawler` option, you will need Chrome or Firefox with the matching WebDriver (`chromedriver` or `geckodriver`) installed and in your PATH.
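On Debian-based systems (Ubuntu, Kali), the basic packages listed above can be installed in one step. A typical invocation, assuming standard Debian package names:

```bash
sudo apt update
sudo apt install -y git curl wget unzip dnsutils python3 python3-pip golang-go
```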
- Clone the AutoRecon repository:

  ```bash
  git clone https://github.com/00xmora/autorecon.git
  cd autorecon
  ```
- Install core dependencies: Most reconnaissance tools (`amass`, `subfinder`, `httpx`, `nuclei`, etc.) are installed via the provided `install.sh` script. Run this first:

  ```bash
  chmod +x install.sh
  ./install.sh
  ```

  This script handles the installation of common tools and sets up basic paths.
- Run `autorecon.py`: On its first run, the `autorecon.py` script itself will handle the installation of `jslinks` and `crawler` (if `--enable-crawler` is used) when they are not detected in your system's PATH. It will clone their respective repositories from GitHub, install Python dependencies, and create the necessary symlinks in `/usr/local/bin/`.
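After the installer and the first run have finished, it is worth confirming that the core tools actually landed on your PATH. A minimal sanity check, using the tool names listed above:

```bash
# Report any core tool that is still missing from the PATH
for tool in amass subfinder sublist3r httpx ffuf naabu nmap gobuster nuclei; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```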
You can run AutoRecon using Docker to ensure a consistent environment without manually installing all dependencies.
Important Note for `crawler.py`: If you intend to use the `--enable-crawler` option, `crawler.py` will attempt to launch a browser for manual login. This means the Docker container needs access to a display server (X server) if not running in `--crawler-headless` mode; an illustrative non-headless invocation appears after the examples below. For most use cases within Docker, `--crawler-headless` is recommended. The Dockerfile below includes the necessary browser dependencies.
Navigate to the directory containing your `autorecon.py` (and the `Dockerfile` you've created from the snippet above) and build the image:

```bash
docker build -t autorecon .
```
When running the Docker container, you'll need to mount a local directory to store the reconnaissance results. `autorecon.py` no longer uses a `config.ini` file, as API key integration has been removed, and `jslinks` and `crawler` are self-installed by the script inside the container.

```bash
docker run -it --rm \
  -v "$(pwd)/my_recon_data:/app/output" \
  autorecon -n my_project -d example.com --all-recon --enable-crawler --crawler-headless
```
- `-it`: Runs the container in interactive mode and allocates a pseudo-TTY.
- `--rm`: Automatically removes the container when it exits.
- `-v "$(pwd)/my_recon_data:/app/output"`: Mounts a local directory (e.g., `my_recon_data` in your current working directory) to `/app/output` inside the container. All output files will be saved here, allowing you to access them after the container finishes.
  - Note: Replace `my_recon_data` with your desired local directory name. `autorecon` will create project directories inside this mounted volume.
- `autorecon -n my_project -d example.com --all-recon --enable-crawler --crawler-headless`: The `autorecon` command with your desired arguments.
  - If you enable `--enable-crawler`, it's highly recommended to also use `--crawler-headless` for non-interactive Docker environments.
Example Docker Run:

To run a full reconnaissance on `target.com` with dynamic crawling in headless mode and save results to a local `recon_output` folder:

```bash
mkdir recon_output  # Create the local directory first
docker run -it --rm \
  -v "$(pwd)/recon_output:/app/output" \
  autorecon -n target_scan -d target.com --all-recon --enable-crawler --crawler-headless
```
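If you do need the interactive (non-headless) crawler inside Docker, the container must be able to reach your X server, as noted earlier. One common pattern on Linux hosts is sketched below; the `xhost` step loosens X access control, so treat this as illustrative rather than a hardened setup:

```bash
# Illustrative only: expose the host X server so crawler.py can open a visible browser
xhost +local:docker   # allow local containers to connect to your X server
docker run -it --rm \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v "$(pwd)/recon_output:/app/output" \
  autorecon -n target_scan -d target.com --all-recon --enable-crawler
```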
- Restart your terminal or run `source ~/.bashrc` (or `~/.profile`) to ensure your PATH is updated and newly installed tools are found.
Run `autorecon.py` with a project name and one or more domains. You can enable specific reconnaissance phases using the provided options, or run `--all-recon` for a comprehensive scan.

```bash
./autorecon.py -n MyProject -d example.com example2.com
```
- `-n, --project-name <name>`: (Required) The name of the project directory where results will be saved.
- `-d, --domains <domain1> [domain2 ...]`: One or more target domains to perform reconnaissance on.
- `-w, --wordlist <path>`: Path to a custom wordlist for DNS enumeration (`dnsrecon`) and `ffuf`. Overrides the default SecLists wordlist.
- `--crawl`: Enable URL discovery and crawling (`waybackurls`, `katana`, `waymore`, `jslinks`).
- `-active`: Enable active subdomain enumeration (`dnsrecon` and `ffuf`).
- `-r, --recursive`: Enable recursive JS endpoint extraction (used with `--crawl`).
- `-H, --header <"Header-Name: value">`: Custom headers for HTTP requests (e.g., for JS crawling or web content discovery). Can be specified multiple times.
- `-t, --threads <num>`: Number of threads for concurrent execution of tools (default: 10).
- `--all-recon`: Enable all reconnaissance phases: active enumeration, URL crawling, port scanning, web content discovery, parameter discovery, screenshots, JS analysis, and vulnerability scanning.
- `--ports-scan`: Enable port and service enumeration with `naabu` and `nmap`.
- `--web-content-discovery`: Enable web content discovery (directory brute-forcing with `gobuster`).
- `--params-discovery`: Enable URL parameter discovery with `paramspider`.
- `--screenshots`: Enable taking screenshots of live websites with `httpx`.
- `--js-analysis`: Enable analysis of JavaScript files for secrets and additional endpoints.
- `--vuln-scan`: Enable basic vulnerability scanning with `nuclei`.
- `--enable-crawler`: Enable dynamic crawling with `crawler.py`. Note: This requires manual login interaction in the opened browser window.
- `--crawler-max-pages <num>`: Maximum number of pages for `crawler.py` to crawl (default: 10).
- `--crawler-output-format <format>`: Output format for `crawler.py` (`json`, `txt`, `csv`). AutoRecon primarily processes JSON internally.
- `--crawler-headless`: Run `crawler.py` in headless browser mode (no GUI).
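For instance, to combine only a few of these phases instead of a full `--all-recon` run, using the flags documented above:

```bash
./autorecon.py -n MyProject -d example.com --ports-scan --vuln-scan --screenshots -t 20
```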
Results are saved in a structured directory for each domain within your specified project name:
```
MyProject/
├── example.com/
│   ├── domains.txt                 # All discovered passive subdomains
│   ├── domain.live                 # Live/responsive subdomains
│   ├── ips.txt                     # Unique IPs resolved from live domains
│   ├── urls.txt                    # All discovered URLs (from crawling and JS analysis)
│   ├── js_endpoints.txt            # URLs of JavaScript files found
│   ├── js_secrets.txt              # Discovered secrets/sensitive data from JS files
│   ├── discovered_paths.txt        # Paths found via web content discovery
│   ├── naabu_open_ports.txt        # Open ports identified by naabu
│   ├── nmap_detailed_scan.xml      # Detailed Nmap scan results (XML)
│   ├── discovered_parameters.txt   # Discovered URL parameters
│   ├── nuclei_results.txt          # Vulnerability scan results from Nuclei
│   └── screenshots/                # Directory containing website screenshots
└── example2.com/
    └── ... (similar structure)
```
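Because the outputs are sorted, deduplicated text files, they compose well with follow-up tooling. For example, the live-host list can be fed back into a manual `nuclei` run (the template path here is a placeholder):

```bash
# Re-test one target's live hosts with your own template set
nuclei -l MyProject/example.com/domain.live -t ~/my-templates/
```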
- Run a full reconnaissance scan on `example.com` and `test.com` with custom headers:

  ```bash
  ./autorecon.py -n MyFullScan -d example.com test.com --all-recon -H "User-Agent: MyReconTool/1.0"
  ```

- Run passive subdomain enumeration and URL crawling with recursive JS analysis and a custom wordlist:

  ```bash
  ./autorecon.py -n MyProject -d target.com --crawl -r -w /opt/custom_wordlist.txt
  ```

- Run dynamic crawling on `example.com` in headless mode:

  ```bash
  ./autorecon.py -n DynamicCrawlTest -d example.com --enable-crawler --crawler-headless
  ```
Contributions are welcome! To contribute:
- Fork the repository.
- Create a new branch for your feature or bugfix.
- Commit your changes.
- Submit a pull request.
Thanks to the developers of the integrated tools: `amass`, `subfinder`, `sublist3r`, `httpx`, `ffuf`, `waybackurls`, `katana`, `waymore`, `uro`, `waybackrobots`, `naabu`, `nuclei`, `gobuster`, `paramspider`, `SecretFinder`, `jslinks`, and `crawler`.
For questions, feedback, or support:
Enjoy using AutoRecon! 🚀