
WORKFLOW

  • get URLs first (hakrawler, gau, waybackurls, otxurls, Burp crawl, gospider, cc.py, etc.); a combined sketch follows this list
    • or scan the domain with JSScanner (you will then have the JS files)
    • or getJS --input URLs.txt (or --url <url> --resolve --method <POST/PUT>)
    • then scan each URL with linkfinder -o cli
    • jsubfinder -f <urls.txt> [or -u <url>] -s (search for secrets)
  • if you have domains.txt:

    cat domains.txt | httpx | bash ./script.sh (JSScanner)

  • or, if you already have probed domains, run ./scripthunter.sh <probed.com> for each (see the loop under Scripthunter)
  • subjs -i <urls.txt> -c 50 (fetch JS file URLs from the input list)
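
A minimal end-to-end sketch of the flow above, assuming gau, httpx, getJS and linkfinder are installed (example.com and the file names are placeholders):

# collect archived URLs for the target and keep only the live ones
gau example.com | httpx -silent > urls.txt
# fetch the JavaScript files those pages reference (--complete resolves relative paths)
getJS --input urls.txt --complete > jsfiles.txt
# pull endpoints out of each JS file with linkfinder's CLI mode
while read -r js; do python linkfinder.py -i "$js" -o cli; done < jsfiles.txt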

Relative-url-extractor

cat demo-file.js | ./extract.rb

curl -s https://hackerone.com/hacktivity | ./extract.rb
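
A sketch for batching it over a folder of downloaded JS files (the ./js directory is an assumption; point it at wherever your files live):

# run extract.rb over every saved JS file and dedupe the output
for f in ./js/*.js; do ./extract.rb < "$f"; done | sort -u > relative-urls.txt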

JSFScan.sh

# target.txt holds one URL per line
cat target.txt
https://site.com

bash JSFScan.sh -l target.txt --all -r -o jsfs-output

JSScanner

# alive.txt holds one live URL per line
cat alive.txt
https://lol.com

bash script.sh

Done; you can now scan the js and db folders for secrets and links.
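
One way to do that sweep (a minimal grep sketch, not part of JSScanner; the patterns are illustrative, extend them as needed):

# crude secret hunt over JSScanner's js/ and db/ output folders
grep -rniE "api[_-]?key|secret|token|password" js/ db/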

JSHaunter

python3 main.py -u URL -n NAME

Scripthunter

./scripthunter.sh https://dev.verizon.com
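
To cover every probed domain from the workflow above (probed.txt is an assumed file with one host per line):

# run scripthunter against each probed host in turn
while read -r host; do ./scripthunter.sh "$host"; done < probed.txt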

Linkfinder

Most basic usage to find endpoints in an online JavaScript file and output the HTML results to results.html: python linkfinder.py -i https://example.com/1.js -o results.html

CLI/STDOUT output (doesn't use jsbeautifier, which makes it very fast): python linkfinder.py -i https://example.com/1.js -o cli

Analyzing an entire domain and its JS files: python linkfinder.py -i https://example.com -d

Burp input (in the Target tab, select the files you want to save, right-click, choose "Save selected items", and feed that file as input): python linkfinder.py -i burpfile -b

Enumerating an entire folder for JavaScript files, while looking for endpoints starting with /api/ and finally saving the results to results.html: python linkfinder.py -i 'Desktop/*.js' -r ^/api/ -o results.html
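
LinkFinder takes one input at a time, so batching over many JS files needs a loop; a sketch pairing it with subjs from the workflow above (urls.txt and endpoints.txt are placeholders):

# pipe subjs-discovered JS files into linkfinder and keep unique endpoints
subjs -i urls.txt | while read -r js; do python linkfinder.py -i "$js" -o cli; done | sort -u > endpoints.txt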

JSParser

Run python handler.py (JSParser starts its own Python web server), then visit http://localhost:8008.