Kayran is capable of ingesting HAR files to perform deeper crawling activity against your web assets.
HAR – short for HTTP Archive – is a JSON-based format that records the HTTP traffic exchanged between a web browser and a website. Supplying one “Assists” Kayran in performing a more efficient Scan.
To use it, upload it to your Storage and then select it before initiating a new scan.
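To illustrate what a HAR file contains, here is a minimal sketch in Python: the HAR text below is a hypothetical example (a real file is exported from your browser’s developer tools), and the code shows how each recorded request URL can serve as a crawl seed.

```python
import json

# Minimal hypothetical HAR document; a real one is exported from the
# browser's DevTools and contains far more detail per entry.
har_text = """
{
  "log": {
    "version": "1.2",
    "creator": {"name": "devtools", "version": "1.0"},
    "entries": [
      {
        "startedDateTime": "2023-01-01T00:00:00.000Z",
        "time": 12,
        "request": {"method": "GET",
                    "url": "https://example.com/login",
                    "headers": [], "queryString": []},
        "response": {"status": 200, "headers": [], "content": {}}
      }
    ]
  }
}
"""

har = json.loads(har_text)
# Every URL the browser actually visited becomes a known page to scan.
urls = [entry["request"]["url"] for entry in har["log"]["entries"]]
print(urls)
```

Because the HAR records real browser traffic, it can expose pages and requests a crawler alone would never discover.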
Using Enumeration – when enabled, Kayran will begin “Brute Forcing” the target, “Inserting” candidate Parameters and Paths for testing. This places a heavy load on the server.
Pay attention – enabling Enumeration significantly extends the Scan’s duration.
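A rough sketch of why enumeration is expensive (the wordlists and URL scheme here are hypothetical, not Kayran’s actual internals): each extra path or parameter in the wordlist multiplies the number of requests sent.

```python
# Hypothetical sketch of path/parameter enumeration: combine a base URL
# with wordlists of common paths and parameter names.
def enumerate_candidates(base_url, paths, params):
    candidates = []
    for path in paths:
        candidates.append(f"{base_url}/{path}")
        for param in params:
            # probe each parameter with a harmless marker value
            candidates.append(f"{base_url}/{path}?{param}=test")
    return candidates

urls = enumerate_candidates("https://example.com",
                            ["admin", "backup"], ["id", "debug"])
# 2 paths * (1 bare request + 2 parameter probes) = 6 requests
print(len(urls))
```

Real wordlists contain thousands of entries, which is why enumeration both stresses the server and stretches the scan’s duration.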
The “Level Deep” gauge determines the depth of the Scan in the website – in other words, how many levels of files and directories related to it will be scanned.
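The idea of a depth limit can be sketched as a breadth-first crawl that stops following links past a given level (the link graph below is a made-up example, not Kayran’s crawler):

```python
from collections import deque

# Toy link graph standing in for a site's pages (hypothetical).
LINKS = {
    "/": ["/docs/", "/about"],
    "/docs/": ["/docs/install", "/docs/usage"],
    "/docs/install": ["/docs/install/linux"],
}

def crawl(start, max_depth):
    """Breadth-first crawl that stops descending past max_depth."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        page, depth = queue.popleft()
        if depth == max_depth:
            continue  # do not follow links any deeper
        for link in LINKS.get(page, []):
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return seen

print(sorted(crawl("/", 1)))  # only "/" and pages linked directly from it
print(sorted(crawl("/", 2)))  # one level deeper into /docs/
```

A higher depth finds more pages but makes the scan correspondingly longer.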
By using the “Single Scan” option, only the given page will be scanned, without taking other related pages into account; doing so disables the “Crawler”.
A Robots.txt file tells search engine crawlers which URLs the crawler can (or can’t) access on your site. To use it, upload it to your Storage and then select it before initiating a new scan.
A Sitemap is an .xml file that lists the URLs for a site. To use it, upload it to your Storage and then select it before initiating a new scan.
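A minimal sitemap in the standard sitemaps.org format looks like the following (the URLs are hypothetical); each `<loc>` entry gives the scanner a page to visit without having to discover it by crawling:

```python
import xml.etree.ElementTree as ET

# Minimal hypothetical sitemap.xml in the sitemaps.org namespace.
sitemap_xml = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/contact</loc></url>
</urlset>"""

ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(sitemap_xml)
# Each <loc> becomes a seed URL for the scan.
urls = [loc.text for loc in root.findall("sm:url/sm:loc", ns)]
print(urls)
```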
The ‘Scan Only CVEs’ option sets Kayran to search only for known CVEs in the scanned target.
Check Stored Injection – enabling it allows Kayran to search for possible Stored Injections across pages, where input submitted on one page resurfaces on another.
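The general technique behind stored-injection detection can be sketched as follows: submit a unique marker through one page, then inspect other pages to see whether it reappears. Everything below (the marker, the fake site, the helper functions) is a hypothetical illustration, not Kayran’s implementation.

```python
# Unique marker unlikely to occur naturally in page content.
MARKER = "kayran-probe-7f3a"

# Fake site state: a comment form whose submissions appear on /guestbook.
site = {"/guestbook": ""}

def submit_comment(text):
    # The "server" stores the input verbatim, without sanitising it.
    site["/guestbook"] += text

def find_stored_injection(pages, marker):
    """Return the pages where an earlier injected marker reappears."""
    return [path for path, body in pages.items() if marker in body]

# Inject via the form, then re-check every page for the marker.
submit_comment(f"<script>{MARKER}</script>")
hits = find_stored_injection(site, MARKER)
print(hits)
```

If the marker shows up unescaped on another page, the input was stored and re-rendered, which is the precondition for a stored (persistent) injection.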