Friday, February 16th, 2007

Crawling Ajax Applications

Category: Articles, JavaScript, Ruby

Shreeraj Shah has published a paper on Crawling Ajax-driven Web 2.0 Applications.

Crawling web applications is one of the key phases of automated web application scanning. The objective of crawling is to collect all possible resources from the server in order to automate vulnerability detection on each of these resources. A resource that is overlooked during this discovery phase can mean a failure to detect some vulnerabilities.

The introduction of Ajax throws up new challenges for the crawling engine, and new ways of handling the crawling process are required as a result. The objective of this paper is to take a practical approach to this issue using rbNarcissus, Watir and Ruby.

It really shows how powerful tools like Watir are.
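The gap the paper targets can be seen with a toy example (not from the paper itself): a static crawler that only scans anchor tags never sees resources requested at runtime via XMLHttpRequest, which is why driving a real browser with a tool like Watir matters. A minimal Ruby sketch, with a hypothetical `extract_links` helper and made-up URLs:

```ruby
# Naive static crawler: collects only href targets from anchor tags.
# Resources fetched at runtime via XMLHttpRequest are invisible to it --
# exactly the blind spot a browser-driven crawler (e.g. Watir) closes.
def extract_links(html)
  html.scan(/<a\s+[^>]*href="([^"]+)"/i).flatten.uniq
end

page = <<~HTML
  <html><body>
    <a href="/login">Login</a>
    <a href="/products">Products</a>
    <script>
      // Invisible to the static crawler:
      var xhr = new XMLHttpRequest();
      xhr.open("GET", "/ajax/getProducts?id=1", true);
      xhr.send(null);
    </script>
  </body></html>
HTML

puts extract_links(page).inspect
# => ["/login", "/products"]  -- the Ajax endpoint is never discovered
```

The paper's approach closes this gap by executing the page's JavaScript in a real browser (via Watir) and analyzing the script source (via rbNarcissus) so that dynamically requested resources are enumerated too.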

Posted by Dion Almaer at 7:38 am
3 Comments




Do search engines crawl through Ajax web applications?

Comment by PohEe.com — February 16, 2007

I hesitated a lot to implement some Ajax modules in my website Ajaxlines because of this problem.

Comment by Ajaxlines — February 16, 2007

http://simile.mit.edu/repository/crowbar/trunk/README.txt

Comment by carmen — February 22, 2007
