
Web Application Scanning

Definition

A Web application scanner is an automated security program that searches for software vulnerabilities within Web applications. A Web application scanner first crawls the entire website, analyzing each file it finds in depth and mapping the entire website structure. After this discovery stage, it performs an automatic audit for common security vulnerabilities by launching a series of Web attacks. Web application scanners check for vulnerabilities on the Web server, proxy server, Web application server and even on other Web services. Unlike source code scanners, Web application scanners don't have access to the source code and therefore detect vulnerabilities by actually performing attacks.
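As a minimal illustration of the discovery stage described above, here is a Python sketch that crawls pages within a single site and collects the links and form input names it finds. It is only a sketch: the target URL is a placeholder, and a real scanner also handles JavaScript, sessions, redirects and much more.

    # Minimal sketch of a scanner's discovery (crawl) stage; stdlib only.
    # "http://example.test/" is a placeholder target.
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    from urllib.request import urlopen

    class LinkAndFormParser(HTMLParser):
        def __init__(self, base):
            super().__init__()
            self.base = base
            self.links = set()
            self.inputs = set()

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "a" and attrs.get("href"):
                self.links.add(urljoin(self.base, attrs["href"]))
            elif tag in ("input", "textarea", "select") and attrs.get("name"):
                self.inputs.add(attrs["name"])  # candidate injection point

    def crawl(start, limit=50):
        host = urlparse(start).netloc
        queue, seen, inputs = [start], set(), set()
        while queue and len(seen) < limit:
            url = queue.pop(0)
            if url in seen or urlparse(url).netloc != host:
                continue  # skip visited pages and off-site links
            seen.add(url)
            try:
                html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
            except OSError:
                continue
            parser = LinkAndFormParser(url)
            parser.feed(html)
            inputs |= parser.inputs
            queue.extend(parser.links - seen)
        return seen, inputs

    pages, params = crawl("http://example.test/")
    print(f"{len(pages)} pages crawled, {len(params)} input names found")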

In a vulnerability assessment we scan a web application to identify anything an attacker could potentially use against us (some assessments also look for compliance/configuration/standards issues, but the main goal in a VA is security). We can do this with a tool, a service, or a combination of the two.

A web application vulnerability assessment is very different from a general vulnerability assessment, where we focus on networks and hosts. In those we scan ports, connect to services, and use other techniques to gather information revealing the patch levels, configurations, and potential exposures of our infrastructure. Even "standard" web applications are essentially custom; we need to dig a little deeper, examine application function and logic, and use more customized assessments to determine whether a web application is vulnerable. With so much custom code and implementation, we have to rely less on known patch levels and configurations, and more on actually banging away at the application and testing attack pathways. Custom code means custom vulnerabilities.

 

User Benefits

Automated tools help the user make sure the whole website is properly crawled and that no input or parameter is left unchecked. Automated web vulnerability scanners also find a high percentage of the technical vulnerabilities and give you a very good overview of the website's structure and security status. Thanks to automated scanners, you gain a better overview and understanding of the target website, which eases the manual penetration testing process.

 

Business Impact

Compliance: Like it or not, security controls are mandated by government regulation, industry standards and requirements, and contractual agreements. We like to break compliance into three separate justifications: industry or regulatory mandated controls (PCI web application security requirements), non-mandated controls that avoid other compliance violations (data protection to avoid breach disclosure), and investments to reduce the costs of compliance (lower audit costs or TCO). The average organization uses all three factors to gauge web application security investments.
Fraud Reduction: Depending on your ability to accurately measure fraud, it can be a powerful driver and justification for security investments. In some cases you can directly measure fraud rates and show how they can be reduced with specific security investments. Keep in mind that you may not have the right infrastructure to detect and measure this fraud in the first place, which might itself provide sufficient justification. Penetration tests are also useful for justifying investments to reduce fraud: a test may show previously unknown avenues for exploitation that could be under active attack or open to future attack. You can use the penetration test to estimate potential fraud and map that to security controls to reduce losses to acceptable levels.
Cost Savings: As we mentioned in the compliance section, some web application security controls can reduce the cost of compliance (particularly audit costs), but there are additional opportunities for savings. Using web application security tools and processes through development and maintenance can reduce the need for and costs of manual processes or controls to remediate software defects and flaws, and may help create general efficiency improvements. We can also calculate cost savings from incident reduction, including incident response and recovery costs.
Availability: When dealing with web applications, we look at both total availability (uptime) and service availability (loss of part of the application due to attack or to repair a defect). For example, while it's somewhat rare to see a complete site outage due to a web application security issue (although it definitely happens), it's not unusual to see an outage of a payment system or other functionality. We also see cases where, due to active attack, a site needs to shut down some of its own services to protect users, even if the attack didn't break those services directly.
User Protection: While this isn't quantifiable in dollar terms, a major justification for investment in web security is to protect users from being compromised by their trust in you (yes, this has reputation implications, but we cannot precisely quantify them). Attackers frequently compromise trusted sites not to steal from that site, but to use it to attack the site's users. Even if you aren't concerned with fraud resulting in direct losses to your organization, it's a problem if your web application is used to defraud your users. Most organizations derive value or direct revenue from customer data, and there is an implied custodial duty to protect the information you have gathered and use.
Reputation Protection: While many models attempt to quantify a company's reputation and potential losses due to reputation damage, the reality is all those models are bunk: there is no accurate way to measure the potential losses associated with a successful attack. Despite surveys indicating users switch to competitors if you lose their information, or that you'll lose future business, real world reports show that user behavior rarely aligns with survey responses. For example, TJX was the largest retail breach notification in history, yet sales went up after the widely reported incident. But just because we can't quantify reputation damage doesn't mean it isn't an important factor in justifying web application security. Just ask yourself (or management) how important that application is to the public image of your organization, and how willing you or they are to accept the risk of losses ranging from defacement to lost customer information to downtime. User, investor, and partner trust in your company and services is complicated, and while it does not track directly with site security, trust remains important to the overall value of the business.
Breach Notification Costs: Aside from fraud, we also see direct losses associated with breach notifications (if sensitive information is involved). Ignore all the fluffy reputation/lost business/market value estimates and focus on the hard dollar costs of making a list, sending out notifications, and staffing the call center for customer inquiries. You might also factor in the cost of credit monitoring, if you’d offer that to your customers.
You will know which combination of these works best for you based on your own organizational needs and management priorities, but the key takeaway is that you likely need to mix quantitative and qualitative assessments to prioritize your investments. If you're dealing with private information (financial/retail/healthcare), compliance drivers and breach notification mixed with cost savings are your best options. For general web services, user protection and reputation, fraud reduction, and availability are likely at the top of your list. And let's not forget that many of these justifications are just as relevant for internal applications. Whatever your application, there is no shortage of business (as opposed to technical) reasons to invest in web application security.


Products supporting this technology

  • Acunetix
  • Qualys

To secure a website or a web application, one first has to understand the target application, how it works and the scope behind it. Ideally, the penetration tester should have some basic knowledge of programming and scripting languages, as well as of web security.

A website security audit usually consists of two steps. Most of the time, the first step is to launch an automated scan. Afterwards, depending on the results and the website's complexity, a manual penetration test follows. To properly complete both the automated and manual audits, a number of tools are available to simplify the process and make it efficient from the business point of view. As described under User Benefits above, automated scanners ensure the whole website is properly crawled, find a high percentage of the technical vulnerabilities, and give you a good overview of the target website, which eases the manual penetration testing that follows.

For the manual security audit, one should also have a number of tools to ease the process, such as tools to launch fuzzing tests, tools to edit HTTP requests and review HTTP responses, a proxy to analyse the traffic, and so on.
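As a trivial illustration of request editing (dedicated request editors and proxies do far more), the following Python sketch replays one request with a tampered parameter value and prints the status, headers, and the start of the body. The URL and parameter name are hypothetical.

    # Sketch: tamper with one request parameter and inspect the raw response.
    # "http://example.test/product" and "id" are hypothetical examples.
    import urllib.error
    import urllib.parse
    import urllib.request

    base = "http://example.test/product"
    params = {"id": "42'"}  # stray quote to probe the app's error handling
    url = base + "?" + urllib.parse.urlencode(params)
    req = urllib.request.Request(url, headers={"User-Agent": "manual-audit"})
    try:
        resp = urllib.request.urlopen(req, timeout=5)
        status = resp.status
    except urllib.error.HTTPError as err:
        resp, status = err, err.code  # 4xx/5xx responses still carry a body
    print("HTTP", status)
    for name, value in resp.headers.items():
        print(f"{name}: {value}")
    print(resp.read(2000).decode("utf-8", "replace"))  # look for errors or reflected input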

1. Manual Assessment of target website or web application

Securing a website or a web application with an automated web vulnerability scanner can be a straightforward and productive process, if all the necessary pre-scan tasks and procedures are taken care of. Depending on the size and complexity of the web application structure, launching an automated web security scan with typical 'out of the box' settings may lead to a number of false positives, wasted time, and frustration.

Even though web vulnerability scanning technology has improved in recent years, a good web vulnerability scanner sometimes needs to be pre-configured. Web vulnerability scanners are designed to scan a wide variety of complex custom-made web applications. Therefore, most of the time one needs to fine-tune the scanner to achieve the desired scan results.

Before launching any kind of automated security scanning process, a manual assessment of the target website needs to be performed. An automated scanner will scan every entry point in your website, including those you are likely to forget about, and test each one for a wide variety of vulnerabilities.
 
During the manual assessment, familiarize yourself with the website topology and architecture. Keep a record of the number of pages and files present in the website, and note the directory and file structure. If you have access to the website's root directory and source code, take your time to get to know it. If not, you can manually follow the links throughout the website. This process will help you understand the structure of the URLs. Also, take note of all the submission and other types of online forms available on the website.

During the pre-scan manual assessment, apart from getting used to the directory structure and the number of files, get to know which web technology was used to develop the target website, e.g. .NET or PHP. A number of vulnerabilities are specific to particular technologies. Other details you should look out for when manually assessing a website are:

  • Does the website require client certificates to be accessed?
  • Is the target website using a backend database?  If yes, what type of database is it?
  • Is the database server running on the same server as the website?
  • Are all the sensitive records being encrypted?
  • Are there any URL parameters or URL rewrite rules being used for site navigation?
  • When a non-existent URL is requested, does the web server return an HTTP status code 404, or does it return a custom error page and respond with an HTTP status code 200? (A quick way to check this is sketched after this list.)
  • Are there any particular input forms or one-time entry forms (such as CAPTCHA and single sign-on forms) that need user input during an automated scan?
  • Are there any password protected sections in the website?
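One of the checks above, the custom error page behaviour, is easy to verify by hand. A minimal Python sketch, with a placeholder target URL: request a path that cannot exist and inspect the status code.

    # Sketch: detect custom "404" behaviour by requesting a URL that
    # cannot exist. "http://example.test/" is a placeholder.
    import urllib.error
    import urllib.request
    import uuid

    probe = "http://example.test/" + uuid.uuid4().hex  # random, non-existent path
    try:
        resp = urllib.request.urlopen(probe, timeout=5)
        # HTTP 200 for a missing page means a custom error page is served;
        # the scanner must be taught to recognise it, or every broken link
        # will look like a valid page.
        print("Custom error page suspected: HTTP", resp.status)
    except urllib.error.HTTPError as err:
        print("Proper error status returned: HTTP", err.code)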

Once the manual assessment is complete, you should know enough about the target website to determine whether it was properly crawled by the automated black box scanner before a scan is launched. If the website is not crawled properly, i.e. the scanner is unable to reach some parts or parameters of the website, the whole point of "securing the website" is invalidated. The manual assessment will go a long way towards heading off invalid scans and false positives. It will also make you more familiar with the website itself, which is the best way to ensure you configure the automated scanner to cover and check the entire website.

2. Get familiar with the software

Although many automated web vulnerability scanners have a comfortable GUI, if you are new to web security you might get confused by the number of options and technical terms you'll encounter when trying to use a black box scanner. Do not give up, though; it is not rocket science. Commercial black box scanners are backed by professional support departments, so make sure you use them. You can also find a good amount of information and 'how to' material about the product you are using online. There are a good number of open source solutions as well, but most of the time you have to dig deep and paddle on your own in rough waters to find support for them. Many commercial software companies also use social networks to make it easier for you to learn more about their product, how it works, and best practices for using it.
 
3. Configuring the automated black box scanner

Once you're familiar with the automated black box scanner you will be using, and with the target website or web application you will be scanning, it is time to get down to business and get your hands dirty. To start off, one must first configure the scanner. The most crucial things to configure before launching any automated process are:

  • Custom 404 pages – If the server returns HTTP status code 200 when a non-existent URL is requested.
  • URL rewrite rules – If the website is using search engine friendly URLs, configure these rules to help the scanner understand the website structure so it can crawl it properly.
  • Login sequences – If parts of the website are password protected and you would like the scanner to scan them, record a login sequence to train the scanner to automatically log in to the password protected section, crawl it, and scan it (the sketch after this list shows the idea).
  • Pages which need manual intervention – If the website contains pages which require the user to enter a one-time value when accessed, such as CAPTCHA, mark them as pages which need manual intervention, so that during the crawling process the scanner will automatically prompt you to enter such values.
  • Submission forms – If you would like specific details to be used each time a particular form is crawled by the scanner, configure the scanner with those details. Nowadays scanners make this easy by populating the fields automatically.
  • Scanner filters – Use the scanner filters to specify a file, file type, or directory which you would like to be excluded from the scan. You can also exclude specific parameters.
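To illustrate what a recorded login sequence automates, here is a minimal sketch using the third-party requests library: authenticate once, keep the session cookie, and reuse it for subsequent requests. The URL, form field names, and credentials are hypothetical and depend entirely on the target application.

    # Sketch of what a login sequence automates: log in once, keep the
    # session cookie, then fetch protected pages with it.
    # Assumes the third-party "requests" library; the URL, field names,
    # and credentials below are hypothetical.
    import requests

    session = requests.Session()
    session.post(
        "http://example.test/login",
        data={"username": "audit_user", "password": "audit_pass"},
        timeout=5,
    )
    # The session now carries the authentication cookie automatically.
    resp = session.get("http://example.test/account/settings", timeout=5)
    print(resp.status_code, "authenticated" if resp.ok else "login failed")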

4. Protect your data

From time to time I notice people complaining that web vulnerability scanners are too invasive, so they opt not to run them against their website. This is a bad presumption and the wrong conclusion, because if an automated web vulnerability scanner can break your website, imagine what a malicious user can do. The solution is to start securing your website and make sure it can properly handle an automated scan.
To start off with, automated web vulnerability scanners tend to perform invasive scans against the target website, since they try to input data the website was not designed to handle. If an automated vulnerability scanner is not invasive against a target website, then it is not really checking for all vulnerabilities and is not doing an in-depth security check. Such security checks can and will lead to a number of unwanted results: deleted database records, a changed blog theme, garbage posts on your forum, a flood of emails in your mailbox and, even worse, a non-functional website. This is expected: just as a malicious user would, the automated black box scanner tries its best to find security holes in your website and looks for ways and means to gain unauthorized access.

Therefore it is imperative that such scans are not launched against live servers. Ideally a replica of the live environment should be created in a test lab, so that if something goes wrong only the replica is affected. If a test lab is not available, make sure you have the latest backups, so that if something goes wrong the live website can be restored and functional again in the shortest time possible.

5. Launching the scan

Once the manual website analysis is complete and the black box scanner is configured, we are ready to launch the automated scan. If time permits, first run a crawl of the website, so that once the crawl is ready you can confirm that all the files and input parameters in the website have been crawled by the scanner. Once you confirm that everything is covered, you can safely proceed with the automated scan.
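A simple way to confirm coverage is to diff the URL inventory you compiled during the manual assessment against the list of URLs the scanner crawled. A sketch, assuming both lists were exported to plain text files with one URL per line (the file names are hypothetical):

    # Sketch: compare the manual URL inventory against the scanner's
    # crawl output. Assumes plain text exports, one URL per line;
    # the file names are hypothetical.
    def load(path):
        with open(path) as fh:
            return {line.strip() for line in fh if line.strip()}

    manual = load("manual_inventory.txt")
    crawled = load("scanner_crawl.txt")

    missed = manual - crawled
    if missed:
        print("Not crawled by the scanner:")
        for url in sorted(missed):
            print(" ", url)
    else:
        print("All manually inventoried URLs were crawled.")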

6. After the scan – Analysing the results

Once the automated security scan is complete, you already have a good overview of your website's security level. Look into the details of every reported vulnerability and make sure you have all the information required to fix it. A typical black box scanner will report a good amount of detail about each discovered vulnerability, such as the HTTP request and response headers, the HTML response, a description of the vulnerability, and a number of web links from which you can learn more about the reported vulnerability and how to fix it.

Analysing the automated scan results in detail will also help you better understand the way the web application works and how the input parameters are used, giving you an idea of what type of tests to launch in the manual penetration test and which parameters to target.
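As a small illustration of targeting a single parameter, the sketch below sends a few classic probe payloads and flags responses that differ from the baseline; anything anomalous deserves a closer manual look. The URL, parameter name, and payloads are just examples, and real manual testing goes far beyond this.

    # Sketch: probe one parameter with classic payloads and flag responses
    # that differ from the baseline. URL and parameter are examples only.
    import urllib.error
    import urllib.parse
    import urllib.request

    base, param = "http://example.test/search", "q"
    payloads = ["test", "test'", 'test"', "<script>alert(1)</script>",
                "../../etc/passwd"]

    def fetch(value):
        url = base + "?" + urllib.parse.urlencode({param: value})
        try:
            resp = urllib.request.urlopen(url, timeout=5)
            return resp.status, len(resp.read())
        except urllib.error.HTTPError as err:
            return err.code, len(err.read())

    baseline = fetch(payloads[0])
    for payload in payloads[1:]:
        status, size = fetch(payload)
        if (status, size) != baseline:
            print(f"Anomaly for {payload!r}: HTTP {status}, {size} bytes")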

7. Manual penetration test

There are a number of advantages to using a commercial black box security scanner. Apart from benefitting from professional support and official documentation, it also includes a number of advanced manual penetration testing tools. Having all the web penetration testing tools available in a centralized web security solution has the advantage that they all support importing and exporting data from one to the other, which you will definitely need. It also helps with manually analyzing the scan results: you can export the automated scan results to the manual tools and look further into the issues.

Like the automated scan, the manual penetration test is a very important step in securing a website. If the advanced manual penetration testing tools are used properly, they can ease the manual penetration test process and help you be more efficient. Manual penetration testing helps you audit your website and check for logical vulnerabilities; even though automated scans can hint at such vulnerabilities and help you pinpoint them, most can only be discovered and verified manually.

As we can see from the above, web security is very different from network security. As a concept, network security can be simplified to "allow the good guys in and block the bad guys." Web security is much more than that. Do not be discouraged, though: there are tools available which automate most of the job for you, assist you, and make the whole process easier and faster.
