27 October 2014

Q&A with Brian O’Kelley (Appnexus)



We often talk about bidding and we think of a single ad space. What do you think about multi-bids? Is there a way to implement it today? How can we improve things?

There are two issues here: one is multi-bids, the other is multi-tags. From the very beginning, we built AppNexus to support both multi-tags and multi-bids. We have always encouraged the industry to work this way. Even at Right Media, I was pushing for this internally. The reason it is not doable yet is that it is complicated to implement, and most actors have not put in the effort to get to that level.

What would be the advantage for the industry of implementing it?

Quantity: every time you do not multi-bid, you are losing traffic. You are losing revenue. Inefficiency hurts everybody. Anything that makes the system more efficient benefits everyone. AppNexus has supported it for a long time, but more people need to talk about it.

How do you define AppNexus? Are you a DSP, an SSP, an ad exchange? It is hard to tell from the outside.

Our technology responds to the different business models present in the market. We are a technology stack company that allows multiple players in the industry to operate their business. We are unique. We are not involved in the media business. We are the technology inside the system, just like an Intel chip inside a PC or a Mac.

Today there are 30 providers on the market. Do you think that is too many, given that you have to connect to all those different providers?

We are able to keep up with the evolution of the market because we are a technology company. It is our mission to keep our technology up to speed with new technologies and inventories. For instance, Facebook will come up with something new. Twitter is launching its ad exchange. We spent over 100 million dollars building our technology, and that is why we have more than 250 engineers working together to take care of that part of the business for the industry.

Why is the OpenRTB protocol not working?

The issue is that everyone implements it differently. Let's take a simple example: categorizing an impression. If I send you an impression saying it is news, and someone else sends you the same impression and says it is entertainment, what do you do? So in the end, it is not about the protocol. I may call something fraud while someone else might not consider it fraud. A protocol does not mean that everyone will be doing RTB the same way. The truth is that a protocol is a language interpreted by your computers, but the way it is used will change from player to player.
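As an illustration of that categorization problem, here are two OpenRTB-style bid requests for the same placement, sketched as Python dicts. The cat field carries IAB content categories (IAB12 is News, IAB1 is Arts & Entertainment); the values and the comparison are illustrative assumptions, not AppNexus code:

# Two sellers describe the same impression with different IAB content
# categories -- both requests are valid OpenRTB, yet they disagree.
request_from_seller_a = {
    "id": "auction-123",
    "site": {"domain": "example-news-site.com", "cat": ["IAB12"]},  # News
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],
}

request_from_seller_b = {
    "id": "auction-456",
    "site": {"domain": "example-news-site.com", "cat": ["IAB1"]},  # Arts & Entertainment
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],
}

# The protocol is unambiguous; the classification is not.
print(request_from_seller_a["site"]["cat"] == request_from_seller_b["site"]["cat"])  # False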

For instance, on mobile, where OpenRTB is more popular, everyone does it a little differently. It is really more about what is in the protocol and what is in the request. We are going to see standardisation, especially as RTB becomes a commodity. As a matter of fact, AppNexus is a commodity business. Anyone playing in this industry who says they are not in a commodity business is dead. The only player that can compete with AppNexus is Google.

For instance, Apple could afford to build its own chips because it is one of the most valuable companies in the world, but it is not doing it. Same for Microsoft, which could decide to do its own operating system and hardware. But then you have to be as big as Microsoft. Think about that in our space, where companies are 100 times smaller: I am going to build a sales team, proprietary data, proprietary technology, proprietary relationships with sellers. It is an insane strategy. Only Google can do that.

When we meet players, we ask this question: can you really keep up with what Google is building? Can you keep up with the 50 billion dollars Google is printing with search? If not, you need to partner with AppNexus. That is why we are not going to have many players in the industry a couple of years from now. The market will consolidate.

Google has strong synergy between its DSP and its adserver. What is your plan on the adserving side of the business? Is it an advantage in the business, in your opinion?

There is a different story on the buy side and the sell side.

Where we are very strong as an adserving company is in powering intermediaries. We do not have a great agency adserver like Atlas or DFA, and we did not focus on creating a great adserver for publishers like DFP. What we have built are tools for the players in the middle: trading desks, DSPs, SSPs, and networks. For instance, Hi-Media and Wedads are using AppNexus as their primary adserver.

We deeply believe that your adserver has to be integrated with your exchange technology.

Today we are not equivalent to DFA, for instance, as we lack some features, but we will develop those for our clients if they need them. As a result, we will compete more and more in those markets.

We almost do it accidentally: you can do it in the platform, but it is not easy for now. We will make it easy. It is not because we are trying to be an agency adserver; it is because agencies are questioning why they should use two different technologies.

We are discussing with Atlas and MediaMind to see how we can better integrate their adservers with our technology for ease of use for our clients. In the end, many clients do not need both. That's the future.

This is where Google has been very successful: integrating Invite with DFA and Admeld with DFP.

Google is weak at answering the needs of the players in the middle, whereas AppNexus is strong in that area. For instance, if you are a publisher, DFP is the market leader, but if you are a network, then AppNexus is the market leader. Orange, for instance, is both a network and a publisher. Which platform makes more sense? From a publisher standpoint DFP is perfect, but when you integrate the network it does not answer the need. Another example is audience extension: why can't I buy more reach for a specific campaign? It is harder when you use DFP.

To sum up, we are strong with intermediary players, and that is an asset we will keep investing in.

During the Summit, you did not talk about video. How do you see the video ecosystem? And how do you see AppNexus's position in that market?

The answer is focus. While we are the largest ad technology company, we are relatively small compared to Google. And while video is a 5-billion-dollar industry globally today, most of that is hand-sold at high CPMs to TV buyers. The non-guaranteed volume aside from YouTube is very small.

Look at Adap.tv and TubeMogul: it is actually a very thin market, and that is why there are not many players.

When we looked at mobile versus video, it turned out that mobile is three times bigger than video.

Last year we expanded to Europe. This year we are investing in mobile and starting in Asia-Pacific, and next year we will focus on Asia-Pacific and video.

What is your approach to tracking without cookies? What is at stake for AppNexus?

Today, we don't fingerprint. We abide by users' cookie settings because those are the best indication of their preferences. As a result, we might not be as competitive as others, but we are very concerned about users' privacy, because fingerprinting gives users no control over how their information is being used. I have three takes on this:

1. I agree with what Mike said on stage today: "We have a strong obligation to protect users' privacy." As an industry, we have not been very clear about that.

2. I think the debate we are having in the US about Do Not Track is an indication of that. It is a sign that the industry is trying not to address the issue, pushing Do Not Track to remain an unused preference, like clearing your cookies.

3. I think users should know exactly what data is being tracked.

I think we can separate this issue into three cases:

a. Publisher: what information is used when I visit a website. The publisher can easily convey that message as I visit the site, and give me the opportunity to know and accept when my information is used.

b. Advertiser: if I visited an advertiser's site, as a user I can accept receiving an ad from that site because I actually visited it. I think it has to be disclosed, and there should be control over it, but it is fair to the user because the user engaged with the site initially. It is a reasonable user experience.

c. Third party: what I do not like, and where I don't feel comfortable, is when a third party is involved. For instance, I go to your site and I see an ad for FIAT because I visited a car site before. I understand that it works, but it raises questions about who is tracking me and how.

For instance, I shop for a baby stroller for a baby shower, and I get targeted with banners selling baby products. That raises a lot of questions. Who is tracking me? It is not the advertiser, and it is not the publisher. Who is it then?

I think there is an easy solution: let's stop creating profiles, both from a policy and a technology perspective. Google does it. Others do it. AppNexus, for instance, does not create profiles and focuses on first-party data. I think it would not impact the business at all. Maybe some companies would go out of business, but publishers would still use their data and advertisers would use theirs. And in the end, users would know there are three possible reasons an ad is shown:

1. Publisher
2. Advertiser
3. No data was used

That, to me, is a defensible solution to stand behind in front of a government. We could say: "We are doing something that the consumer controls." For instance, if you do not want ads from a given advertiser, you opt out.

I think this is the discussion to have, instead of trying to replace cookie technology, which will not solve the underlying problem.

We will use a technology-based ID if we have to, but I do not want to. I would rather have a policy-based solution, supported by the browsers, that allows us to be a great advertising company without identifying users.

We have been talking to browser vendors about a technology solution, but we are not sure it will be adopted.

When we look at attribution, it is key to differentiate between users who have seen a banner and those who have not. When will AppNexus integrate viewability into its platform?

I am not completely confident that our viewability measurement works. We have tested everything in the market, and we are not sure that anyone can conclusively say whether an ad was seen or not seen. Once we can say that about our technology, we will include it in our bidding algorithm. We will introduce new pricing models that will, for instance, charge per viewed impression as opposed to per impression. We are also thinking about rough viewability metrics. It is all about making sure the technology works. I expect this from us and from others to come. I do not think that a world of unseen ads makes any sense. It will take some time because of technology and practices. We keep testing every solution that comes out because we want to find technology that works. Part of it is publishers participating: if a publisher says "I will give you a clear yes or no," that will make our life easier. Trying to guess is not the solution. Probably not until next year.

I totally agree with you. Attribution only makes sense on seen impressions.

One last question: is there a main innovation you would like to talk about?

I think TANGO and mobile are really our main innovations, and we will keep discussing them at upcoming summits.

Pierre Berendes

Source: http://www.ad-exchange.fr/qa-with-brian-okelley-appnexus-you-can-only-see-appnexus-as-what-is-inside-other-companies-business-model-like-intel-for-the-chips-4131/
9 October 2014

Top 11 Web Application Penetration Testing Tools





1. Arachni

Arachni is a feature-full, modular, high-performance Ruby framework aimed towards helping penetration testers and administrators evaluate the security of web applications.

Arachni is smart: it trains itself by learning from the HTTP responses it receives during the audit process. Unlike other scanners, Arachni takes into account the dynamic nature of web applications and can detect changes caused while travelling through the paths of a web application's cyclomatic complexity. This way, attack/input vectors that would otherwise be undetectable by non-humans are seamlessly handled by Arachni.

Finally, Arachni yields great performance due to its asynchronous HTTP model (courtesy of Typhoeus).
Thus, you’ll only be limited by the responsiveness of the server under audit and your available bandwidth.

Note: Despite the fact that Arachni is mostly targeted towards web application security, it can easily be used for general-purpose scraping, data mining, etc., with the addition of custom modules.

Sounds cool, right?

Features:

Helper audit methods:
For forms, links and cookies auditing.
A wide range of injection strings/input combinations.
Writing RFI, SQL injection, XSS etc modules is a matter of minutes if not seconds.

Currently available modules:
Audit:
SQL injection
Blind SQL injection using rDiff analysis
Blind SQL injection using timing attacks
CSRF detection
Code injection (PHP, Ruby, Python, JSP, ASP.NET)
Blind code injection using timing attacks (PHP, Ruby, Python, JSP, ASP.NET)
LDAP injection
Path traversal
Response splitting
OS command injection (*nix, Windows)
Blind OS command injection using timing attacks (*nix, Windows)
Remote file inclusion
Unvalidated redirects
XPath injection
Path XSS
URI XSS
XSS
XSS in event attributes of HTML elements
XSS in HTML tags
XSS in HTML ‘script’ tags

Recon:
Allowed HTTP methods
Back-up files
Common directories
Common files
HTTP PUT
Insufficient Transport Layer Protection for password forms
WebDAV detection
HTTP TRACE detection
Credit Card number disclosure
CVS/SVN user disclosure
Private IP address disclosure
Common backdoors
.htaccess LIMIT misconfiguration
Interesting responses
HTML object grepper
E-mail address disclosure
US Social Security Number disclosure
Forceful directory listing

Download Here | Website here

Free, powerful, and updated monthly!

2. OWASP Zed Attack Proxy Project


The Zed Attack Proxy (ZAP) is an easy to use integrated penetration testing tool for finding vulnerabilities in web applications.

It is designed to be used by people with a wide range of security experience and as such is ideal for developers and functional testers who are new to penetration testing.

ZAP provides automated scanners as well as a set of tools that allow you to find security vulnerabilities manually.

Some of ZAP’s features:
Intercepting Proxy
Automated scanner
Passive scanner
Brute Force scanner
Spider
Fuzzer
Port scanner
Dynamic SSL certificates
API
Beanshell integration

Some of ZAP’s characteristics:
Easy to install (just requires Java 1.6)
Ease of use a priority
Comprehensive help pages
Fully internationalized
Under active development
Open source
Free (no paid for ‘Pro’ version)
Cross platform
Involvement actively encouraged
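Since ZAP exposes an API (listed above), a scan can also be driven programmatically. Here is a minimal sketch using the community python-owasp-zap-v2.4 client, assuming a ZAP instance is already listening on 127.0.0.1:8080 and that http://target.example is a site you are authorized to test (depending on your ZAP configuration you may also need to pass an apikey to the constructor):

import time
from zapv2 import ZAPv2  # pip install python-owasp-zap-v2.4

target = "http://target.example"  # hypothetical target you are allowed to scan
zap = ZAPv2(proxies={"http": "http://127.0.0.1:8080",
                     "https": "http://127.0.0.1:8080"})

# Spider the target so ZAP learns the site structure.
scan_id = zap.spider.scan(target)
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)

# Run the active scanner against everything the spider found.
scan_id = zap.ascan.scan(target)
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

# Print the alerts (potential vulnerabilities) ZAP raised.
for alert in zap.core.alerts(baseurl=target):
    print(alert["risk"], alert["alert"], alert["url"])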

Download Here | Website here

3. w3af

w3af is a Web Application Attack and Audit Framework. The project's goal is to create a framework to find and exploit web application vulnerabilities that is easy to use and extend. To read the project's short- and long-term objectives, click the Project Objectives item in the main menu. The project is currently hosted at SourceForge; for further information, you may also want to visit the w3af SourceForge project page.
The guys from BackTrack (it has connections with Metasploit) included this awesome tool in their latest release.

This is only a small list of plugins that are available in w3af, you should really check out this tool.

Audit:
xsrf
htaccessMethods
sqli
sslCertificate
fileUpload
mxInjection
generic
localFileInclude
unSSL
xpath
osCommanding
remoteFileInclude
dav
ssi
eval
buffOverflow
xss
xst
blindSqli
formatString
preg_replace
globalRedirect
LDAPi
phishingVector
responseSplitting

Download here | Project here

4. Vega

Vega is an open source platform to test the security of web applications. Vega can help you find and validate SQL Injections, Cross-Site Scripting (XSS), inadvertently disclosed sensitive information, and other vulnerabilities. It is written in Java, GUI based, and runs on Linux, OS X, and Windows.

Vega includes an automated scanner for quick tests and an intercepting proxy for tactical inspection. Vega can be extended using a powerful API in the language of the web: Javascript.

Vega was developed by Subgraph in Montreal.

Modules:
Cross Site Scripting (XSS)
SQL Injection
Directory Traversal
URL Injection
Error Detection
File Uploads
Sensitive Data Discovery

Core:
Automated Crawler and Vulnerability Scanner
Consistent UI
Website Crawler
Intercepting Proxy
SSL MITM
Content Analysis
Extensibility through a Powerful Javascript Module API
Customizable alerts
Database and Shared Data Model

Download here | Website here

5. Acunetix

You have heard about this program many times. Is it good? Well, you can download the free edition and test it.
Acunetix WVS automatically checks your web applications for SQL Injection, XSS & other web vulnerabilities.

HTTP Editor – Construct HTTP/HTTPS requests and analyze the web server response.
HTTP Sniffer – Intercept, log and modify all HTTP/HTTPS traffic and reveal all data sent by a web application.
HTTP Fuzzer – Perform sophisticated fuzzing tests to test web applications' input validation and handling of unexpected and invalid random data. Test thousands of input parameters with the easy-to-use rule builder of the HTTP Fuzzer. Tests that would have taken days to perform manually can now be done in minutes.
WVS Scripting tool – Script your own custom web vulnerability attacks. Scripting SDK documentation is available from the Acunetix website.
Blind SQL Injector – An automated database data extraction tool that is ideal for penetration testers who wish to make further tests manually.

Download here | Website here

This tool has a free version (the above link) but also an advanced version (paid).

6. Skipfish

Skipfish is an active web application security reconnaissance tool. It prepares an interactive sitemap for the targeted site by carrying out a recursive crawl and dictionary-based probes. The resulting map is then annotated with the output from a number of active (but hopefully non-disruptive) security checks. The final report generated by the tool is meant to serve as a foundation for professional web application security assessments.

High risk flaws (potentially leading to system compromise):
Server-side SQL / PHP injection (including blind vectors, numerical parameters).
Explicit SQL-like syntax in GET or POST parameters.
Server-side shell command injection (including blind vectors).
Server-side XML / XPath injection (including blind vectors).
Format string vulnerabilities.
Integer overflow vulnerabilities.
Locations accepting HTTP PUT.
Medium risk flaws (potentially leading to data compromise):

Stored and reflected XSS vectors in document body (minimal JS XSS support present).
Stored and reflected XSS vectors via HTTP redirects.
Stored and reflected XSS vectors via HTTP header splitting.
Directory traversal / file inclusion (including constrained vectors).
Assorted file POIs (server-side sources, configs, etc).
Attacker-supplied script and CSS inclusion vectors (stored and reflected).
External untrusted script and CSS inclusion vectors.
Mixed content problems on script and CSS resources (optional).
Password forms submitting from or to non-SSL pages (optional).
Incorrect or missing MIME types on renderables.
Generic MIME types on renderables.
Incorrect or missing charsets on renderables.
Conflicting MIME / charset info on renderables.
Bad caching directives on cookie setting responses.

Download here | Project here

7. Websecurify

Websecurify is an integrated web security testing environment, which can be used to identify web vulnerabilities by using advanced browser automation, discovery and fuzzing technologies. The platform is designed to perform automated as well as manual vulnerability tests and it is constantly improved and fine-tuned by a team of world class web application security penetration testers and the feedback from an active open source community.

The built-in vulnerability scanner and analyzation engine are capable of automatically detecting many types of web application vulnerabilities as you proceed with the penetration test. The list of automatically detected vulnerabilities include:

SQL Injection
Local and Remote File Include
Cross-site Scripting
Cross-site Request Forgery
Information Disclosure Problems
Session Security Problems
many others including all categories in the OWASP TOP 10

Download here | Project here

8. Burp

Burp Suite is an integrated platform for performing security testing of web applications. Its various tools work seamlessly together to support the entire testing process, from initial mapping and analysis of an application’s attack surface, through to finding and exploiting security vulnerabilities.

Burp Suite contains the following key components:

An intercepting proxy, which lets you inspect and modify traffic between your browser and the target application.
An application-aware spider, for crawling content and functionality.
An advanced web application scanner, for automating the detection of numerous types of vulnerability.
An intruder tool, for performing powerful customized attacks to find and exploit unusual vulnerabilities.
A repeater tool, for manipulating and resending individual requests.
A sequencer tool, for testing the randomness of session tokens.
The ability to save your work and resume working later.
Extensibility, allowing you to easily write your own plugins, to perform complex and highly customized tasks within Burp.

Download here | Website here

Free and paid editions are available.

9. Netsparker

Netsparker will try lots of different things to confirm identified issues. If it can’t confirm it and if it requires manual inspection, it’ll inform you about a potential issue generally prefixed as [Possible], but if it’s confirmed, that’s it. It’s a vulnerability. You can trust it.

Netsparker confirms vulnerabilities by exploiting them in a safe manner. If a vulnerability is successfully exploited it can’t be a false-positive. Exploitation is carried out in a non-destructive way.

SQL Injection
XSS (Cross-site Scripting)
XSS (Cross-site Scripting) via Remote File Injection
XSS (Cross-site Scripting) in URLs
Local File Inclusions & Arbitrary File Reading
Remote File Inclusions
Remote Code Injection / Evaluation
OS Level Command Injection
CRLF / HTTP Header Injection / Response Splitting
Find Backup Files
Crossdomain.xml Analysis
Finds and Analyses Potential Issues in Robots.txt
Finds and Analyses Google Sitemap Files
Detect TRACE / TRACK Method Support
Detect ASP.NET Debugging
Detect ASP.NET Trace
Checks for CVS, GIT and SVN Information and Source Code Disclosure Issues
Finds PHPInfo() pages and PHPInfo() disclosure in other pages
Finds Apache Server-Status and Apache Server-Info pages
Find Hidden Resources
Basic Authentication over HTTP
Password Transmitted over HTTP
Password Form Served over HTTP
Source Code Disclosure
Auto Complete Enabled
ASP.NET ViewState Analysis
ViewState is not Signed
ViewState is not Encrypted
E-mail Address Disclosure
Internal IP Disclosure
Cookies are not marked as Secure
Cookies are not marked as HTTPOnly
Directory Listing
Stack Trace Disclosure
Version Disclosure
Access Denied Resources
Internal Path Disclosure
Programming Error Messages
Database Error Messages

Request a trial here | Website here

10. WebSurgery

WebSurgery is a suite of tools for security testing of web applications. It was designed for security auditors to help them with the web application planning and exploitation. Currently, it uses an efficient, fast and stable Web Crawler, File/Dir Bruteforcer and Fuzzer for advanced exploitation of known and unusual vulnerabilities such as SQL Injections, Cross site scripting (XSS), brute-force for login forms, identification of firewall-filtered rules etc.

Download here | Website here

11. IBM Rational AppScan

Rational AppScan comes in 8 versions. Yes, 8: Source, Standard, Enterprise, Reporting Console, Build, Tester Express, OnDemand. Don't think that because it's the last on my list it's the worst web app scanner. (Reporting Console is just a reporting console, so that really makes it only 7 versions.)

Here is what they are saying:


IBM Rational AppScan is an industry leading web application security testing tool that scans and tests for all common web application vulnerabilities – including those identified in the WASC threat classification – such as SQL-Injection, Cross-site Scripting and Buffer Overflow.
Provides broad application coverage, including Web 2.0/Ajax applications
Generates advanced remediation capabilities including a comprehensive task list to ease vulnerability remediation
Simplifies security testing for non-security professionals by building scanning intelligence directly into the application
Features over 40 out-of-the-box compliance reports including PCI Data Security Standards, ISO 17799, ISO 27001, Basel II, SB 1386 and PABP (Payment Application Best Practices)
Support for next generation Web applications, including the ability to scan complex Java and Adobe Flash-based sites for both traditional Web vulnerabilities as well as technology-specific threats such as Cross-site Flashing
Enhanced support for Web Services with the ability to interact with Mega Script, Encoded URLs, and Web Portals utilizing widget-based pages
Simplified scan results through the new Results Expert wizard, further simplifying the process of interpreting scan results through scan-specific descriptions and straightforward explanations of each issue
Other Enhancements including IPv6 support, expanded language support, new scan templates, and performance improvements

Download a trial here (requires a site account) | Website here

Well, this is my top 11 list of web application penetration testing tools. It has 11 items, but the last one is a bit expensive.

If I forgot one, please do comment.

This article first appeared on http://www.lo0.ro.

Ways to Identify a PPC Network that will Protect You from Click Fraud


Sometimes even the most reputable PPC search engines face the problem of fraudulent clicks, and they put quite an effort into protecting their clients from click fraud. Take Google, for example: about a month ago it acquired spider.io with the aim of improving the efficiency of Google advertising. Spider.io is quite a successful fraud-fighting company: last year it discovered Chameleon, a botnet that caused 6.2 million advertising dollars to be wasted on bot clicks. Monthly.

Advertisers are the ones who suffer the most in this case. As networks get their profit even from malicious publisher websites, some of them don't want to invest their time in fighting click fraud. So how can you recognize a network that actually cares?

There are a few features that distinguish a quality network. A reliable network applies third-party fraud protection solutions (or has its own technology to fight invalid clicks), and it implements techniques like IP filtering and URL blocking to minimize the risk of fraudulent clicks slipping through. More on these in the next paragraphs.


Overview of major click fraud protection systems

Here is a list of the most popular fraud prevention systems. If a CPC network features some of these, that's a good reason to start advertising with it. By the way, if you are a big publisher or an advertising agency running numerous campaigns, you may apply a fraud reporting solution yourself. This way you'll have a chance to apply for a refund with your network if you have proof that their clicks are fraudulent.


Some networks have the experience and the expertise to develop their own solutions to fight click fraud. One example is AdOn Safeguard, developed by AdOn Network. In addition to Fraudlogix and Adometry, that network applies a 17-point inspection to each advertising campaign it runs.

Other fraud prevention techniques

In addition to applying third-party controls, networks themselves may implement tools that contribute to traffic quality. Try to find a pay-per-click network that implements blocking and filtering controls like URL filtering, IP blocking and visitor targeting based on traffic parameters.

IP blocking means applying blocklists of IPs for which malicious traffic has been detected. Many of these blocklists are available for free.

URL/source blocking: if you have doubts about clicks coming from particular URLs or sources (a number of URLs provided by a single publisher), some PPC networks offer an option to block them.

Visitor targeting based on traffic parameters lets you pay exclusively for clicks that bring visitors matching preset parameters. For instance, you may only pay for visitors that view at least two pages on your website. Although some robots are able to emulate human behavior, filtering visitors according to traffic parameters still makes it possible to keep away a lot of bots.
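As a rough illustration of how such filters combine, here is a minimal Python sketch that screens a click log against an IP blocklist and a minimum-pageviews threshold. The field names, sample data, and thresholds are invented for illustration, not any network's actual implementation:

# Hypothetical click records: the visitor's IP and how many pages they viewed.
clicks = [
    {"ip": "203.0.113.7",  "pages_viewed": 5},
    {"ip": "198.51.100.2", "pages_viewed": 1},   # bounced after one page
    {"ip": "192.0.2.99",   "pages_viewed": 12},  # IP is on the blocklist
]

# Blocklist of IPs with previously detected malicious traffic;
# freely available blocklists can be loaded the same way.
blocklist = {"192.0.2.99"}

MIN_PAGES = 2  # only pay for visitors who viewed at least two pages

billable = [
    c for c in clicks
    if c["ip"] not in blocklist and c["pages_viewed"] >= MIN_PAGES
]
print(billable)  # only the 203.0.113.7 click survives both filters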

To sum up

Ideally, the network's website should contain all the information about click fraud protection that you require. Do not hesitate to turn to the support team if something remains unclear. Also check what requirements the network sets for its publishers.

It never hurts to spend a few hours looking for a reliable network. In the end you'll be rewarded with a fraud-free advertising solution that is cost-effective and impactful.

About the Author

This article was written by Tory Woods. She works as an account manager for REACH Network and is keen on PPC advertising and online marketing.
8 October 2014

Click Fraud Botnet Defrauds Advertisers Up to $6 Million



An advertising analytics company said it has discovered a botnet that generates upwards of US$6 million per month by generating bogus clicks on display advertisements.

Spider.io, based in the U.K., wrote that the botnet code, called Chameleon, has infected about 120,000 residential computers in the U.S. and perpetrates click fraud on 202 websites that collectively deliver 14 billion ad impressions. Chameleon is responsible for 9 billion of those impressions, Spider.io said.

Click fraud cheats Web advertisers by making them pay for clicks on ads that are not legitimate, depriving them of customers and revenue. Spider.io said advertisers pay an average of $0.69 per thousand impressions; across the 9 billion bogus impressions a month, that works out to roughly $6.2 million.

Spider.io did not identify the publishers of the websites that the botnet targets. But online media buyers have been noticing inconsistencies for some time on websites showing display ads for large companies. Andrew Pancer, chief operating officer of Media6Degrees in New York, said his company stopped buying ad inventory on thousands of sites last year.

The blacklisted sites reported very high traffic numbers even though some would not even turn up in a search, said Pancer, whose agency buys ads for companies including AT&T, HP and CVS Pharmacy.

"You've never heard of these sites," said Pancer, who said many of the sites share the same cookie-cutter templates.

Media6Degrees shared its findings with Spider.io, which then discovered a botnet it calls "Chameleon." The botnet is engineered to visit multiple pages on multiple websites at a time, clicking on ads the way a real person would. But despite at times looking like unique traffic, Spider.io wrote that the botnet traffic as a whole looks homogenous.

"All the bot browsers report themselves as being Internet Explorer 9.0 running on Windows 7," Spider.io wrote on its blog.

Chameleon puts a heavy load on a user's browser and can cause a browser to crash and restart. If it crashes the browser, Chameleon restarts another session.

Media6Degrees stopped buying inventory through companies such as Alphabird due to concerns over the source of their traffic, Pancer said.

Willie Pang, Alphabird's managing director for Asia-Pacific, said the company immediately stopped the practice of "buying" traffic, or sourcing web site visitors from other companies, due to Spider.io's findings.

"It's a pretty serious issue, and it's not a new one for folks in our space," Pang said. "Our view on this is we're as much of a victim and surprised by the kind of data we are getting back."

Most of the websites run by Alphabird have fairly stable traffic, but a spike in traffic is a clue that something may be amiss, Pang said. Alphabird is working with Spider.io and Adometry, another online advertising analytics company in Austin, Texas, to review the concerns, he said. Spider.io CEO Douglas de Jager contested that claim and said Spider.io is not working with Alphabird.

Pancer said some publishers may have inadvertently partnered with questionable agencies to supply poor quality traffic to their sites. He said it is still early days for ad exchanges, which are highly automated and have a "wide margin for gaming the system."

"I'm so happy we are finally able to get in front of this," he said.



Related: Botnet clicking AdSense ads revealed


About the Author

Send news tips and comments to jeremy_kirk@idg.com. Follow author on Twitter: @jeremy_kirk
11 September 2014

Best PPC Networks List


Choosing the right PPC network is a very important task for your business or for your clients. But these days there are so many PPC networks that it is really hard to choose the right one. Read through my biggest PPC networks list and choose the network that will skyrocket your business and your earnings.

In my opinion, choosing the right PPC network will determine whether you will be successful with your campaigns or not. But how can you choose the right one if you only know 1 or 2 PPC networks? Well, I prepared a huge, maybe the biggest, PPC networks list that you will find on the internet.

I will list here all PPC networks for English-speaking countries and the English language. If you know a PPC network that is not listed here, make sure to contact me or add the missing network in the comments area.
PPC networks from the biggest to the smallest…

Google Adwords [link]

Google Adwords is the biggest PPC network available on the internet. It supports almost every language and country and has a huge database of publishers. In other words, it is the PPC network with the best-converting traffic, but it is also very expensive compared to other networks.

MSN Adcenter [link]

Another alternative to Google Adwords. It also has a very powerful database of publishers, but compared to Google AdSense & Google Adwords it's really small. This traffic also converts very well, and it's cheaper than Adwords.

Facebook Advertising [link]

Honestly, Facebook ads are very cheap compared to Google Adwords, and you can easily target a larger audience than with any other PPC network available. It's a little bit harder to actually convert Facebook traffic, but given the average price of each click, it's really worth trying this kind of advertising.

7Search [link]

7Search has better ROI than either Google or Yahoo, and it also has cheaper clicks and no minimum bids. On the other hand, you can only target a smaller audience, and a limited set of countries is available to target.

Clicksor [link]

Clicksor has a long history of changes; anyway, now at the end of 2013, Clicksor belongs among the top PPC networks available. I really recommend you check it out.

AdClickMedia [link]

With the AdClickMedia pay-per-click network, you can advertise on more than 40,000 quality publisher websites within minutes, using photo text ads, banner ads, full-page interstitial ads, and email PPC ads.

BidVertiser [link]

A very cheap and large pay-per-click network. It is really easy to start receiving targeted, easy-to-convert traffic that will help skyrocket your business in no time.

Plentyoffish Advertising [link]

It is really easy to target local consumers based on zip code, age, gender, education, profession, etc., so you will receive very targeted traffic that will convert for sure.

BuySellAds [link]

With BuySellAds you are able to buy advertising space on other websites in exchange for a small fee. It has a really powerful and huge network, so you can target almost any audience you want.

Other PPC Networks

Above you will find the most important PPC networks (in my opinion), but don't be sad: below you will find more PPC networks that will help you run your online advertising. But as I said, my opinion is that the largest and most important PPC networks are listed above.

directCPV [link]
DojoAd [link]
Infolinks [link]
AdSonar [link]
AdSide [link]
Advertise.com [link]
Marchex [link]
AdMarketplace [link]
Valueclick [link]
Google Mobile Advertising [link]
Chitika [link]
Kontera [link]
Inlinks [link]

If you want to list any PPC network that is not already listed here, feel free to contact me anytime and I will add it. Or you can simply leave a comment below. Also, share your tips and thoughts.

Source: http://chymcakmilan.com/
9 September 2014

How to Block Traffic from Any Sites Using .htaccess




This will send all traffic seen as a referral from a given domain to a 403/Forbidden page. If your concern is your own website (most people reading this), you'll want to edit the main .htaccess for your site. This would be in your /home/username/public_html/ directory (some hosts rename public_html to httpdocs, www, web, etc.).

You may want to block traffic from particular sites that link to your site. You can perform HTTP referrer-based blocking using mod_rewrite.

Suppose you would like to block (with a 403 Forbidden code) traffic referred by badguys.com. Add these lines to .htaccess:

RewriteEngine on
RewriteCond %{HTTP_REFERER} ^http://(www\.)?badguys\.com [NC]
RewriteRule (.*) - [F]


You can block multiple domains using multiple RewriteCond lines and the [OR] flag, as follows:

RewriteEngine on
RewriteCond %{HTTP_REFERER} ^http://(www\.)?badguys\.com [NC,OR]
RewriteCond %{HTTP_REFERER} ^http://(www\.)?badreferrers\.com [NC]
RewriteRule (.*) - [F]


NB: Be sure to add [OR], as shown, to each RewriteCond line but the last. The default behavior is a logical AND, i.e. the RewriteRule takes effect only if all of the RewriteCond lines apply, which you do not want here.
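To sanity-check the rule, you can send a request with a forged Referer header and confirm you get a 403 back. A minimal sketch using Python's requests library, with your-site.example standing in for your own domain:

import requests  # pip install requests

site = "http://your-site.example/"  # hypothetical: replace with your site

# A request claiming to come from the blocked referrer should be rejected.
blocked = requests.get(site, headers={"Referer": "http://badguys.com/some-page"})
print("Blocked referrer:", blocked.status_code)  # expect 403

# A request with no Referer header should still go through.
normal = requests.get(site)
print("No referrer:", normal.status_code)  # expect 200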

You can also remove these referrers from Google Analytics:

1. Log into Google Analytics
2. Click ‘Admin’ at the top
3. Pick the property you want to work with
4. In the middle column (Property) click on Tracking Info
5. Click ‘Referral Exclusion List’
6. Add badguys.com, kambasoft.com, semalt.com, etc.
7. Save
7 April 2014

How to Get Your Newly Created Website to Over 100,000 Organic Visitors in a Month



This is a case study on how I built a website that receives over 100,000 visitors per month, in less than 1 year, without spending $1 on advertising.

This was done 100% through SEO and content strategy.

Before we dive in, allow me to clarify a few things:
  1. The website reached over 100,000 visitors in 9 months.
  2. This was a new domain, registered just a couple of months before launch.
  3. This was done in a language I neither read nor speak (Japanese).
  4. Japanese is a non-Roman character language, making it nearly impossible to use most of the popular SEO tools.
The purpose of this post is to walk you through precisely how my team and I reached this milestone, the approach we took, and show how technical SEO combined with content strategy can deliver serious results.

Key Drivers of Traffic Growth


There were a few key elements that led to the widespread and sustained growth of the project. These range from common sense to technical, but they come down to three main focus areas:

Math – We took a mathematical approach to designing an evaluation model that would allow us to gauge opportunities based on their potential returns. Ultimately this led to the creation of what we now call our keyword opportunity evaluation, which is a financial model that measures the approximate output (traffic) based on a finite set of inputs, including elements like average DA, number of links / linking domains, age of site, content footprint, etc. (a toy sketch of such a model follows after this list).

Analysis – Using our newly built algorithm, we got to testing, creating websites to test content patterns and architecture. We were quick to declare defeat within verticals without traction, and paid close attention to where the traffic was growing the most. The algorithm started to take shape, and after roughly 3 months it was able to identify, within an order of magnitude, the amount of traffic we could acquire for a given set of costs.

Pumpkin Hacking – This is a term I came across (thank you, Peter Da Vanzo) that seems to describe exactly what we did to continue to grow our traffic by double and even triple digits, month after month. The core concept is simple: focus resources on building what works. What this meant for us was paying attention to the search verticals and content that received the most traffic, the most comments, and the most social shares, and being quick to cut the cord on traffic that didn't perform.
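The post doesn't publish the evaluation model described under Math above, so purely as an illustration, here is a minimal Python sketch of what a scoring function over those inputs might look like. The weights, field names, and formula are invented assumptions, not the author's actual algorithm:

from dataclasses import dataclass

@dataclass
class Keyword:
    phrase: str
    monthly_searches: int       # search volume for the phrase
    avg_da: float               # average Domain Authority of pages ranking for it
    avg_linking_domains: float  # average linking root domains of those pages
    content_cost: float         # estimated cost to produce competitive content

def opportunity_score(kw: Keyword) -> float:
    # Toy formula: potential traffic discounted by competition, per dollar
    # of content spend. The real model in the post was proprietary.
    competition = 1.0 + kw.avg_da / 100 + kw.avg_linking_domains / 50
    expected_traffic = kw.monthly_searches * 0.3 / competition  # assume ~30% CTR ceiling
    return expected_traffic / max(kw.content_cost, 1.0)

kw = Keyword("tokyo coworking space", 4400, 35.0, 12.0, 250.0)
print(round(opportunity_score(kw), 2))  # expected visits per dollar of content spend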

First Month After Launch



With zero promotion and no advertising, we had a decent first month, bringing in over 2,000 visitors. This was mostly due to our pre-launch strategy – which I’ll explain more later in this post.

Nine Months After Launch

After only 9 months we were 3 months ahead of schedule to pass 100,000 visitors with no signs of slowing down.

Traffic Sources



Organic search drives the most significant portion of our traffic. Referral traffic is almost entirely from blogs and industry publications, and "campaigns" represents the ads that we place, only on our own website, to test different language and calls to action to drive conversions.

Building a Keyword Database

This is an obvious no-brainer for all SEOs; however, unlike most search campaigns, this was a big keyword database, to the tune of 50,000 keywords.

The main idea here was to leave no stone unturned. Since we were of a mind to test everything and let the performance metrics dictate where to allocate resources, we had to get creative with query combinations.

We first went through all of our target search verticals, as dictated by our chosen go-to-market categories, which I think were roughly 19 to start. The next step was to identify the top 100 highest-search-volume terms within those verticals and scrape the top 100 URLs that were currently ranking.

From here we began what started out as an exhaustive process of evaluating the opportunities for each keyword, and then aggregating opportunities to discern which categories we needed to focus on to grow traffic.

Essentially we targeted the low-hanging fruit: keywords identified by our model that could generate a minimum level of traffic in 3 months or less, with a minimum investment in content development.
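In code terms, that low-hanging-fruit pass is just a threshold filter over the scored keyword database. A toy sketch, with invented field names and thresholds:

# Hypothetical rows from the keyword database, scored by the model above.
keywords = [
    {"phrase": "kw-a", "traffic_3mo": 900, "content_cost": 150},
    {"phrase": "kw-b", "traffic_3mo": 120, "content_cost": 600},
    {"phrase": "kw-c", "traffic_3mo": 450, "content_cost": 200},
]

MIN_TRAFFIC_3MO = 300   # minimum traffic expected within 3 months
MAX_CONTENT_COST = 250  # maximum content-development spend per keyword

low_hanging_fruit = [
    kw for kw in keywords
    if kw["traffic_3mo"] >= MIN_TRAFFIC_3MO and kw["content_cost"] <= MAX_CONTENT_COST
]
print([kw["phrase"] for kw in low_hanging_fruit])  # ['kw-a', 'kw-c']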

I watched (obsessively) which phrases and topics generated the most traffic.

As soon as a topic began to grow legs, we would focus additional keyword research on finding concepts and phrases that were both complementary and contextually relevant.

Designing a Content Strategy

This is the single hardest part of any content-focused website or project.

The key to success on this particular project was taking a page out of Jeff Bezos’ book, and becoming obsessed with our customers.

We not only embarked on an aggressive a/b testing schedule, but we constantly reached out to our users for feedback.

We asked tough questions, ranging from what users liked and disliked (colors, fonts, and layouts) to the specific components of the website they found to be less than ideal or even 'sub-par.'

We took the responses seriously, making changes as they came in, trying to take something constructive from every piece of feedback, and pushing as many as 10 deployments a week.

It started to work.

Once we saw the needle begin to move on our user engagement metrics (time on site, pages per visit, and direct or branded traffic), we moved on to the next phase of our strategy: analyzing our audience.

Targeting the right audience is so much harder than it sounds.

I can honestly say, from the experience of working on this project, that it is almost never as it seems. We began by targeting a very large segment of users (remember that keyword database of over 50,000 keywords?), but after a few months it turned out our largest (and most active) users were finding us from only a handful of targeted categories.

Information Architecture with SEO in Mind

Please allow me to preface this by saying that I am biased; in my opinion, the architecture of a website is critical to achieving SEO success.

My largest successful SEO projects have come due to a variety of factors, but tend to come down to 3 core components of architecture:
  • It’s Scalable
  • It’s Crawlable
  • It’s Tiered
Scalable architecture is an obvious one; you need a system that can grow as large as you want/need it to.

Crawlable is nothing new to anyone in SEO; it simply means that the structure of our pages allows all of the most important content to be quickly and easily crawled and indexed by search engine robots. It actually sounds easier than it is… ensuring that the content is rendered (code-wise) in the most ideal format for robots to parse takes more consideration than just laying out your divs to properly render your designs.

To do this properly you need to make sure all of your code is in the right place and, more importantly, check how each crawler sees your page.

Take every opportunity to DRY out your code as much as possible, and remember that modern stylesheets are designed to cascade for a reason.
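As a rough illustration (not the project's actual templates), crawl-friendly markup keeps the page semantic and lean, with presentation pushed out to a cascading stylesheet so the important content sits early and cleanly in the source:

<!-- Hypothetical crawl-friendly page: content first, styling in an external stylesheet -->
<html>
  <head>
    <title>Primary Keyword - Page Title</title>
    <link rel="stylesheet" href="styles.css">
  </head>
  <body>
    <h1>The page's core topic</h1>
    <article>
      <p>The most important content appears early in the source order,
         in plain semantic markup that robots can parse without effort.</p>
    </article>
    <footer>Navigation and boilerplate come last.</footer>
  </body>
</html>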

Information tiering is a concept I have long preached to anyone who has ever talked with me at length about SEO. It means that your URL architecture should be built so that authority flows upward through your directories.

For example, if I wanted to build authority around a core concept, I would focus my domain on that concept. If I then wanted to build relevance around specific locations for that concept, I would structure my URLs so that all relevant content for a location fed upward to a location-specific directory.

So let's say I had an SEO consulting firm with locations in several cities across the U.S. I would design an architecture that allowed location-specific information to feed upward through my directories.

So something like NicksSEOFirm.com/Philadelphia/Specific-Location-Content. The specific location content could be the team, any value-add competencies, anything geo-specific that was relevant to operations at that location, flowing relational authority upwards to the parent directory of /Philadelphia/.

Links in sub-directories can feed authority to parent directories.

A perfect example of this is local sitelinks for popular categories: tertiary directories with the most links and content pass authority to their upstream sub-directories, which translates into higher rankings and local sitelinks.
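As a minimal sketch of what this looks like in markup, using the hypothetical NicksSEOFirm.com example above, every deep page links back up through its parent directory:

<!-- On NicksSEOFirm.com/Philadelphia/Specific-Location-Content (hypothetical site) -->
<nav>
  <a href="/">Nick's SEO Firm</a> &gt;
  <a href="/Philadelphia/">Philadelphia</a> &gt;
  Specific Location Content
</nav>
<!-- Breadcrumb-style links pass internal link equity upward, so /Philadelphia/
     accumulates authority from every page beneath it. -->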


Launch Before The Launch

The easiest way to ensure a successful product or website launch is to launch before you actually launch.

What I mean is: build your prospect list well in advance of pulling the trigger to go live.

John Doherty wrote a great post on ProBlogger about the power of list-building pre-launch pages. By building a list of users before publishing your full website, you essentially guarantee traffic immediately upon launch.

Our pre-launch is how we were able to generate over 2,000 visitors within the first 30 days of taking the website live.

Since our platform is not built on WordPress we didn't get to use any of the fancy plugins available; instead we created a basic one-page site that allowed visitors to convert the same way the full website would, just on a much smaller scale.

The most important part of our pre-launch page was that it not only supported social sharing but also tracked and aggregated shares to give active users more points; gamification is cool.
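Here is a rough sketch of that kind of pre-launch page. The /api/subscribe and /api/share endpoints and the points logic are hypothetical stand-ins; the post does not describe the actual implementation:

<!-- Minimal pre-launch page: one conversion action plus tracked sharing -->
<form action="/api/subscribe" method="post">  <!-- hypothetical endpoint -->
  <input type="email" name="email" placeholder="Get early access">
  <button type="submit">Notify me</button>
</form>
<button id="share">Share for points</button>
<script>
  document.getElementById("share").addEventListener("click", function () {
    // Record the share so active users can be credited with points (gamification).
    // "/api/share" is a hypothetical endpoint, not the project's real one.
    fetch("/api/share", { method: "POST" });
    window.open("https://twitter.com/intent/tweet?text=Check%20this%20out", "_blank");
  });
</script>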

Some of the major benefits of a well-planned pre-launch are:
  • Your website is already being crawled and indexed by major search engines.
  • You begin building your user base and audience.
  • You can gain invaluable feedback while it’s still easy to make changes.
Choosing a Platform

Let me start by saying not all platforms are created equal.

It's also worth sharing that it is not always better to build rather than buy, as there are a lot of smart people building a lot of slick content platforms.

However, we chose to build.

Once we had laid out all of the project requirements, including URL architecture, conversion funnels, user permissioning, design templating, and localization, it became clear that to get exactly what we needed we were going to have to build it ourselves.

One major benefit of building was that we could design a system to support both our internal and external processes right out of the gate. It also meant it was going to take a lot more time and a shitload more money to bring our website to market.

Hosting & Evolution

This is a known but rarely discussed factor: hosting infrastructure is critical.

Once we were ready for public launch, we chose a reasonably affordable VPS provider with what seemed like more than enough memory, and it was at first.

By month 4 it was clear we were going to have to make some changes; load times began to bloat and large content pages were timing out. We beefed up the disk space and quadrupled the memory, which solved the problem temporarily, until…

We got some press.
On June 5th we were featured by one of the largest news publications in the world. We were able to handle almost 40,000 visits before our VPS crashed, hard.

It was that week we made the move to localized cloud hosting from Amazon Web Services.

We haven’t crashed since.

The End Result

It's not really the end result, since this project is still enjoying a healthy and fruitful life, but after 9 months of careful planning, remaining flexible to the marketplace, and nurturing our most valued asset (our users), we surpassed our milestone of 100,000 visitors.
Great, But Is It Repeatable?

In case you weren’t already thinking it, you are now.

The answer is Yes.

Taking what we learned and applying the concept of pumpkin hacking, we started a new blog at the end of July 2012 to test the transferability of our strategy, and here were the results:

In the first 12 days we had over 17,000 visitors. In the first full month, we had over 50,000 unique visitors coming to the website over 100,000 times (see below).

And it didn’t slow down…


By the end of the 3rd month we were receiving over 100,000 unique visitors, and over 200,000 visits.

Conclusion

This is very possible.

With careful planning, an SEO-focused content strategy, and an understanding of the power of information architecture, you can grow a new website to over 100,000 organic visitors per month in less than 1 year.

Please share your thoughts, feelings, and questions in the comments below.

Thanks for reading.

About the Author

This article is written by Nick - Nick is the VP of Digital Strategy at W.L. Snook & Associates, Co-Founder of I'm From The Future, an eCommerce consultancy, and the author of the Seonick blog. Follow Nick on Google+.
9 March 2014

What's the Difference Between Iframe ad-tag and Script ad-tag in Online Advertising



This is a list that I have discussed many times with friends; however, I never found it all in a single place, so here you go…

Differences between iframe tags and script tags:
  • An iframe tag does not delay the loading of other page elements: iframes usually load in parallel, so if your page has several elements (images, CSS, JavaScript, and HTML tags) and the ad-tag is embedded as an iframe, the iframe loads in parallel and does not slow your page down. So, if you want the page to load faster, use iframe tags.
  • A script tag does not change the "referrer" property of your ad-tag: if your ad-tag is served from inside an iframe, the ad network that serves the ad will see a referrer different from your page URL/domain. If you use a script tag instead, the referrer stays the same as your page URL, and therefore your domain name. Some ad networks require that the ad be served from the same domain it was created for, and will therefore not work with iframe tags (they will not serve ads). Most ad networks, however, allow setting a "site alias" so the ad may be served from a different domain. Read more about the referrer property here. (A script-tag counterpart to the iframe example below is sketched after this list.)
  • A script tag works better for ad networks that do contextual analysis of the page content: with iframe tags, ad networks cannot look outside of the iframe, so they cannot do on-the-fly contextual analysis of the page and may serve irrelevant ads. Read more about contextual analysis here.
  • If there is more than one ad from the same ad network and you are using iframe tags, the ads may not be able to communicate with each other, since the scope of JavaScript variables is confined to each iframe. So if one ad-tag sets a JavaScript variable that another ad-tag on the same page is expected to read, this will break with iframe tags.
  • Since JavaScript variables are scoped to their iframe, they do not contaminate your web page's JavaScript namespace, nor are they affected by your web page's JavaScript variables.
  • Iframe tags are easier to include in a web page, since you can save an ad-tag in a file and load it as an iframe. This also allows parallel loading of the ad-tag iframe. For example, if your web page is:
<html>
  <head>
    <script type="text/javascript">
      /* your page's own JavaScript */
    </script>
  </head>
  <body>
    <!-- The ad-tag lives in its own file (ad-tag.html) and loads in parallel
         with the rest of the page. -->
    <iframe src="ad-tag.html"></iframe>
  </body>
</html>
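For comparison, here is a minimal sketch of the script-tag version mentioned above; the ad-server URL is a hypothetical placeholder, not any particular ad network's tag. Because the script runs inline as part of the page, the referrer the ad server sees matches your page's own domain:

<html>
  <body>
    <!-- Inline script ad-tag: evaluated as part of the page itself, so the
         referrer seen by the ad server is this page's URL. Note that it can
         block rendering while it loads. -->
    <script type="text/javascript"
            src="http://ads.example.com/ad-tag.js"></script>
  </body>
</html>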


Related:

  1. Iframe Vs Jscript Tags
  2. Ad tags and tag types: Iframe/JavaScript tags

About the Author

This article is written by Mukul Kumar - He is Co-Founder & Senior Vice President of Engineering at PubMatic. For more articles, please visit his Blog.
5 March 2014

How to Fix Unresponsive Facebook Script Error: https://fbstatic-a.akamaihd.net/rsrc.php/v2/yk/r/


Browsers hang when uploading photos to Facebook: script error, unresponsive script error

Today I am sharing the solution to a problem I had been facing for many days; after searching around, I came to know that I am not the only one!! (it gives pleasure when you are not the only sufferer :D :D)

Problem:

When using Facebook with some tabs open for more than 15-20 minutes, a script error appears with the heading 'Warning: unresponsive script':
(the latter part of the script URL differs each time it is generated, and from system to system)
https://fbstatic-a.akamaihd.net/rsrc.php/v2/yw/r/80Iw8OZOF5Q.js:57



and the browser hangs.

Browsers affected: all (I am using Mozilla Firefox, but googling the problem suggests it affects all browsers)

Cause:

It might be an issue with image fetching from "akamaihd.net".

In simple words, the images uploaded to Facebook are not hosted on Facebook's servers; they are actually hosted by akamaihd.net, a CDN (content delivery network) run by a third-party provider.

How can I say that images are hosted on akamaihd.net and not on Facebook?
- Well, go to any image you have uploaded on your timeline -> right-click on the image -> View Image (Firefox) or Open Image in New Tab (Chrome).
Now check the address bar; it will look like:

 https://fbcdn-sphotos-e-a.akamaihd.net/hphotos-ak-ash4/q71/1044574_651080944920330_1908721470_n.jpg

Solution 1:

Restoring the browser to factory settings or creating a new browser profile may solve this, but only for some time.
So far, I have found only one complete solution, and it is for Mozilla Firefox only:
the Firefox add-on 'FB Phishing Protector' (from version 4.3 onwards) solves this problem.
Check it out here.

Just add it... Done!!

Solution 2:

YesScript lets you make a blacklist of sites that aren't allowed to run JavaScript. Install the YesScript add-on here.

Once installed, go to Tools -> Add-ons -> YesScript -> Options.
Type https://fbstatic-a.akamaihd.net/ and add it to the YesScript blacklist.

No more problems.


Happy Facebooking!!
:) :)


Sources:

http://www.codefap.com/2013/01/what-is-akamaihd-net-in-a-facebook-link/
http://forums.mozillazine.org/viewtopic.php?f=38&t=2624819

About the Author

This article is written by Sourabh Singh - For more articles, click here. You can follow him on Google+
17 February 2014

Best Places To Buy Web Traffic



Today's Internet is NOT a "build it and they will come" place for businesses, the way brick-and-mortar businesses can sometimes be. You have to proactively go out and get people to come to your website.