Washington Web Scraping

Monday 25 May 2015

What you need to know about web scraping: How to understand, identify, and sometimes stop

NB: This is a guest article by Rami Essaid, co-founder and CEO of Distil Networks.

Here’s the thing about web scraping in the travel industry: everyone knows it exists but few know the details.

Details like: How does web scraping happen, and how would I know it is happening to me? Is web scraping just part of doing business online, or can it be stopped? And if it can be stopped, should it always be?

These questions and the challenge of web scraping are relevant to every player in the travel industry. Travel suppliers, OTAs and metasearch sites are all being scraped, and we have the data to prove it: over 30% of travel industry website visitors are web scrapers.

Google Analytics and most other analytics tools do not automatically remove web scraper traffic, also called “bot” traffic, from your reports. So how would you know this non-human and potentially harmful traffic exists? You have to look for it.
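If you want to look for it, your raw server logs are the place to start. Here is a minimal sketch in Python that tallies requests per client IP from a standard Nginx/Apache “combined” access log; the log path and the top-20 report are assumptions on my part, and a real investigation would also segment by user agent and URL pattern.

    import re
    from collections import Counter

    # Minimal sketch: count requests per client IP in a "combined"
    # format access log. The path below is an assumption; point it
    # at your own log file.
    LOG_PATH = "/var/log/nginx/access.log"
    LINE_RE = re.compile(
        r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d+ \S+ "[^"]*" "([^"]*)"'
    )

    requests_per_ip = Counter()
    agent_for_ip = {}

    with open(LOG_PATH) as log:
        for line in log:
            match = LINE_RE.match(line)
            if not match:
                continue
            ip, user_agent = match.groups()
            requests_per_ip[ip] += 1
            agent_for_ip[ip] = user_agent

    # IPs with unusually high request counts deserve a closer look,
    # especially when paired with scripted-looking user agents.
    for ip, count in requests_per_ip.most_common(20):
        print(f"{count:8d}  {ip:15s}  {agent_for_ip[ip]}")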

This is a good time to note that I am CEO of a bot-blocking company called Distil Networks, and we serve the travel industry as well as digital publishers and eCommerce sites to protect against web scraping and data theft – we’re on a mission to make the web more secure.

So I am admittedly biased, but will do my best to provide an educational account of what we’ve learned to be true about web scraping in travel – and why this is an issue every travel company should at the very least be knowledgeable about.

Overall, I see an alarming lack of awareness around the prevalence of web scraping and bots in travel, and I see confusion around what to do about it. As we talk this through I’ll explain what these “bots” are, how to find them and how to manage them to better protect and leverage your travel business.

What are bots, web scrapers and site indexers? Which are good and which are bad?

The jargon around web scraping is confusing – bots, web scrapers, data extractors, price scrapers, site indexers and more – what’s the difference? Allow me to quickly clarify.

–> Bots: This is a general term for non-human, computer-generated traffic. Bots are essentially lines of code or programs created to perform specific tasks on a large scale. Bots include web scrapers, site indexers and fraud bots, and they can be good or bad.

–> Web Scraper: Web scraping (also called web harvesting or web data extraction) is a software technique for extracting information from websites (source: Wikipedia). Web scrapers are usually bad.

If your travel website is being scraped, it is most likely that your competitors are collecting competitive intelligence on your prices. Some companies are even built to scrape and report on competitive prices as a service. This is difficult to prove, but based on a recent Distil Networks study, prices seem to be the main target. You can see more details of the study and infographic here.
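To make that concrete, here is a rough sketch of the kind of script a price scraper runs. The URL, the browser-like user agent and the "span.fare-price" selector are all hypothetical; the point is how little code it takes to harvest a fare page.

    import requests
    from bs4 import BeautifulSoup

    # Hypothetical target: a fare results page on a travel site.
    url = "https://www.example-airline.com/fares?from=SEA&to=BNA"
    headers = {"User-Agent": "Mozilla/5.0"}  # masquerade as a browser

    html = requests.get(url, headers=headers, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    # Pull every displayed price; the selector is made up and would
    # be tuned to the target site's actual markup.
    for fare in soup.select("span.fare-price"):
        print(fare.get_text(strip=True))

Run on a schedule across thousands of routes, a script this small is how scrapers come to account for the traffic share mentioned above.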

One case study is Ryanair. The airline has been particularly unhappy about web scraping: it won a lawsuit against a German company in 2008, incorporated CAPTCHA in 2011 to stop new scrapers, and, when CAPTCHA wasn’t totally effective and Cheaptickets was still scraping, took to the courts once again.

So Ryanair is doing what seems to be a consistent job of fending off web scrapers, at least after the scraping is performed. Unfortunately, the amount of time and energy that goes into identifying and stopping web scraping after the fact is very high, and by then the damage has usually been done.

This type of web scraping is bad because:

    Your competition is likely collecting your price data for competitive intelligence.

    Other travel companies are collecting your flight inventory for resale without your consent.

    Identifying this type of web scraping requires a lot of time and energy, and stopping them generally requires a lot more.

Web scrapers are sometimes good

Sometimes a web scraper is a potential partner in disguise.

Meta search sites like Hipmunk sometimes get their start by scraping travel site data. Once they have enough data and enough traffic to be valuable, they go to suppliers and OTAs with a partnership agreement. I’m naming Hipmunk because the company is one of the few to fess up to site scraping, and one of the few who claim to have quickly stopped scraping when asked.

I’d wager that Hipmunk and others use(d) web scraping because it’s easy, and getting a decision maker at a major travel supplier on the phone is not easy, and finding legitimate channels to acquire supplier data is most definitely not easy.

I’m not saying you should allow this type of site scraping – you shouldn’t. But you should acknowledge the opportunity and create a proper channel for data sharing. And when you send your cease and desist notices to tell scrapers to stop their dirty work, also consider including a note for potential partners and indicate proper channels to request data access.

–> Site Indexer: Good.

Google, Bing and other search sites send site indexer bots all over the web to scour and prioritize content. You want to ensure your strategy includes site indexer access. Bing has long indexed travel suppliers and provided inventory links directly in search results, and recently Google has followed suit.
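Because bad bots routinely impersonate Googlebot to slip past filters, it is worth verifying a crawler before giving it special treatment. Google and Bing both document a reverse-then-forward DNS check for this purpose; the sketch below is a minimal Python version of that check, and the example IP is illustrative only.

    import socket

    def is_genuine_crawler(ip, allowed_suffixes=(".googlebot.com",
                                                 ".google.com",
                                                 ".search.msn.com")):
        """Reverse-then-forward DNS check for search engine crawlers."""
        try:
            hostname, _, _ = socket.gethostbyaddr(ip)  # reverse lookup
            if not hostname.endswith(allowed_suffixes):
                return False
            # Forward-confirm: the name must resolve back to the same IP.
            return ip in socket.gethostbyname_ex(hostname)[2]
        except (socket.herror, socket.gaierror):
            return False

    print(is_genuine_crawler("66.249.66.1"))  # address in a published Googlebot range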

–> Fraud Bot: Always bad.

Fraud bots look for vulnerabilities and take advantage of your systems; these are the pesky and expensive hackers that game websites by falsely filling in forms, clicking ads, and looking for other vulnerabilities on your site. Reviews sections are a common attack vector for these types of bots.
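A simple, widely used first line of defense against form-filling bots is a honeypot: a form field hidden from humans with CSS that bots tend to fill in anyway, often paired with a minimum time-to-submit check. The field name, threshold and function below are hypothetical, a sketch rather than a complete anti-fraud system.

    import time

    MIN_SECONDS_TO_SUBMIT = 3  # humans rarely complete a form faster

    def looks_like_form_bot(form_data, rendered_at):
        """form_data: submitted fields; rendered_at: epoch time the form was served."""
        if form_data.get("website"):  # hidden honeypot field was filled in
            return True
        if time.time() - rendered_at < MIN_SECONDS_TO_SUBMIT:
            return True  # submitted faster than a human plausibly types
        return False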

How to identify and block bad bots and web scrapers

Now that you know the difference between good and bad web scrapers and bots, how do you identify them and how do you stop the bad ones? The first thing to do is incorporate bot-identification into your website security program. There are a number of ways to do this.

In-house

When building an in-house solution, it is important to understand that fighting off bots is an arms race: web scraping technology evolves every day and new bots are constantly being written. To have an effective solution, you need a dynamic strategy that is always adapting.

When considering in-house solutions, here are a few common tactics:

    CAPTCHAs – Completely Automated Public Turing tests to tell Computers and Humans Apart (CAPTCHAs) exist to ensure that user input has not been generated by a computer. This has been the most common method deployed because it is simple to integrate and can be effective, at least at first. The problem is that CAPTCHAs can be beaten with a little work and, more importantly, they are a nuisance to end users that can lead to a loss of business.

    Rate Limiting – Advanced scraping utilities are very adept at mimicking normal browsing behavior, but most hastily written scripts are not. Bots follow links and make web requests at a much more frequent, and consistent, rate than normal human users. Limiting IPs that make several requests per second will catch basic bot behavior; see the sketch after this list.

    IP Blacklists – Subscribing to lists of known botnets and anonymous proxies and uploading them to your firewall access control list will give you a baseline of protection. A good number of scrapers employ botnets and Tor nodes to hide their true location and identity, so always maintain an active blacklist that contains the IP addresses of known scrapers and botnets as well as Tor exit nodes (the sketch after this list includes a simple blacklist check).

    Add-on Modules – Many companies already own hardware that offers some layer of security, and many of those hardware providers now offer additional modules to try to combat bot attacks. As companies move more of their services off premises, leveraging cloud hosting and CDN providers, the market share for this type of solution is shrinking.

    It is also important to note that these types of solutions are a good baseline but should not be expected to stop all bots. After all, this is not the core competency of the hardware you are buying, but a mere plugin.
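To illustrate the rate limiting and IP blacklist tactics above, here is a minimal in-memory request filter in Python. The window size, request cap and blacklist source are assumptions; a production deployment would typically keep this state in a shared store such as Redis rather than in process memory.

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 10           # sliding window length (assumption)
    MAX_REQUESTS_PER_WINDOW = 20  # per-IP cap within the window (assumption)

    blacklist = set()             # load from botnet/Tor feeds at startup
    recent_hits = defaultdict(deque)

    def should_block(ip):
        """Return True if this request should be refused."""
        if ip in blacklist:
            return True
        now = time.monotonic()
        hits = recent_hits[ip]
        hits.append(now)
        # Drop timestamps that have aged out of the window.
        while hits and now - hits[0] > WINDOW_SECONDS:
            hits.popleft()
        return len(hits) > MAX_REQUESTS_PER_WINDOW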

Some example providers are:

    Imperva SecureSphere – Imperva offers Web Application Firewalls, or WAFs: appliances that apply a set of rules to an HTTP connection. Generally, these rules cover common attacks such as cross-site scripting (XSS) and SQL injection. By customizing the rules to your application, many attacks can be identified and blocked. The effort to perform this customization can be significant, and it needs to be maintained as the application is modified.

    F5 ASM – F5 offers many modules on their BIG-IP load balancers, one of which is the Application Security Manager (ASM). This module adds WAF functionality directly into the load balancer. Additionally, F5 has added policy-based web application security protection.

Software-as-a-service

There are website security software options that include, and sometimes specialize in, web scraping protection. This type of solution, from my perspective, is the most effective path.

The SaaS model allows someone else to manage the problem for you and respond with more efficiency even as new threats evolve. Again, I’m admittedly biased as I co-founded Distil Networks.

When shopping for a SaaS solution to protect against web scraping, you should consider some of the following factors:

•    Does the provider update new threats and rules in real time?

•    How does the solution block suspected non-human visitors?

•    Which types of proactive blocking techniques, such as code injections, does the provider deploy?

•    Which of the reactive techniques, such as rate limiting, are used?

•    Does the solution look at all of your traffic or a snapshot?

•    Can the solution block bots before they reach your infrastructure – and your data?

•    What kind of latency does this solution introduce?

I hope you now have a clearer understanding of web scraping and why it has become so prevalent in travel, and, even more important, what you can do to protect your business and even leverage these occurrences.

Source: http://www.tnooz.com/article/what-you-need-to-know-about-web-scraping-how-to-understand-identify-and-sometimes-stop/
