Washington Web Scraping

Washington Data Scraping, Web Scraping Tennessee, Data Extraction Tennessee, Scraping Web Data, Website Data Scraping, Email Scraping Tennessee, Email Database, Data Scraping Services, Scraping Contact Information, Data Scrubbing

Sunday 31 May 2015

Data Scraping Services - Login to Website Programmatically using C# for Web Scraping

In many scenarios, the data you want to scrape is only available after login. To reach the page where the data is located, you need to implement code in your web scraper that automatically submits a username/email and password to log in to the website; once the login is done, you can crawl and parse as required.

Many third-party web scraping applications provide functionality that lets you specify a login URL and set the login parameters; that login task is then performed when the scraper starts, before it begins scraping.

Below is a C# example of programmatically logging in to the demo login page:

http://demo.webdata-scraping.com/login.php

Below is the HTML code of the login form:

<form class="form-signin" id="login" method="post" role="form"> <h3 class="form-signin-heading">Please sign in</h3> <a href="#" id="flipToRecover" class="flipLink"> <div id="triangle-topright"></div> </a> <input type="email" class="form-control" name="loginEmail" id="loginEmail" placeholder="Email address" required autofocus> <input type="password" class="form-control" name="loginPass" id="loginPass" placeholder="Password" required> <button class="btn btn-lg btn-primary btn-block" name="login_submit" id="login_submit" type="submit">Sign in</button> </form>

<form class="form-signin" id="login" method="post" role="form">

            <h3 class="form-signin-heading">Please sign in</h3>

            <a href="#" id="flipToRecover" class="flipLink">

              <div id="triangle-topright"></div>

            </a>

            <input type="email" class="form-control" name="loginEmail" id="loginEmail" placeholder="Email address" required autofocus>

            <input type="password" class="form-control" name="loginPass" id="loginPass" placeholder="Password" required>

            <button class="btn btn-lg btn-primary btn-block" name="login_submit" id="login_submit" type="submit">Sign in</button>

</form>

In this code you can notice that the email input box has the ID "loginEmail" and the password input box has the ID "loginPass".

Using these IDs, we can call the WebBrowser control's GetElementById method and fill in the value of each input box with the following code:

webBrowser1.Document.GetElementById("loginEmail").InnerText =textBox1.Text.ToString(); webBrowser1.Document.GetElementById("loginPass").InnerText = textBox2.Text.ToString();

webBrowser1.Document.GetElementById("loginEmail").InnerText =textBox1.Text.ToString();

webBrowser1.Document.GetElementById("loginPass").InnerText = textBox2.Text.ToString();

After the email and password input boxes are filled in, we simply invoke the click event of the submit button, which is labeled "Sign in":

webBrowser1.Document.GetElementById("login_submit").InvokeMember("click");

webBrowser1.Document.GetElementById("login_submit").InvokeMember("click");

This is a very basic example of how you can log in to a website programmatically when you need to access data that is only available after login. It is a simple way of working with the WebBrowser control, but there are other approaches that accomplish the same thing.
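
Putting the pieces together, here is a minimal sketch of how the whole flow could be wired up inside a Windows Forms Form class. The control and handler names (webBrowser1, textBox1, textBox2, btnLogin_Click, LoginPageLoaded) are assumptions used only for illustration; the sketch relies on the WebBrowser control's DocumentCompleted event to wait until the login page has finished loading before filling the fields and submitting.

private void btnLogin_Click(object sender, EventArgs e)
{
    // Navigate to the login page; fill and submit once it has finished loading.
    webBrowser1.DocumentCompleted += LoginPageLoaded;
    webBrowser1.Navigate("http://demo.webdata-scraping.com/login.php");
}

private void LoginPageLoaded(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    // Detach the handler so it runs only for the login page, not for later navigations.
    webBrowser1.DocumentCompleted -= LoginPageLoaded;

    webBrowser1.Document.GetElementById("loginEmail").InnerText = textBox1.Text;
    webBrowser1.Document.GetElementById("loginPass").InnerText = textBox2.Text;
    webBrowser1.Document.GetElementById("login_submit").InvokeMember("click");
}

After the simulated click, another DocumentCompleted event will fire for the post-login page; that is the point at which you would start your crawling and parsing.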

Source: http://webdata-scraping.com/login-website-programmatically-using-c-web-scraping/

Thursday 28 May 2015

Data Extraction Services

Are you finding it tedious to perform your routine tasks while also finding time to research the information you need? Don't worry; all you have to do is outsource your data extraction requirements to reliable service providers such as Hi-Tech BPO Services.

We can assist you in finding, extracting, gathering, processing and validating all the required data through our effective data extraction services. We can extract data from any given source such as websites, databases, printed documents, directories, etc.

With a whole plethora of data extraction solutions, we are definitely a one-stop shop for all your data extraction requirements.

For utilizing our data extraction services, all you have to do is outsource data extraction requirements to us, and we will create effective strategies and extract the required data from all preferred sources. Then we will arrange all the extracted data in a systematic order.

Types of data extraction services provided by our data extraction India unit:

The data extraction India unit of Hi-Tech BPO Services can attend to all types of outsource data extraction requirements. Following are just some of the data extraction services we have delivered:

•    Data extraction from websites
•    Data extraction from databases
•    Extraction of data from directories
•    Extracting data from books
•    Data extraction from forms
•    Extracting data from printed materials

Features of Our Data Extraction Services:

•    Reliable collection of resources for data extraction
•    Extensive range of data extraction services
•    Data can be extracted from any available source be it a digital source or a hard copy source
•    Proper researching, extraction, gathering, processing and validation of data
•    Reasonably priced data extraction services
•    Quality and confidentiality ensured through various strict measures

Our data extraction India unit has the competency to handle any of your data extraction services requirements. Just provide us with your specific requirements and we will extract data accordingly from your preferred resources, if these are specified; otherwise, we will rely on our own collection of resources to extract the data for you.

Source: http://www.hitechbposervices.com/data-extraction.php

Monday 25 May 2015

What you need to know about web scraping: How to understand, identify, and sometimes stop

NB: This is a guest article by Rami Essaid, co-founder and CEO of Distil Networks.

Here’s the thing about web scraping in the travel industry: everyone knows it exists but few know the details.

Details like how does web scraping happen and how will I know? Is web scraping just part of doing business online, or can it be stopped? And lastly, if web scraping can be stopped, should it always be stopped?

These questions and the challenge of web scraping are relevant to every player in the travel industry. Travel suppliers, OTAs and meta search sites are all being scraped. We have the data to prove it; over 30% of travel industry website visitors are web scrapers.

Google Analytics and most other analytics tools do not automatically remove web scraper traffic, also called “bot” traffic, from your reports – so how would you know this non-human and potentially harmful traffic exists? You have to look for it.

This is a good time to note that I am CEO of a bot-blocking company called Distil Networks, and we serve the travel industry as well as digital publishers and eCommerce sites to protect against web scraping and data theft – we’re on a mission to make the web more secure.

So I am admittedly biased, but will do my best to provide an educational account of what we’ve learned to be true about web scraping in travel – and why this is an issue every travel company should at the very least be knowledgeable about.

Overall, I see an alarming lack of awareness around the prevalence of web scraping and bots in travel, and I see confusion around what to do about it. As we talk this through I’ll explain what these “bots” are, how to find them and how to manage them to better protect and leverage your travel business.

What are bots, web scrapers and site indexers? Which are good and which are bad?

The jargon around web scraping is confusing – bots, web scrapers, data extractors, price scrapers, site indexers and more – what’s the difference? Allow me to quickly clarify.

–> Bots: This is a general term that refers to non-human traffic, or robot traffic that is computer generated. Bots are essentially a line of code or a program that is created to perform specific tasks on a large scale.  Bots can include web scrapers, site indexers and fraud bots. Bots can be good or bad.

–> Web Scraper: (web harvesting or web data extraction) is a computer software technique of extracting information from websites (source, Wikipedia). Web scrapers are usually bad.

If your travel website is being scraped, it is most likely that your competitors are collecting competitive intelligence on your prices. Some companies are even built to scrape and report on competitive prices as a service. This is difficult to prove, but based on a recent Distil Networks study, prices seem to be the main target. You can see more details of the study and infographic here.

One case study is Ryanair. They have been particularly unhappy about web scraping and won a lawsuit against a German company in 2008, incorporated Captcha in 2011 to stop new scrapers, and when Captcha wasn’t totally effective and Cheaptickets was still scraping, they took to the courts once again.

So Ryanair is doing what seems to be a consistent job of fending off web scrapers – at least after the scraping is performed. Unfortunately, the amount of time and energy that goes into identifying and stopping web scraping after the fact is very high, and usually this means the damage has been done.

This type of web scraping is bad because:

    Your competition is likely collecting your price data for competitive intelligence.

    Other travel companies are collecting your flights for resale without your consent.

    Identifying this type of web scraping requires a lot of time and energy, and stopping them generally requires a lot more.

Web scrapers are sometimes good

Sometimes a web scraper is a potential partner in disguise.

Meta search sites like Hipmunk sometimes get their start by scraping travel site data. Once they have enough data and enough traffic to be valuable they go to suppliers and OTAs with a partnership agreement. I’m naming Hipmunk because the company is one of the few to fess up to site scraping, and one of the few who claim to have quickly stopped scraping when asked.

I’d wager that Hipmunk and others use(d) web scraping because it’s easy, and getting a decision maker at a major travel supplier on the phone is not easy, and finding legitimate channels to acquire supplier data is most definitely not easy.

I’m not saying you should allow this type of site scraping – you shouldn’t. But you should acknowledge the opportunity and create a proper channel for data sharing. And when you send your cease and desist notices to tell scrapers to stop their dirty work, also consider including a note for potential partners and indicate proper channels to request data access.

–> Site Indexer: Good.

Google, Bing and other search sites send site indexer bots all over the web to scour and prioritize content. You want to ensure your strategy includes site indexer access. Bing has long indexed travel suppliers and provided inventory links directly in search results, and recently Google has followed suit.

–> Fraud Bot: Always bad.

Fraud bots look for vulnerabilities and take advantage of your systems; these are the pesky and expensive hackers that game websites by falsely filling in forms, clicking ads, and looking for other vulnerabilities on your site. Reviews sections are a common attack vector for these types of bots.

How to identify and block bad bots and web scrapers

Now that you know the difference between good and bad web scrapers and bots, how do you identify them and how do you stop the bad ones? The first thing to do is incorporate bot-identification into your website security program. There are a number of ways to do this.

In-house

When building an in-house solution, it is important to understand that fighting off bots is an arms race. Every day web scraping technology evolves and new bots are written. To have an effective solution, you need a dynamic strategy that is always adapting.

When considering in-house solutions, here are a few common tactics:

    CAPTCHAs – Completely Automated Public Turing tests to tell Computers and Humans Apart (CAPTCHAs) exist to ensure that user input has not been generated by a computer. This has been the most common method deployed because it is simple to integrate and can be effective, at least at first. The problem is that CAPTCHAs can be beaten with a little work and, more importantly, they are a nuisance to end users that can lead to a loss of business.

    Rate Limiting – Advanced scraping utilities are very adept at mimicking normal browsing behavior, but most hastily written scripts are not. Bots will follow links and make web requests at a much more frequent, and consistent, rate than normal human users. Limiting IPs that make several requests per second will catch basic bot behavior (a simple sketch of this idea appears at the end of this section).

    IP Blacklists - Subscribing to lists of known botnets & anonymous proxies and uploading them to your firewall access control list will give you a baseline of protection. A good number of scrapers employ botnets and Tor nodes to hide their true location and identity. Always maintain an active blacklist that contains the IP addresses of known scrapers and botnets as well as Tor nodes.

    Add-on Modules – Many companies already own hardware that offers some layer of security. Now, many of those hardware providers are also offering additional modules to try and combat bot attacks. As many companies move more of their services off premise, leveraging cloud hosting and CDN providers, the market share for this type of solution is shrinking.

    It is also important to note that these types of solutions are a good baseline but should not be expected to stop all bots. After all, this is not the core competency of the hardware you are buying, but a mere plugin.

Some example providers are:

    Imperva SecureSphere – Imperva offers Web Application Firewalls, or WAFs. This is an appliance that applies a set of rules to an HTTP connection. Generally, these rules cover common attacks such as Cross-site Scripting (XSS) and SQL Injection. By customizing the rules to your application, many attacks can be identified and blocked. The effort to perform this customization can be significant and needs to be maintained as the application is modified.

    F5 – ASM – F5 offers many modules on their BigIP load balancers, one of which is the ASM. This module adds WAF functionality directly into the load balancer. Additionally, F5 has added policy-based web application security protection.
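
To make the rate-limiting tactic mentioned above concrete, here is a minimal, illustrative C# sketch (all names are hypothetical, and this is not production code): it keeps a sliding one-second window of request timestamps per IP and flags any IP that exceeds a configured threshold. A real deployment would also need thread safety, eviction of idle IPs, and a response strategy such as serving a CAPTCHA or returning an HTTP 429.

using System;
using System.Collections.Generic;

// Minimal sketch of per-IP rate limiting: allow at most N requests per IP
// within a sliding one-second window.
public class SimpleRateLimiter
{
    private readonly int _maxRequestsPerSecond;
    private readonly Dictionary<string, Queue<DateTime>> _requests =
        new Dictionary<string, Queue<DateTime>>();

    public SimpleRateLimiter(int maxRequestsPerSecond)
    {
        _maxRequestsPerSecond = maxRequestsPerSecond;
    }

    // Returns true if the request from this IP should be allowed.
    public bool AllowRequest(string ip)
    {
        DateTime now = DateTime.UtcNow;

        Queue<DateTime> timestamps;
        if (!_requests.TryGetValue(ip, out timestamps))
        {
            timestamps = new Queue<DateTime>();
            _requests[ip] = timestamps;
        }

        // Discard timestamps that have fallen out of the one-second window.
        while (timestamps.Count > 0 && (now - timestamps.Peek()).TotalSeconds > 1)
            timestamps.Dequeue();

        if (timestamps.Count >= _maxRequestsPerSecond)
            return false; // request rate looks automated; block or challenge it

        timestamps.Enqueue(now);
        return true;
    }
}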

Software-as-a-service

There are website security software options that include, and sometimes specialize in web scraping protection. This type of solution, from my perspective, is the most effective path.

The SaaS model allows someone else to manage the problem for you and respond with more efficiency even as new threats evolve.  Again, I’m admittedly biased as I co-founded Distil Networks.

When shopping for a SaaS solution to protect against web scraping, you should consider some of the following factors:

•    Does the provider update new threats and rules in real time?

•    How does the solution block suspected non-human visitors?

•    Which types of proactive blocking techniques, such as code injections, does the provider deploy?

•    Which of the reactive techniques, such as rate limiting, are used?

•    Does the solution look at all of your traffic or a snapshot?

•    Can the solution block bots before they reach your infrastructure – and your data?

•    What kind of latency does this solution introduce?

I hope you now have a clearer understanding of web scraping and why it has become so prevalent in travel, and even more important, what you should do to protect and leverage these occurrences.

Source: http://www.tnooz.com/article/what-you-need-to-know-about-web-scraping-how-to-understand-identify-and-sometimes-stop/

Sunday 24 May 2015

Roles of Data Mining in Predicting, Tracking, and Containing the Ebola Outbreak

One of the most diverse continents on earth, Africa astounds the world with its vast savannas and great deserts and with its ancient architecture and modern cities, but Africa also has its share of tragedies and woes.

First identified near the Ebola River in the Democratic Republic of Congo in 1976, Ebola Hemorrhagic Fever, a deadly zoonotic disease caused by the Ebola virus, has been spreading through West Africa like wildfire, engulfing everything in its way and creating widespread panic.

What has added insult to injury is the fact that the region has long endured the severe consequences of civil wars and social conflicts, and diseases like malaria, HIV/AIDS, yellow fever, cholera etc. have remained endemic to the region for a long time, causing tens of thousands of deaths every year.

Reportedly, Ebola has already killed at least 2,296 people, and there are about 3,685 confirmed cases of infection. The mortality rate has been swinging between 50% and 90%, depending on the quality of care and nutrition. According to the WHO, the disease is likely to infect as many as 20,000 people before it is finally brought under control.

Crisis of Data

When it comes to healthcare management, clinical data is one of the key components. The value of data becomes even more urgent in an emergency situation like that of West Africa. The more relevant data you have, the bigger the picture you can create for taking aggressive measures. To use Peter Drucker’s words, “What gets measured gets managed.”

Factual data is a precondition for the doctors and health science experts working in the field for measuring and managing the situation. Data helps them to assess their successes or failures and reorient their actions. One of the important reasons why the fight against the Ebola outbreak is turning out into a losing battle is the insufficiency of data. Recently, Scientific American magazine wrote:

Right now, there are not even enough beds for sick patients nor enough data coming in to help track cases. Surveillance and tracking of those who were possibly exposed to Ebola remain inadequate.

In Science magazine, Gretchen Vogel suggests that the death toll of Ebola patients could be much higher than it is currently estimated. She says, “Exactly how many unrecorded Ebola deaths have occurred will never be known. Health officials are keeping track of suspected and probable cases, many of which are people who died before they could be tested.” Greg Slabodkin voices similar concerns in Health Data Management and points at the need of an integrated global biosurveillance system.

The absence of reliable and actionable data has badly hampered the efforts of combatting Ebola and providing proper medical care to the victims. CDC Director Dr. Tom Frieden describes it as a “fog-of-war situation”.

Data Mining: Bots Were the First to Warn

When you flip the coin, however, the situation is not completely bleak and desperate. Even if Big Data technologies have fallen short in predicting, tracking, and containing the epidemic, mainly due to the lack of data from the ground, they have not entirely failed. Data scientists and healthcare experts the world over are making concerted efforts to know, track, and defeat the Ebola virus—some on the ground and some in their labs.

The increasing level of collaboration among biomedical specialists, geneticists, virologists, and IT experts has definitely contributed to slowing down the transmission of the virulent disease dubbed “the plague of modern day”. Médecins Sans Frontières and Healthmap.org are excellent examples in this regard.

    “By deploying bots and crawlers and by using advanced machine learning algorithms, the Boston-based global infectious disease surveillance system, HealthMap was able to predict and raise concerns about the spread of a mysterious hemorrhagic fever in West Africa nine days earlier than WHO did.”

Run by a team of 45 researchers, epidemiologists, and software developers at Boston Children’s Hospital, HealthMap mines data from search engine queries, social media platforms, health information sites, news reports and crowd-sourced information to track the transmission of the disease and provides an up-to-date timeline report with an interactive map, making it easier for the international health agencies to devise more effective action plans.

HealthMap serves as a good example of how crucial Big Data and data mining technologies could be for handling a healthcare emergency with fact-based and data-driven decisions.

Ebola Data

In their letter to The Lancet, research scientist Rashid Ansumana and his colleagues, working on Ebola in Sierra Leone, highlighted the need to develop epidemic surveillance systems “by adopting new data-sharing technologies.” They wrote, “Emerging technologies can help early warning systems, outbreak response, and communication between health-care providers, wildlife and veterinary professionals, local and national health authorities, and international health agencies.”

Data-Driven Initiatives to Control the Outbreak

The era of systematic use of data for making better epidemiological predictions and for finding effective healthcare solutions began with Google Flu Trends in 2007, and the rapidly developing tools, technologies, and practices in Big Data have increased the role of data in healthcare management.

There are a number of data-driven undertakings in progress which have contributed to counter the raging spread of Ebola. Brockmann Lab, run by Professor Dirk Brockmann and his colleagues, for example, has created a computer model for studying correlations and probabilities in the explosion of new cases of infection.

World air traffic transportation and relative import risk. Source: Brockmann Lab

By applying computational and statistical models, they predict which areas, cities or regions in the world are at risk of becoming the next Ebola epidemic hotspots. Similarly, Alessandro Vespignani–a network scientist, statistical physicist, and Northeastern professor–has been using human mobility network data to track cases of Ebola infection and dissemination.

The Swedish NGO Flowminder Foundation has been aggregating, mining, and analyzing anonymized mobile phone location data and is developing national mobility estimates for West Africa to help the local and international agencies to combat the disease.

Meanwhile, innovations with Epi Info VHF, a software tool that provides case management, contact tracing, analysis and reporting services for Ebola and other hemorrhagic fever outbreaks, and the OpenStreetMap project, which supplies location information and spatial data for the affected areas, have further helped to guide the intervention initiatives.

However, with all optimism about the growing roles of Big Data and data mining, we also need to be mindful about their limitations. Newsweek aptly puts: “While no media-trawling bot could ever replace national and international health agencies, such tools may be starting to help fill in some of the most gaping holes in real-time knowledge.”

Source: http://www.grepsr.com/blog/data-mining-tracking-ebola-outbreak/

Friday 22 May 2015

Hard-Scraped Hardwood Flooring: Restoration of History

Throughout history, hardwood flooring has undergone dramatic changes, from the meticulously hand-scraped, polished hardwood floors of the majestic plantations of the Deep South to modern-day technology providing maintenance-free wood flooring designed for comfort and appearance. The hand-scraped hardwood floors of the South depicted charm, with an old rustic nature and character often associated with that era. Today, hand-scraped hardwood flooring is being revitalized and used in upscale homes and places of business to restore the old country charm that once faded into oblivion.

As the name implies, hand-scraped flooring involves retexturing the top layer of the flooring material by various methods in an attempt to mimic the rustic appearance of the flooring of yesteryear. Depending on the degree of texture required, hand scraping hardwood material is often accomplished by highly skilled craftsmen with specialized tools and years of experience perfecting this procedure. When properly done, hand-scraped hardwood floors add texture, richness and uniqueness not offered by any similar hardwood flooring product.

Rooted in history, these types of floors are available with finished or unfinished surfaces. The majority of individuals selecting hand-scraped hardwood flooring elect a prefinished floor to reduce installation and finishing labor charges per square foot, allowing budget guidelines to bend, not break. As expected, hand-scraped flooring is expensive and, depending on the grade and finish selected, can range from $15 to $40 per square foot and beyond for material only. Preparation of the material is labor intensive, adding dramatically to the overall cost per square foot. The recommended professional installation can and often does increase the cost per square foot as well, placing this method of hardwood flooring well out of reach of the average hardwood floor purchaser.

With numerous selections of hand-scraped finishes available, each finish is designed to bring out a different appearance making it a one-of-a-kind work of art. These numerous finish selections include:

• Time worn aged, dark coloring stain application bringing out grain characteristics

• Wire brushed, providing a highlighted "grainy" effect with obvious rough texture

• Hand sculpted, smoother distressed uniform appearance

• French Bleed, staining of edges and side joints with a much darker stain to give a bleeding effect to the wood

• Hand Hewn or Rough Sawn, with visible and noticeable saw marks

Regardless of the selection made, scraped flooring cannot be compared to any other available flooring material based on durability, strength and visual appearance. Limited by only the imagination and creativity, several wood species can be used to create unusual floor patterns, highlighting main focal points of personal libraries and art collections.

The precise process utilized in the creation of scraped floors projects a custom look with deep color and subtle warm highlights. With radiant natural light reflecting off this type of floor, beauty and depth radiate in a fashion that fills the room with solitude and serenity, encompassing all who enter. Hand-scraped hardwood floors speak of the past: a time of dissent, a time of war and ambiguity towards other races, and the bloodshed endured so that all men could be treated as equals. More than exquisite flooring, hand-scraped hardwood flooring is the restoration of history.

Source: http://ezinearticles.com/?Hard-Scraped-Hardwood-Flooring:-Restoration-of-History&id=6333218

Monday 18 May 2015

Introducing ScrapeShield: Discover, Defend & Deter Content Scraping

If you're a publisher, whether an individual blogger or a major media outlet, you've undoubtedly experienced content scraping: you search the web for an article you've published or other original content you've created, and you find it copied and republished on some other random website. Often the site will be full of ads. And, sometimes, it will even rank higher in search results than your original work.

While you may envision an army of individuals copying and pasting your content on their sites, the truth is content scraping is typically an automated process with bots that grab original content and then republish it without human intervention onto link farm sites. CloudFlare has blocked many of these bots automatically in the past, but we decided it was time to do something to more actively stop them.

Introducing ScrapeShield

ScrapeShield is an app created by the CloudFlare team. It incorporates several existing CloudFlare features like email obfuscation and hotlink protection that serve to protect from content scraping and adds a number of new features as well. Because we believe every publisher of original content should be able to understand and control how their work is used, we're providing ScrapeShield free for every CloudFlare user.

Detect, Defend & Deter

ScrapeShield has different elements to help you detect when your content is scraped, defend your site against content scrapers, and even deter content scrapers from targeting you in the first place. If you enable ScrapeShield, CloudFlare will automatically insert invisible tracking beacons in your content. When automated bots scrape your content, they pull the beacons along with them. CloudFlare detects these beacons when they ping from sites that aren't your own. You can access your ScrapeShield control panel to see where your content is being republished. Not only is this useful in showing scraping, but you can also see users who are reading your content through proxy services like Flipboard or Pulse.

The data from the content beacons is fed back into CloudFlare's protection system. As CloudFlare identifies content scraping bots, we automatically prevent them from accessing your site. Just like Project Honey Pot, the original inspiration for CloudFlare, used traps to detect when spammers were harvesting email addresses, CloudFlare now uses data from ScrapeShield to identify content scrapers and keep them off publishers' sites.

Maze

We didn't want to just stop scrapers from attacking sites on CloudFlare, we also wanted to tie up their resources so they couldn't harm the rest of the web. To do this, we created Maze. Maze routes known content scrapers who are visiting ScrapeShield-protected sites into a virtual labyrinth of gibberish and gobbledygook. We dynamically throttle the bandwidth and speed so instead of the pages loading as fast as possible, the connection is held open to the scrapers and their resources are tied up.

We use excess resources on the CloudFlare network to generate Maze, and it doesn't consume any of our publishers' resources or add any additional load to their sites. What's beautiful about the system is that the only way that content scrapers can be sure they're avoiding Maze is to avoid CloudFlare's IP addresses entirely. For any content scrapers who may be reading this, here's a helpful list of all of our IPs so you can make sure to stay away.

No Pinning

Finally, with the rise of sites like Pinterest, innocent content scraping may become even more prolific. While many sites welcome their images being pinned, we wanted to make it easy to opt out. ScrapeShield includes an option to add the no-pinning meta tag to your site to prevent your images from being pinned. As other similar services add opt-out mechanisms, expect that we'll include an easy way for you to use them right from the ScrapeShield interface.

The health of the web depends on publishers creating original content getting credit for their creations. CloudFlare is committed to building a better web, and we're extremely excited about ScrapeShield as a new tool to help publishers do exactly that.

Source: https://blog.cloudflare.com/introducing-scrapeshield-discover-defend-dete/

Thursday 14 May 2015

Kimono Is A Smarter Web Scraper That Lets You “API-ify” The Web, No Code Required

A new Y Combinator-backed startup called Kimono wants to make it easier to access data from the unstructured web with a point-and-click tool that can extract information from webpages that don’t have an API available. And for non-developers, Kimono plans to eventually allow anyone track data without needing to understand APIs at all.

This sort of smarter “web scraper” idea has been tried before, and has always struggled to find more than a niche audience. Previous attempts with similar services like Dapper or Needlebase, for example, folded. Yahoo Pipes still chugs along, but it’s fair to say that the service has long since ceased to be a priority for its parent company.

But Kimono’s founders believe that the issue at hand is largely timing.

“Companies more and more are realizing there’s a lot of value in opening up some of their data sets via APIs to allow developers to build these ecosystems of interesting apps and visualizations that people will share and drive up awareness of the company,” says Kimono co-founder Pratap Ranade. (He also delves into this subject deeper in a Forbes piece here). But often, companies don’t know how to begin in terms of what data to open up, or how. Kimono could inform them.

Plus, adds Ranade, Kimono is materially different from earlier efforts like Dapper or Needlebase, because it’s outputting to APIs and is starting off by focusing on the developer user base, with an expansion to non-technical users planned for the future. (Meanwhile, older competitors were often the other way around).

The company itself is only a month old, and was built by former Columbia grad school companions Ranade and Ryan Rowe. Both left grad school to work elsewhere, with Rowe off to Frog Design and Ranade at McKinsey. But over the nearly half-dozen or so years they continued their career paths separately, the two stayed in touch and worked on various small projects together.

One of those was Airpapa.com, a website that told you which movies were showing on your flights. This ended up giving them the idea for Kimono, as it turned out. To get the data they needed for the site, they had to scrape data from several publicly available websites.

“The whole process of cleaning that [data] up, extracting it on a schedule…it was kind of a painful process,” explains Rowe. “We spent most of our time doing that, and very little time building the website itself,” he says. At the same time, while Rowe was at Frog, he realized that the company had a lot of non-technical designers who needed access to data to make interesting design decisions, but who weren’t equipped to go out and get the data for themselves.

With Kimono, the end goal is to simplify data extraction so that anyone can manage it. After signing up, you install a bookmarklet in your browser, which, when clicked, puts the website into a special state that allows you to point to the items you want to track. For example, if you were trying to track movie times, you might click on the movie titles and showtimes. Then Kimono’s learning algorithm will build a data model involving the items you’ve selected.

That data can be tracked in real time and extracted in a variety of ways, including to Excel as a .CSV file, to RSS in the form of email alerts, or for developers as a RESTful API that returns JSON. Kimono also offers “Kimonoblocks,” which lets you drop the data as an embed on a webpage, and it offers a simple mobile app builder, which lets you turn the data into a mobile web application.

For developer users, the company is currently working on an API editor, which would allow you to combine multiple APIs into one.

So far, the team says, they’ve been “very pleasantly surprised” by the number of sign-ups, which have reached ten thousand*. And even though only a month old, they’ve seen active users in the thousands.

Initially, they’ve found traction with hardware hackers who have done fun things like making an airhorn blow every time someone funds their Kickstarter campaign, for instance, as well as with those who have used Kimono for visualization purposes, or monitoring the exchange rates of various cryptocurrencies like Bitcoin and dogecoin. Others still are monitoring data that’s later spit back out as a Twitter bot.

Kimono APIs are now making over 100,000 calls every week, and usage is growing by over 50 percent per week. The company also put out an unofficial “Sochi Olympics API” to showcase what the platform can do.

The current business model is freemium based, with pricing that kicks in for higher-frequency usage at scale.

The Mountain View-based company is a team of just the two founders for now, and has initial investment from YC, YC VC and SV Angel.

Source: http://techcrunch.com/2014/02/18/kimono-is-a-smarter-web-scraper-that-lets-you-api-ify-the-web-no-code-required/

Monday 4 May 2015

Web Data Scraping - Scrape Business Data in no time

The Internet has evolved as one of the largest repositories of information for your business. You can design intelligent business processes to access a whole host of relevant information sources that will help you strategize, implement and deliver effective business objectives. Leveraging the benefits and usefulness of Web Scraping Tools is one such methodology that most businesses have adopted. Let us take a look at some of the ways it helps you easily scrape data relevant for your business.

Scraping for Business Information

Web Data Scraping is a technique employed by most organizations. It involves the implementation of tools that help businesses extract unstructured data and convert it into usable business information. The focus of most scraping initiatives revolves around the organization’s need to glean the following information:

•    Competitor analysis to structure and strategize effectively

•    Price comparisons to price their products competitively

•    Customer feedback to enhance their product portfolio and provide customers with a better brand experience

•    Market dynamics to help them identify areas of opportunity and threat

Using Scraping Tools

The abundance of information available on the Internet to help you build a productive business strategy can be easily extracted and leveraged to benefit your business. Tools have been designed with intuitive interfaces and intelligent algorithms that help in furthering this end.

Website Data Scraping tools are designed to be compatible with a wide variety of applications so that they can explore a huge range of information sources. These tools are fully automated and offer drag-and-drop functionality, ensuring users get to leverage the benefits of speed and convenience.

Data extraction tools are not only adept at extracting data, but are also equally well-equipped to combine relevant statistics from several social media platforms like YouTube, Twitter, and Google Analytics and so on. This helps businesses to analyse trends and plan strategies accordingly.

Challenges of the Data Scraping Process

Just as there is no dearth of data to be collected from the Web, there is also an abundance of web scraping tools to execute the data collection process. However, the capability of the tool to help you collect the appropriate data needs to be assured before you can proceed with its implementation. Some of the challenges faced by most businesses owing to their wrong choice of tools include the following:

•    Run-of-the-mill extraction tools are unable to scale up sufficiently in order to capture large volumes of data

•    Some tools are also unable to establish compatibility with most data sources and therefore do not provide a holistic data collection approach

•    Some tools are also not equipped to conduct an automatic detection of updates made to a data source and therefore end up providing inaccurate data.

In light of all this, it is essential that you identify the right tool for your needs and select one built on up-to-date technology to help you achieve the following:

•    Ensure that you are able to access the appropriate data that you want

•    Help you structure it in the format you want

•    Provide quick and easy access to all available data sources no matter how complex

•    Run accurately and reliably to help you churn out usable information

Source: http://scraping-solutions.blogspot.in/2014_07_01_archive.html