Version: v2

Migration from ScraperAPI

Migrate from ScraperAPI to ScrapingAnt web scraping API effortlessly.

The purpose of this guide is to help you migrate from ScraperAPI to ScrapingAnt API with minimal changes to your code base.

What is ScraperAPI?

ScraperAPI is a web scraping API with browser rendering and proxy rotation capabilities. It's a low-code solution that allows you to scrape websites.

It provides plenty of request customisation options along with a simple API.

What ScrapingAnt API could offer?

ScrapingAnt is an alternative to ScraperAPI with its own set of features and technology wrapped in a similar API, so the migration itself is a simple process.

In addition to the features that ScraperAPI provides, ScrapingAnt API offers:

  • Unlimited concurrent requests
  • A price range that covers the needs of small and medium businesses
  • A reliable and diverse proxy pool with unique proxy rotation technology
  • Cloud browser technology with the ability to run JavaScript scenarios

Depending on your particular use case, our technology may provide a better alternative in terms of:

  • Success rate (number of successful requests per total number of requests)
  • Speed (API request response time)
  • Cost efficiency (price per request)

Prerequisites

To start the migration process you need to have:

  • A ScrapingAnt API key (issued after registration)

That's it! You are ready to start the migration process.

tip

Please be aware that ScrapingAnt's free plan limits the number of concurrent requests to 1 and is mainly intended for testing and development purposes. We suggest performing production testing on the smaller paid plans.

Contact us via support@scrapingant.com for pricing inquiries.

Migration process

Step 1. Make API endpoint changes

The ScraperAPI sync API endpoint is http://api.scraperapi.com. It processes all requests for web scraping tasks.

ScrapingAnt API has a set of API endpoints for different purposes:

  • The general endpoint proxies the response from the target website to the client, so it's a direct replacement for the ScraperAPI sync API endpoint.
  • The extended endpoint returns the response in JSON format together with a text representation of the content, cookies, headers and much more (like XHRs and iframes).

So, the first step is to replace the ScraperAPI sync API endpoint with one of ScrapingAnt's alternatives.
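The endpoint swap can be sketched as two URL builders. This is a minimal illustration (the helper names are hypothetical, not part of any client library); only the host, path and API-key parameter name differ between the two services:

```python
from urllib.parse import urlencode


def scraperapi_url(api_key: str, target: str) -> str:
    # Old ScraperAPI sync endpoint
    return "http://api.scraperapi.com?" + urlencode({"api_key": api_key, "url": target})


def scrapingant_url(api_key: str, target: str) -> str:
    # New ScrapingAnt general endpoint: the host, path and key name change
    return "https://api.scrapingant.com/v2/general?" + urlencode(
        {"x-api-key": api_key, "url": target}
    )
```

Note that urlencode takes care of escaping the target URL, which both APIs require.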

Step 2. Make API request changes

Basic requests for both APIs require 2 parameters:

  • url - URL of the target website
  • api_key for ScraperAPI or x-api-key for ScrapingAnt API - API key to authenticate the request

The only significant difference is that ScrapingAnt renders web pages using a headless browser by default, while ScraperAPI doesn't render web pages by default.

In case you'd like to disable rendering for ScrapingAnt API, you can use the browser=false parameter.

For example, the basic request without browser rendering for ScraperAPI looks like this:

curl "http://api.scraperapi.com?api_key=SCRAPERAPI_API_KEY&url=https%3A%2F%2Fexample.com"

And for ScrapingAnt API:

curl "https://api.scrapingant.com/v2/general?x-api-key=SCRAPINGANT_API_KEY&url=https%3A%2F%2Fexample.com&browser=false"

And vice versa: if you'd like to enable browser rendering for ScrapingAnt, you can use the browser=true parameter or simply omit it.

curl "https://api.scrapingant.com/v2/general?x-api-key=SCRAPINGANT_API_KEY&url=https%3A%2F%2Fexample.com"

Most of the other parameters have similar behavior but may have different names.

The most significant ones are:

| ScraperAPI parameter | ScrapingAnt parameter | Description |
| --- | --- | --- |
| country_code | proxy_country | Country code of the proxy to use for the request. More info here. |
| premium | proxy_type | Use a premium proxy. ScrapingAnt allows configuring the proxy type via enum parameters. More info here. |
| ultra_premium | proxy_type | Use an ultra-premium proxy. ScrapingAnt allows configuring the proxy type via enum parameters like datacenter and residential. More info here. |
| render | browser | Enables browser rendering. More info here. |
| keep_headers | ant- prefix | ScraperAPI allows passing request headers from the client through to the target. ScrapingAnt uses a different mechanism: to pass a header to the target website, add the ant- prefix to the header name. More info here. |
| wait_for_selector | wait_for_selector | Wait for a specific CSS selector to appear on the page before returning the result. No changes between the two APIs. More info here. |

There are plenty of other parameters that can be used to customise your request in both APIs. You can find the full list of parameters for ScrapingAnt API here.
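The header-forwarding difference is worth a concrete example. The helper below is an illustrative sketch (ant_headers is a hypothetical name, not part of any client library) that rewrites ordinary header names into their ant--prefixed form, the ScrapingAnt replacement for ScraperAPI's keep_headers:

```python
def ant_headers(headers: dict) -> dict:
    # Prefix each header name with "ant-" so ScrapingAnt forwards it
    # to the target website (replaces ScraperAPI's keep_headers flag).
    return {f"ant-{name}": value for name, value in headers.items()}


# Usage sketch with the requests library (API key is a placeholder):
# requests.get(
#     "https://api.scrapingant.com/v2/general",
#     params={"x-api-key": "SCRAPINGANT_API_KEY", "url": "https://example.com"},
#     headers=ant_headers({"Referer": "https://google.com"}),
# )
```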

Step 3. Test your changes and integration

Although the migration process is smooth and simple, it's always a good idea to test your changes and integration before going live.

As ScrapingAnt doesn't bill for unsuccessful requests, you can test your integration without any risk. We also encourage our users to implement a retry mechanism to handle possible detections and other edge cases.

A sample retry mechanism could look like this:

```python
import requests
import time


def get_response(url, api_key, retries_left=3):
    response = requests.get(
        f"https://api.scrapingant.com/v2/general?x-api-key={api_key}&url={url}"
    )
    if response.status_code == 200:
        return response
    elif response.status_code == 409 and retries_left > 0:
        # Handle rate limit for free plan
        time.sleep(1)
        return get_response(url, api_key, retries_left - 1)
    elif response.status_code == 403 and retries_left > 0:
        # Handle detection
        time.sleep(1)
        return get_response(url, api_key, retries_left - 1)
    else:
        # Handle other errors (or retries exhausted)
        return None
```

You can learn more about different not-billable response codes here and prepare your own logic based on the error-type.

It's a common practice to try different combinations of browser rendering and proxy settings automatically when receiving a detection rate more than a specific threshold to keep the cost and performance balance.

For example:

```python
import requests
import time


def get_response(url, api_key, browser, proxy_type):
    response = requests.get(
        "https://api.scrapingant.com/v2/general"
        f"?x-api-key={api_key}&url={url}"
        # str(...).lower() turns Python's True/False into the
        # lowercase true/false the API expects
        f"&browser={str(browser).lower()}&proxy_type={proxy_type}"
    )
    if response.status_code == 200:
        return response
    elif response.status_code == 409:
        # Handle rate limit for free plan
        time.sleep(1)
        return get_response(url, api_key, browser, proxy_type)
    elif response.status_code == 403:
        # Handle detection (a production version should cap retries)
        if browser:
            # Try without browser rendering
            return get_response(url, api_key, False, proxy_type)
        elif proxy_type == "residential":
            # Try with a datacenter proxy
            return get_response(url, api_key, browser, "datacenter")
        else:
            # Try with a residential proxy
            return get_response(url, api_key, browser, "residential")
    else:
        # Handle other errors
        return None
```

It's just a basic sample, but it shows the flexibility of the automation that can be built on top of the response information from ScrapingAnt API.

Further support

This document provides basic information about the migration process from ScraperAPI to ScrapingAnt API.

We recommend checking the latest documentation for both APIs to get a better overview of the feature alternatives at ScrapingAnt.

Complex integrations could require additional support, so don't hesitate to reach out to us via support@scrapingant.com if you have any questions.

FAQ

What is the difference between ScrapingAnt and ScraperAPI?

ScrapingAnt is an alternative to ScraperAPI with its own set of features and technology wrapped in a simple API.

ScrapingAnt provides unique browser rendering technology that enables data extraction without getting blocked by anti-bot systems.

Where can I find the full list of parameters for ScrapingAnt API?

You can find the full list of parameters for ScrapingAnt API here.

Does ScrapingAnt provide libraries for different programming languages?

Yes, ScrapingAnt provides libraries for different programming languages:

  • Python - Python client for ScrapingAnt API
  • JS - Javascript client for ScrapingAnt API

Does ScrapingAnt provide a free plan?

Yes, ScrapingAnt provides a free plan with 10,000 API credits per month. You can subscribe to the free plan during the registration process here.

Does ScrapingAnt bill for unsuccessful requests?

No, ScrapingAnt doesn't bill for unsuccessful requests. Whenever your request gets blocked by an anti-bot system or you get an error response from the ScrapingAnt API, you are not billed for it.

Does ScrapingAnt limit the number of concurrent requests?

No, ScrapingAnt doesn't limit the number of concurrent requests. You can send as many concurrent requests as you need.
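Since there is no concurrency cap, fanning requests out with a thread pool is a natural pattern. Here is a minimal sketch (scrape_many and the fetch callable are illustrative names, not part of the API); fetch would typically wrap a requests.get call against the general endpoint:

```python
from concurrent.futures import ThreadPoolExecutor


def scrape_many(urls, fetch, max_workers=10):
    # Fan the URLs out across worker threads; `fetch` is any callable
    # that takes a URL (e.g. a wrapper around a ScrapingAnt API request).
    # Results come back in the same order as the input URLs.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, urls))
```

On the free plan, remember that concurrency is limited to 1, so a pool like this only pays off on paid plans.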