Frequently Asked Questions

How do I get started?

Simply fill in the web form on our homepage.

Can you walk me through an example project?

Sure. Let's call an example customer Dave.

  1. Dave approaches us through the contact form on our website. He advises that he would like to crawl a competitor's eCommerce store - roughly 20,000 products, refreshed each fortnight for competitive research - recording the following information for each product (a record like this is sketched after the list):
    • Category
    • Name
    • URL
    • Price
  2. We answer a few of Dave's queries and clarify the requirements.
  3. We send Dave a quote, which he accepts.
  4. Dave signs up to our billing management system. This system is used for billing, support and secure data access. He pays the deposit.
  5. We start work on Dave's project, and finish within three days.
  6. Dave receives an email with a secure link - allowing him to download the data in CSV format.
  7. We set up his custom software solution to run on our servers each fortnight - delivering him a new email link each time.
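For illustration, a record from a job like Dave's could be modelled in golang (the language our scrapers are written in, as discussed below) roughly as follows. The field names mirror Dave's list; everything else - the struct tags and the sample values - is an assumption made for the sketch.

```go
package main

import "fmt"

// Product models one record from a job like Dave's: the four fields
// listed above. The struct tags assume a later JSON or CSV export
// step and are illustrative only.
type Product struct {
	Category string `json:"category"`
	Name     string `json:"name"`
	URL      string `json:"url"`
	Price    string `json:"price"` // a string preserves currency symbols as scraped
}

func main() {
	p := Product{
		Category: "Widgets",
		Name:     "Example Widget",
		URL:      "https://example.com/widgets/1",
		Price:    "$19.95",
	}
	fmt.Printf("%+v\n", p)
}
```

Keeping the price as a string preserves currency symbols and formatting exactly as they appear on the source site; parsing into a numeric type would happen later, if the customer needs it.
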
What data can you process?

We can process any publicly available website information. To be explicit, we cannot process data behind login forms. Simply show us a website and discuss with us the data you would like to collect, and we will tell you whether it is possible.

Do you scrape ethically?

Yes - of course. We process data under strict ethical guidelines, respecting robots.txt files, rate limits and common sense.

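As a minimal sketch of what respecting robots.txt and rate limiting can look like in golang - the temoto/robotstxt and golang.org/x/time/rate packages, the ExampleBot user agent, the example.com target and the one-request-per-second rate are all assumptions for the example, not a statement of our exact stack:

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"

	"github.com/temoto/robotstxt"
	"golang.org/x/time/rate"
)

const userAgent = "ExampleBot" // hypothetical crawler name

func main() {
	// Fetch and parse the target site's robots.txt (site is illustrative).
	resp, err := http.Get("https://example.com/robots.txt")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	robots, err := robotstxt.FromResponse(resp)
	if err != nil {
		panic(err)
	}

	// One request per second, no bursts: a conservative, polite rate.
	limiter := rate.NewLimiter(rate.Every(time.Second), 1)

	for _, path := range []string{"/products", "/admin"} {
		if !robots.TestAgent(path, userAgent) {
			fmt.Println("disallowed by robots.txt, skipping:", path)
			continue
		}
		// Block until the limiter allows another request.
		if err := limiter.Wait(context.Background()); err != nil {
			panic(err)
		}
		fmt.Println("fetching:", path)
		// ... perform the HTTP GET here ...
	}
}
```
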
What payment methods do you accept?

You may pay by credit card, Bitcoin or many alternative cryptocurrencies.

How do I access my account?

Simply log into our system by clicking the 'Login' button in the top right-hand corner, and follow the prompts.

How do I contact support?

Support enquiries for existing customers can be submitted through the client area. Click the 'Login' button in the top right-hand corner, then click 'Support' to create a support ticket.

Can I claim your service as a business expense?

We provide you with an invoice, addressed to the billing address in your client area and bearing our Australian Business Number - so yes, you can treat our invoices as you would any other business expense.

What technology do you use?

Good question. Most of our scrapers are written in golang, which is our preferred language for web scraping. It's a compiled language, which provides huge performance benefits over traditional web scraping tools (such as Python's BeautifulSoup). We use existing packages as well as our own to create an efficient scraper that fits your requirements.
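As a rough sketch of that style of scraper, assuming the widely used PuerkitoBio/goquery package - the URL and CSS selectors below are placeholders, not a real target:

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/PuerkitoBio/goquery"
)

func main() {
	// Placeholder listing page; a real job targets the customer's chosen site.
	resp, err := http.Get("https://example.com/products")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	doc, err := goquery.NewDocumentFromReader(resp.Body)
	if err != nil {
		panic(err)
	}

	// The selectors below are assumptions about the page's markup.
	doc.Find(".product").Each(func(i int, s *goquery.Selection) {
		name := s.Find(".name").Text()
		price := s.Find(".price").Text()
		url, _ := s.Find("a").Attr("href")
		fmt.Printf("%d: %s %s %s\n", i, name, price, url)
	})
}
```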

Once we've unit tested the golang code, we deploy cloud server instances to run the scraping work. The number of server instances we deploy depends on the amount of work required (the quantity of data to be collected, the complexity of the websites to be crawled, etc.). Often we rotate through proxies and use concurrency (goroutines) to control the rate at which we process data.
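A minimal sketch of that pattern - one goroutine per proxy, all pulling URLs from a shared channel, so the size of the proxy pool caps how fast we make requests. The proxy addresses and URLs are placeholders:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"sync"
)

// fetchAll spreads URLs across a fixed pool of goroutines, each bound
// to one proxy from the rotation.
func fetchAll(urls []string, proxies []string) {
	jobs := make(chan string)
	var wg sync.WaitGroup

	for i, p := range proxies {
		proxyURL, err := url.Parse(p)
		if err != nil {
			panic(err)
		}
		// Each worker gets its own client, routed through its own proxy.
		client := &http.Client{
			Transport: &http.Transport{Proxy: http.ProxyURL(proxyURL)},
		}
		wg.Add(1)
		go func(worker int, c *http.Client) {
			defer wg.Done()
			for u := range jobs {
				resp, err := c.Get(u)
				if err != nil {
					fmt.Printf("worker %d: %s: %v\n", worker, u, err)
					continue
				}
				resp.Body.Close()
				fmt.Printf("worker %d fetched %s (%s)\n", worker, u, resp.Status)
			}
		}(i, client)
	}

	for _, u := range urls {
		jobs <- u
	}
	close(jobs) // lets the workers drain and exit
	wg.Wait()
}

func main() {
	fetchAll(
		[]string{"https://example.com/page/1", "https://example.com/page/2"},
		[]string{"http://proxy1.example:8080", "http://proxy2.example:8080"},
	)
}
```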

This data is then stored in a database, usually PostgreSQL or MySQL.
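A minimal sketch of that storage step, assuming the lib/pq Postgres driver, a hypothetical connection string and a hypothetical products table matching the record fields sketched earlier:

```go
package main

import (
	"database/sql"

	_ "github.com/lib/pq" // Postgres driver, registered for database/sql
)

func main() {
	// Connection string and table schema are assumptions for illustration.
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/scrapes?sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Parameterised insert of one scraped record.
	_, err = db.Exec(
		`INSERT INTO products (category, name, url, price) VALUES ($1, $2, $3, $4)`,
		"Widgets", "Example Widget", "https://example.com/widgets/1", "$19.95",
	)
	if err != nil {
		panic(err)
	}
}
```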

Once complete - depending on the requirements - the data is analyzed and processed into the customer's required format. Often this will be a simple CSV file, which may be shared via FTP, AWS S3 or Google Cloud, or uploaded directly to a client's server via rsync, scp or similar.
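The CSV step itself needs nothing beyond golang's standard library. A minimal sketch, with the output path and column layout assumed for the example:

```go
package main

import (
	"encoding/csv"
	"os"
)

func main() {
	// Output path and columns mirror the example record above;
	// both are assumptions for illustration.
	f, err := os.Create("products.csv")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	rows := [][]string{
		{"category", "name", "url", "price"}, // header row
		{"Widgets", "Example Widget", "https://example.com/widgets/1", "$19.95"},
	}

	// WriteAll writes every row and flushes the underlying writer.
	w := csv.NewWriter(f)
	if err := w.WriteAll(rows); err != nil {
		panic(err)
	}
}
```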