Job Descriptions

Any advice on how best to crawl job descriptions from sites such as Indeed or LinkedIn by keyword? Is that possible with this tool?

Reactivate account

Please reactivate my account.

API returning fullTextContent instead of fullPageContent

Hello. When I crawl the same URL, the crawl sometimes returns the fullTextContent instead of the fullPageContent. Crawl ID: 3280311

Trying to get title tags

Hi, I've been trying to pull only the title tag from various URLs. While this works for some URLs, for others I get an empty result. An example of a URL I can't get the title tag for is https://www.ljhooker.com.au/ Thanks, Josip
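For reference, a minimal sketch of this kind of extraction in plain Node (not the 80legs runtime; `extractTitle` is an illustrative helper, not part of any API). One common cause of empty results is a site that sets its title via client-side JavaScript, so a crawler that only sees the initial HTML finds no `<title>` text:

```javascript
// Illustrative sketch: pull the <title> tag out of raw HTML with a regex.
// Pages that inject the title via JavaScript will match nothing here.
function extractTitle(html) {
  const match = /<title[^>]*>([\s\S]*?)<\/title>/i.exec(html);
  return match ? match[1].trim() : '';
}

console.log(extractTitle('<head><title> Example </title></head>')); // "Example"
console.log(extractTitle('<head></head>')); // ""
```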

Trying Demo in Tester results in Error

I'm trying to debug an 80legs app using the tester at http://80apptester.80legs.com/ and am getting the error `TypeError: EightyAppBase is not a constructor`. As a sanity check, I tried a demo app from the GitHub repo: https://raw.githubusercontent.com/datafiniti/EightyApps/master/apps/LinkCollector.js This demo produces the same error. Is the tester app not up to date with the spider runner? Scott
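As context, the error itself is generic JavaScript behavior: calling `new` on anything that is not a function throws exactly this TypeError, which suggests the tester is loading a build where `EightyAppBase` is undefined or exported as a non-function. A standalone reproduction (the stand-in object below is illustrative, not the real export):

```javascript
// Illustrative only: `new` on a non-function throws the same TypeError
// the tester reports.
const EightyAppBase = {}; // stand-in for a missing or non-function export
let error = null;
try {
  new EightyAppBase();
} catch (e) {
  error = e; // TypeError: EightyAppBase is not a constructor
}
console.log(error instanceof TypeError); // true
```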

Crawl returning timeout error

Hello. Crawl ID 3261670 is returning a timeout error on a relatively big website page. The same type of page, but smaller, was crawled without errors.

Demo 80Apps not working

Hi there, I tried running the DomainCollector.js app and am seeing this error in the output: `EightyAppBase is not a constructor` Has there been a change to the API? Scott

non-circular object error

Hi there, The output from the crawl consistently shows the error `expected result of processDocument to be a non-circular object or array of non-circular objects`, yet testing the app with the test framework returns the results we are looking for. The processDocument script captures all external links, stores them in an array, converts that array to JSON, and adds it to the return object.

```
app.processDocument = function (html, url, headers, status, $) {
    const $html = this.parseHtml(html, $);
    const links = [];
    const object = new Object();
    const r = /:\/\/(.[^/]+)/;
    const urlDomain = url.match(r)[1];
    const normalizedUrlDomain = urlDomain.toLowerCase();
    // gets all links in the html document
    $html.find('a').each(function (i, obj) {
        const link = app.makeLink(url, $(this).attr('href'));
        if (link) {
            const linkDomain = link.match(r)[1];
            if (linkDomain.toLowerCase() !== normalizedUrlDomain) {
                if (!links.includes(linkDomain)) {
                    links.push(linkDomain);
                }
            }
        }
    });
    object['list'] = JSON.stringify(links);
    return object;
};
```

Any help on how to recode this so that we get an array of external links instead of this error message would be great. Scott
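One way this logic might be restructured, sketched in plain Node rather than the 80legs runtime (`collectExternalDomains` is an illustrative name, not part of the 80App API, and the `URL` class assumes a modern JavaScript engine): return plain data rather than a JSON string, since DOM or jQuery references leaking into the return value are a common source of the "non-circular object" complaint.

```javascript
// Sketch: collect unique external domains and return a plain, non-circular
// object. Nothing from the DOM is kept, only hostname strings.
function collectExternalDomains(pageUrl, hrefs) {
  const pageDomain = new URL(pageUrl).hostname.toLowerCase();
  const domains = [];
  for (const href of hrefs) {
    let link;
    try {
      link = new URL(href, pageUrl); // resolves relative links against the page
    } catch (e) {
      continue; // skip malformed hrefs
    }
    const domain = link.hostname.toLowerCase();
    if (domain && domain !== pageDomain && !domains.includes(domain)) {
      domains.push(domain);
    }
  }
  // Plain strings in a plain object: no circular references, and no need
  // to JSON.stringify the array yourself.
  return { list: domains };
}
```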

Could not scrape from amazon.ca

https://amazon.ca returned `[{"url":"https://amazon.ca","result":"\"{}\""}]`

Crawl Stuck 3230056

Hello, this crawl is stuck. What would be the best way to troubleshoot it? I have a list of 10,000 URLs, and the 80legs tester only shows the backend response for one URL. Thank you.