Parth Patel - 26 days ago
Java Question

Crawler4j runtime error

I have implemented a web crawler using the crawler4j library.
I am encountering the following error:

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.


I searched for the error on Google and found that the SLF4J library was missing, so I downloaded it and added it to the project. After that, I get the error shown in the snapshot below:

[Screenshot: error after adding the SLF4J jar]

The code of the class is as follows:

import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;

public class Controller {

    public static void main(String[] args) throws Exception {

        String crawlStorageFolder = "/DA Project/Crawled Data";
        int numberOfCrawlers = 7;

        CrawlConfig config = new CrawlConfig();

        /*
         * You can set the location of the folder where you want your crawled
         * data to be stored.
         */
        config.setCrawlStorageFolder(crawlStorageFolder);

        /*
         * Be polite: make sure that we don't send more than 1 request per
         * second (1000 milliseconds between requests).
         */
        config.setPolitenessDelay(1000);

        /*
         * You can set the maximum crawl depth here. The default value is -1
         * for unlimited depth.
         */
        config.setMaxDepthOfCrawling(-1);

        /*
         * You can set the maximum number of pages to crawl. The default value
         * is -1 for an unlimited number of pages.
         */
        config.setMaxPagesToFetch(-1);

        /*
         * This config parameter can be used to make your crawl resumable
         * (meaning that you can resume the crawl from a previously
         * interrupted/crashed crawl). Note: if you enable the resuming
         * feature and want to start a fresh crawl, you need to delete the
         * contents of the root folder manually.
         */
        config.setResumableCrawling(false);

        PageFetcher pageFetcher = new PageFetcher(config);
        RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
        RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);

        try {
            CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);

            /*
             * For each crawl, you need to add some seed URLs. These are the
             * first URLs that are fetched; the crawler then starts following
             * links found in these pages.
             */
            controller.addSeed("http://www.consumercomplaints.in/?search=chevrolet");

            /*
             * Start the crawl. This is a blocking operation, meaning that your
             * code will reach the line after this only when crawling is
             * finished.
             */
            controller.start(MyCrawler.class, numberOfCrawlers);
        } catch (Exception e) {
            System.out.println("Caught Exception: " + e.getMessage());
            e.printStackTrace();
        }
    }
}
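
The MyCrawler class passed to controller.start(...) is not shown in the post. A minimal sketch of such a crawler, assuming crawler4j's standard WebCrawler base class (the shouldVisit signature differs between crawler4j versions), could look like this:

import java.util.regex.Pattern;

import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.parser.HtmlParseData;
import edu.uci.ics.crawler4j.url.WebURL;

public class MyCrawler extends WebCrawler {

    // Skip common binary/static resources.
    private static final Pattern FILTERS =
            Pattern.compile(".*(\\.(css|js|gif|jpe?g|png|pdf|zip))$");

    /*
     * Decide whether a discovered URL should be crawled. Note: older
     * crawler4j releases declare shouldVisit(WebURL url) instead of the
     * two-argument form used here.
     */
    @Override
    public boolean shouldVisit(Page referringPage, WebURL url) {
        String href = url.getURL().toLowerCase();
        return !FILTERS.matcher(href).matches()
                && href.startsWith("http://www.consumercomplaints.in/");
    }

    // Called for every page that has been fetched and parsed.
    @Override
    public void visit(Page page) {
        String url = page.getWebURL().getURL();
        System.out.println("Visited: " + url);

        if (page.getParseData() instanceof HtmlParseData) {
            HtmlParseData htmlParseData = (HtmlParseData) page.getParseData();
            String text = htmlParseData.getText();
            System.out.println("Text length: " + text.length());
        }
    }
}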


Any help would be appreciated.
Thank You!

Answer

I removed the SLF4J jar file, downloaded the logback 1.1.2 jar files, and added them to my project.

The link to the logback download page is: http://logback.qos.ch/download.html

Jars included are:

logback-access-1.1.2
logback-access-1.1.2-sources
logback-classic-1.1.2
logback-classic-1.1.2-sources
logback-core-1.1.2
logback-core-1.1.2-sources
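
logback-classic ships the StaticLoggerBinder that SLF4J looks for, so the NOP-logger warning goes away once it is on the classpath. As a quick sanity check (a minimal sketch; the class name is mine, not from the original post), obtaining a logger through SLF4J should now print through logback instead of being silently dropped:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingCheck {

    private static final Logger LOGGER = LoggerFactory.getLogger(LoggingCheck.class);

    public static void main(String[] args) {
        // With logback-classic on the classpath, this line is printed by logback;
        // without a binding, SLF4J falls back to the NOP logger and the
        // StaticLoggerBinder warning from the question appears instead.
        LOGGER.info("SLF4J binding is working");
    }
}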

Hope this benefits others. Thank you.
