S(cala) Crawler

Scala web crawling and scraping using fs2 streams


Install

libraryDependencies += "com.marekkadek" %% "scrawler" % "0.0.3"

The library cross-compiles for Scala 2.11 and 2.12.

Usage

Crawlers

You can create your own crawler by subclassing the Crawler class. Let's see what it looks like for a crawler whose effects (crawling the web) are captured by fs2.Task and which yields data only in the form of String. We'll make a crawler that follows every https link and gives us the URLs of the websites it visits.

class MyCrawler extends Crawler[Task, String](Seq(JsoupBrowser[Task])) {
  override protected def onDocument(document: Document): Stream[Task, Yield[String]] = {
    val url = YieldData(document.location)
    val followableLinks = document.root
      .select("a[href^='https://']")   // follow only links starting with https
      .toSeq
      .flatMap(_.attr("href"))         // get the href attribute from each link
      .map(Visit)                      // visit those links

    // first yield the page's URL as data, then continue by visiting the links
    Stream.emit(url) ++ Stream.emits(followableLinks)
  }
}

We are streaming actions such as YieldData and Visit, which are currently the only two allowed. Here's how Yield is defined:

sealed trait Yield[+A]
final case class YieldData[A](a: A) extends Yield[A]
final case class Visit(url: String) extends Yield[Nothing]

We can run the crawl either sequentially or in parallel.

val crawler = new MyCrawler

// crawl wikipedia sequentially and take 10 elements (URLs of visited websites)
val urls: Vector[String] = crawler.sequentialCrawl("https://wikipedia.org")
    .take(10).runLog.unsafeRun

// crawl wikipedia in parallel and take 10 elements (URLs of visited websites)
implicit val strategy: Strategy = Strategy.fromFixedDaemonPool(128)
val urls2: Vector[String] = crawler.parallelCrawl("https://wikipedia.org", maxConnections = 8)
    .take(10).runLog.unsafeRun

You can just as well pipe them into a file, Kafka, or anything else that plays nicely with fs2 :)
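
For instance, here is a minimal sketch (assuming the fs2 0.9-era Task API used above) that drains the crawl into a println effect instead of collecting results into a Vector; a file or Kafka sink would follow the same shape:

// A sketch: print each yielded URL as it arrives instead of collecting it.
// Replace the println effect with any other Task-based sink (file writer,
// Kafka producer, ...) to pipe the stream elsewhere.
val printCrawl: Task[Unit] =
  crawler.sequentialCrawl("https://wikipedia.org")
    .take(10)
    .evalMap(url => Task.delay(println(url)))
    .run

printCrawl.unsafeRun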

As seen in the MyCrawler example above, a Crawler takes the sequence of browsers to use during crawling. By default it randomly selects which browser to use; you can change this behaviour by overriding the pickBrowser method.

class MyCrawler extends Crawler[Task, String](Seq(JsoupBrowser[Task])) {
  override protected def onDocument(document: Document): Stream[Task, Yield[String]] = ???

  // picking a browser may be effectful
  override protected def pickBrowser(forUrl: String): Task[Browser[Task]] = ???
}
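
For illustration, here is a hedged sketch of a pickBrowser override that routes one (hypothetical) host through a proxied JsoupBrowser (using the ProxySettings shown in the Browsers section below) and everything else through a plain one; the host check and browser values are assumptions for the example, not part of the library:

class PickyCrawler extends Crawler[Task, String](Seq(JsoupBrowser[Task])) {
  // Hypothetical setup: one browser behind a proxy, one direct.
  private val proxied = JsoupBrowser[Task](ProxySettings.http("127.0.0.1", 8080))
  private val direct  = JsoupBrowser[Task]

  override protected def onDocument(document: Document): Stream[Task, Yield[String]] =
    Stream.emit(YieldData(document.location))

  // Picking a browser is effectful, so the chosen browser is lifted into Task.
  override protected def pickBrowser(forUrl: String): Task[Browser[Task]] =
    if (forUrl.contains("example.org")) Task.now(proxied)
    else Task.now(direct)
}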

Browsers

Any browser that implements the Browser trait can be used. Currently there is JsoupBrowser; an HtmlUnit-based browser is a work in progress.

To create a JsoupBrowser, use JsoupBrowser[Task] (or a different effect type if you're not using Task). It has several overloads; for example, you can also pass in a proxy, a user agent, and other settings:

val proxy = ProxySettings.http("122.193.14.106", 81)
val browser = JsoupBrowser[Task](proxy)
val browser2 = JsoupBrowser[Task](connectionTimeout = 5.seconds,
    userAgent = "Mozilla",
    validateTLSCertificates = false)
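
A configured browser is then simply passed to the Crawler constructor, as in the earlier examples; the class name below is only for illustration:

// A sketch: a crawler whose only browser is the proxied one built above.
class ProxiedCrawler extends Crawler[Task, String](Seq(browser)) {
  override protected def onDocument(document: Document): Stream[Task, Yield[String]] =
    Stream.emit(YieldData(document.location))
}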

Credits

Greatly inspired by the awesome [Rui's scala-scraper](https://github.com/ruippeixotog/scala-scraper) and Python's Scrapy. Thank you!
