Releases · crwlrsoft/crawler
v2.0.0-beta.2
Added
- New methods `FileCache::prolong()` and `FileCache::prolongAll()` to allow prolonging the time to live for cached responses.
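A usage sketch (the constructor argument, the import path and the exact `prolong()`/`prolongAll()` parameters are assumptions, not taken from these release notes):

```php
use Crwlr\Crawler\Cache\FileCache; // namespace assumed

$cache = new FileCache(__DIR__ . '/cachedir');

// Assumed signature: prolong the TTL of one cached response, identified by its cache key.
$cache->prolong($cacheKey, 86400);

// Assumed signature: prolong the TTL of all cached responses at once.
$cache->prolongAll(86400);
```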
v2.0.0-beta
Changed
- BREAKING: Removed methods `BaseStep::addToResult()`, `BaseStep::addLaterToResult()`, `BaseStep::addsToOrCreatesResult()`, `BaseStep::createsResult()`, and `BaseStep::keepInputData()`. These methods were deprecated in v1.8.0 and should be replaced with `Step::keep()`, `Step::keepAs()`, `Step::keepFromInput()`, and `Step::keepInputAs()`.
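  A minimal before/after sketch (the step and selector are made up for illustration):

  ```php
  // v1.x, using the removed method:
  $crawler->addStep(Html::root()->extract(['title' => 'h1'])->addToResult(['title']));

  // v2, keeping output data instead:
  $crawler->addStep(Html::root()->extract(['title' => 'h1'])->keep('title'));
  ```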
- BREAKING: With the removal of the `addToResult()` method, the library no longer uses `toArrayForAddToResult()` methods on output objects. Instead, please use `toArrayForResult()`. Consequently, `RespondedRequest::toArrayForAddToResult()` has been renamed to `RespondedRequest::toArrayForResult()`.
- BREAKING: Removed the `result` and `addLaterToResult` properties from `Io` objects (`Input` and `Output`). These properties were part of the `addToResult` feature and are now removed. Instead, use the `keep` property, where kept data is added.
- BREAKING: The return type of the `Crawler::loader()` method no longer allows `array`. This means it's no longer possible to provide multiple loaders from the crawler. Instead, use the new functionality to directly provide a custom loader to a step, described below.
- BREAKING: Refactored the abstract `LoadingStep` class to a trait and removed the `LoadingStepInterface`. Loading steps should now extend the `Step` class and use the trait. As multiple loaders are no longer supported, the `addLoader` method was renamed to `setLoader`. Similarly, the methods `useLoader()` and `usesLoader()` for selecting loaders by key are removed. Now, you can directly provide a different loader to a single step using the trait's new `withLoader()` method (e.g., `Http::get()->withLoader($loader)`).
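  For example, to give a single step its own loader (how you construct that loader, e.g. the `$userAgent` below, depends on your setup and is only an assumption here):

  ```php
  use Crwlr\Crawler\Loader\Http\HttpLoader;
  use Crwlr\Crawler\Steps\Loading\Http;

  // Assumed: an HttpLoader can be built with just a user agent instance.
  $customLoader = new HttpLoader($userAgent);

  // Only this step uses the custom loader; all other steps keep the crawler's loader.
  $crawler->addStep(Http::get()->withLoader($customLoader));
  ```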
- BREAKING: Removed the `PaginatorInterface` to allow for better extensibility. The old `Crwlr\Crawler\Steps\Loading\Http\Paginators\AbstractPaginator` class has also been removed. Please use the newer, improved version `Crwlr\Crawler\Steps\Loading\Http\AbstractPaginator`. This newer version has also changed: the first argument `UriInterface $url` was removed from the `processLoaded()` method, as the URL is also part of the request (`Psr\Http\Message\RequestInterface`), which is now the first argument. Additionally, the default implementation of the `getNextRequest()` method was removed; child implementations must define this method themselves. If your custom paginator still has a `getNextUrl()` method, note that it is no longer needed by the library and will not be called. The `getNextRequest()` method now fulfills its original purpose.
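  A migration sketch for a custom paginator (apart from the `RequestInterface` first parameter of `processLoaded()`, the method signatures and imports below are assumptions based on this description, not copied from the library):

  ```php
  use Crwlr\Crawler\Loader\Http\Messages\RespondedRequest; // namespace assumed
  use Crwlr\Crawler\Steps\Loading\Http\AbstractPaginator;
  use Psr\Http\Message\RequestInterface;

  class MyPaginator extends AbstractPaginator
  {
      // The UriInterface $url parameter is gone; the request now comes first.
      public function processLoaded(RequestInterface $request, ?RespondedRequest $respondedRequest): void
      {
          // Inspect the loaded page and remember what the next request should be.
      }

      // No default implementation anymore: every child class must define this itself.
      public function getNextRequest(): ?RequestInterface
      {
          // Return the next request, or null when there are no further pages.
          return null;
      }
  }
  ```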
- BREAKING: Removed methods from `HttpLoader`:
  - `$loader->setHeadlessBrowserOptions()` => use `$loader->browser()->setOptions()` instead
  - `$loader->addHeadlessBrowserOptions()` => use `$loader->browser()->addOptions()` instead
  - `$loader->setChromeExecutable()` => use `$loader->browser()->setExecutable()` instead
  - `$loader->browserHelper()` => use `$loader->browser()` instead
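  In code, the migration looks roughly like this (the option values and executable path are placeholders):

  ```php
  // Before (v1.x):
  $loader->setHeadlessBrowserOptions(['windowSize' => [1920, 1080]]);
  $loader->setChromeExecutable('/usr/bin/chromium');

  // After (v2), via the helper returned by browser():
  $loader->browser()->setOptions(['windowSize' => [1920, 1080]]);
  $loader->browser()->setExecutable('/usr/bin/chromium');
  ```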
- BREAKING: Removed method `RespondedRequest::cacheKeyFromRequest()`. Use `RequestKey::from()` instead.
- BREAKING: The `HttpLoader::retryCachedErrorResponses()` method now returns an instance of the new `Crwlr\Crawler\Loader\Http\Cache\RetryManager` class. This class provides the methods `only()` and `except()` to restrict retries to specific HTTP response status codes. Previously, this method returned the `HttpLoader` itself (`$this`), so if you're using it in a chain and calling other loader methods after it, you will need to refactor your code.
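  A sketch of the new call style (whether `only()` and `except()` take an array of status codes, as shown here, is an assumption):

  ```php
  // Retry only cached responses with these error status codes:
  $loader->retryCachedErrorResponses()->only([429, 503]);

  // Or retry all cached error responses except 404s:
  $loader->retryCachedErrorResponses()->except([404]);
  ```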
- BREAKING: Removed the `Microseconds` class from this package. It has been moved to the `crwlr/utils` package, which you can use instead.
v1.10.0
Added
- URL refiners: `UrlRefiner::withScheme()`, `UrlRefiner::withHost()`, `UrlRefiner::withPort()`, `UrlRefiner::withoutPort()`, `UrlRefiner::withPath()`, `UrlRefiner::withQuery()`, `UrlRefiner::withoutQuery()`, `UrlRefiner::withFragment()` and `UrlRefiner::withoutFragment()`.
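  A sketch of how a refiner could be applied (assuming URL refiners are attached to a step's output via `refineOutput()`, like the existing string refiners; the class import is also an assumption):

  ```php
  use Crwlr\Crawler\Steps\Html;
  use Crwlr\Crawler\Steps\Refiners\UrlRefiner; // namespace assumed

  $crawler->addStep(
      Html::getLinks('a.product')
          ->refineOutput(UrlRefiner::withScheme('https'))
          ->refineOutput(UrlRefiner::withoutQuery())
  );
  ```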
- New paginator stop rules `PaginatorStopRules::contains()` and `PaginatorStopRules::notContains()`.
- Static method `UserAgent::mozilla5CompatibleBrowser()` to get a `UserAgent` instance with the user agent string `Mozilla/5.0 (compatible)`, and also the new method `withMozilla5CompatibleUserAgent()` in the `AnonymousHttpCrawlerBuilder` that you can use like this: `HttpCrawler::make()->withMozilla5CompatibleUserAgent()`.
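For example (the steps added below are just placeholders):

```php
use Crwlr\Crawler\HttpCrawler;
use Crwlr\Crawler\Steps\Html;
use Crwlr\Crawler\Steps\Loading\Http;

$crawler = HttpCrawler::make()->withMozilla5CompatibleUserAgent();

$crawler->input('https://www.example.com');
$crawler->addStep(Http::get());
$crawler->addStep(Html::root()->extract(['title' => 'h1'])->keep());

$crawler->runAndTraverse();
```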
v1.9.5
Fixed
- Prevent PHP warnings when an HTTP response includes a `Content-Type: application/x-gzip` header, but the content is not actually compressed. This issue also occurred with cached responses, because compressed content is decoded during caching. Upon retrieval from the cache, the header indicated compression, but the content was already decoded.
v1.9.4
Fixed
- When using `HttpLoader::cacheOnlyWhereUrl()` to restrict caching, the filter rule is now applied not only when adding newly loaded responses to the cache, but also when using cached responses. Example: a response for `https://www.example.com/foo` is already available in the cache, but `$loader->cacheOnlyWhereUrl(Filter::urlPathStartsWith('/bar/'))` was called; in that case, the cached response is not used.
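As a short illustration of the fixed behavior (the `Filter` import path is from memory and may need adjusting):

```php
use Crwlr\Crawler\Steps\Filters\Filter;

// Cache (and reuse from the cache) only responses whose URL path starts with /bar/.
$loader->cacheOnlyWhereUrl(Filter::urlPathStartsWith('/bar/'));

// A response for https://www.example.com/foo that is already in the cache
// is therefore not used; that page is loaded again.
```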
v1.9.3
Fixed
- Add `HttpLoader::browser()` as a replacement for `HttpLoader::browserHelper()` and deprecate the `browserHelper()` method. It's an alias, added just because it reads a little better: `$loader->browser()->xyz()` vs. `$loader->browserHelper()->xyz()`. `HttpLoader::browserHelper()` will be removed in v2.0.
- Also deprecate `HttpLoader::setHeadlessBrowserOptions()`, `HttpLoader::addHeadlessBrowserOptions()` and `HttpLoader::setChromeExecutable()`. Use `$loader->browser()->setOptions()`, `$loader->browser()->addOptions()` and `$loader->browser()->setExecutable()` instead.
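In code (the option values are placeholders):

```php
// Previously:
$loader->browserHelper()->setOptions(['windowSize' => [1920, 1080]]);

// Now (browserHelper() will be removed in v2.0):
$loader->browser()->setOptions(['windowSize' => [1920, 1080]]);
```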
v1.9.2
v1.9.1
v1.9.0
Added
- New methods `HeadlessBrowserLoaderHelper::setTimeout()` and `HeadlessBrowserLoaderHelper::waitForNavigationEvent()` to allow defining the timeout for the headless Chrome browser in milliseconds (default 30000 = 30 seconds) and the navigation event (`load` (default), `DOMContentLoaded`, `firstMeaningfulPaint`, `networkIdle`, etc.) to wait for when loading a URL.
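For example (assuming the navigation event is passed as a plain string; in this version the helper is reached via `browserHelper()`):

```php
// Allow up to 60 seconds and wait for the networkIdle event
// instead of the defaults (30 seconds, load event).
$loader->browserHelper()->setTimeout(60000);
$loader->browserHelper()->waitForNavigationEvent('networkIdle');
```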
v1.8.0
Added
- New methods `Step::keep()` and `Step::keepAs()`, as well as `Step::keepFromInput()` and `Step::keepInputAs()`, as alternatives to `Step::addToResult()` (or `Step::addLaterToResult()`). The `keep()` method can be called without any argument to keep all of the output data, with a string to keep a certain key, or with an array to keep a list of keys. If the step yields scalar value outputs (not an associative array or object with keys), you need to use the `keepAs()` method with the key you want the output value to have in the kept data. The methods `keepFromInput()` and `keepInputAs()` work the same, but use the input (not the output) that the step receives. They are most likely only needed with a first step, to keep data from initial inputs (or in a sub crawler, see below). Kept properties can also be accessed with the `Step::useInputKey()` method, so you can easily reuse properties from multiple steps ago as input.
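  A few illustrative calls (selectors and keys are made up):

  ```php
  // Keep all of the output data:
  Html::root()->extract(['title' => 'h1', 'price' => '.price'])->keep();

  // Keep a single key, or a list of keys:
  Html::root()->extract(['title' => 'h1', 'price' => '.price'])->keep('title');
  Html::root()->extract(['title' => 'h1', 'price' => '.price'])->keep(['title', 'price']);

  // For a step with scalar outputs, define the key the value should get in the kept data:
  Html::getLink('a.detail')->keepAs('detailUrl');
  ```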
- New method `Step::outputType()` with a default implementation returning `StepOutputType::Mixed`. Please consider implementing this method yourself in all your custom steps, because it is going to be required in v2 of the library. It allows detecting (potential) problems in crawling procedures immediately when starting a run, instead of failing after already running for a while.
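  In a custom step this can look like the following (here simply declaring mixed output; the `StepOutputType` import path is assumed):

  ```php
  use Crwlr\Crawler\Steps\Step;
  use Crwlr\Crawler\Steps\StepOutputType; // namespace assumed

  class MyCustomStep extends Step
  {
      public function outputType(): StepOutputType
      {
          return StepOutputType::Mixed;
      }

      protected function invoke(mixed $input): \Generator
      {
          yield $input;
      }
  }
  ```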
- New method `Step::subCrawlerFor()`, allowing you to fill output properties from an actual, full child crawling procedure. As the first argument, you give it a key from the step's output that the child crawler uses as input(s). As the second argument, you need to provide a `Closure` that receives a clone of the current `Crawler` without steps and with initial inputs set from the current output. In the `Closure` you then define the crawling procedure by adding steps as you're used to, and return it. This makes it possible to achieve nested output data, scraped from different (sub-)pages, in a more flexible and less complicated way than with the usual linear crawling procedure and `Step::addToResult()`.
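For example, a sub crawler can be used roughly like this (selectors and keys are made up, extracting the actual link URL is simplified, and the exact chaining is illustrative):

```php
use Crwlr\Crawler\Crawler;
use Crwlr\Crawler\Steps\Html;
use Crwlr\Crawler\Steps\Loading\Http;

$crawler->addStep(
    Html::each('.product')
        ->extract(['title' => 'h2', 'detailUrl' => 'a.detail']) // simplified selectors
        ->subCrawlerFor('detailUrl', function (Crawler $subCrawler) {
            // $subCrawler is a clone of the current crawler without steps,
            // using the 'detailUrl' value as its initial input.
            $subCrawler->addStep(Http::get());
            $subCrawler->addStep(Html::root()->extract(['price' => '.price'])->keep());

            return $subCrawler;
        })
        ->keep()
);
```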
Deprecated
- The `Step::addToResult()`, `Step::addLaterToResult()` and `Step::keepInputData()` methods. Instead, please use the new keep methods. This can mean some migration work for v2, because the add-to-result methods in particular are a pretty central piece of functionality, but the new "keep" methodology (plus the new sub crawler feature) will make a lot of things easier and less complex, and the library will most likely work more efficiently in v2.
Fixed
- When a cache file was generated with compression and you're trying to read it with a `FileCache` instance that doesn't have compression enabled, it now also works: when unserializing the file content fails, it tries decoding the string first before unserializing it again.