
Crawl pages with pagination

This workflow shows how to extract article URLs from paginated overview pages (as typically found on news websites and blogs) using a recursive loop. The example uses the Ars Technica website, but the workflow can easily be adapted to your needs:

(1) Specify a start URL,
(2) specify an XPath expression for extracting the desired article URLs,
(3) specify an XPath for extracting the 'next' link.
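If you want to prototype the same extraction logic outside of KNIME, a minimal Python sketch of steps (1)-(3) using requests and lxml could look like the snippet below. The XPath expressions are illustrative assumptions only; the expressions configured in the workflow's XPath nodes (and Ars Technica's current markup) may differ.

import requests
from lxml import html

START_URL = "https://arstechnica.com/"                 # (1) start URL
ARTICLE_XPATH = "//article//h2/a/@href"                # (2) XPath for article URLs (assumed)
NEXT_XPATH = "//a[contains(@class, 'next')]/@href"     # (3) XPath for the 'next' link (assumed)

page = html.fromstring(requests.get(START_URL, timeout=30).text)
print(page.xpath(ARTICLE_XPATH))   # article links found on the start page
print(page.xpath(NEXT_XPATH))      # 'next' link that would feed the following iteration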

The loop then fetches each page, (1) extracts all article links, and (2) extracts the 'next' link from the pagination; this link is used for the next iteration. The iteration stops when
(a) the specified number of iterations has been reached, or (b) no more 'next' links can be extracted.
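As a rough analogue of the recursive loop, a plain Python loop with the same two stop conditions might look as follows (again only a sketch, reusing the placeholder URL and XPath expressions from the snippet above):

from urllib.parse import urljoin
import requests
from lxml import html

def crawl_paginated(start_url, article_xpath, next_xpath, max_pages=10):
    # Collect article URLs page by page until (a) max_pages is reached
    # or (b) no 'next' link can be extracted -- the same two stop
    # conditions as the workflow's recursive loop.
    collected, url = [], start_url
    for _ in range(max_pages):
        page = html.fromstring(requests.get(url, timeout=30).text)
        collected.extend(urljoin(url, href) for href in page.xpath(article_xpath))
        next_links = page.xpath(next_xpath)
        if not next_links:
            break
        url = urljoin(url, next_links[0])   # the 'next' link drives the next iteration
    return collected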

If this workflow does not execute properly, the page structure has probably been modified and you'll need to adjust the queries in the XPath nodes -- feel free to give us a heads-up at mail@palladian.ws

Workflow annotations on the canvas: specify initial URL; extract article links; specify the maximum number of pages to fetch (this should give 150 results with the workflow's default settings); extract 'next' link for pagination.

Nodes

Table Creator, HTTP Retriever, HTML Parser, XPath, Column Filter, Recursive Loop Start, Recursive Loop End, XPath, Column Filter, Column Rename, Missing Value

Extensions

Links