This workflow shows how article URLs can be extracted from paginated overview pages (as commonly found on news websites and blogs) using a recursive loop. The example uses the Ars Technica website, but the workflow can easily be adapted to your needs:
(1) Specify a start URL,
(2) specify an XPath expression for extracting the desired article URLs,
(3) specify an XPath for extracting the 'next' link.
The loop fetches one page per iteration, (1) extracts all article links and (2) extracts the 'next' link from the pagination; this link is then used as input for the next iteration. The iteration stops when
(a) the specified number of iterations has been reached, or (b) no more 'next' links can be extracted.
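The loop above can be sketched outside of KNIME as well. The following Python snippet is a minimal illustration of the same idea, not the workflow itself: the XPath expressions and the `fetch` callback are hypothetical placeholders, and it uses ElementTree's limited XPath subset for simplicity (a real crawler would fetch pages over HTTP and likely use a full XPath engine such as lxml):

```python
import xml.etree.ElementTree as ET

def collect_article_urls(start_url, article_xpath, next_xpath,
                         fetch, max_iterations=10):
    """Follow 'next' links, collecting article URLs from each page.

    `fetch` is a placeholder callable mapping a URL to page markup;
    the XPath arguments must be adjusted to the target site's structure.
    """
    articles = []
    url = start_url
    for _ in range(max_iterations):            # stop condition (a)
        root = ET.fromstring(fetch(url))
        # (1) extract all article links on the current page
        articles += [a.get("href") for a in root.findall(article_xpath)]
        # (2) extract the 'next' link for the following iteration
        nxt = root.find(next_xpath)
        if nxt is None:                        # stop condition (b)
            break
        url = nxt.get("href")
    return articles
```

As in the KNIME workflow, everything site-specific lives in the two XPath expressions, so adapting the sketch to another site only means swapping those queries.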
If this workflow does not execute properly, the page structure has probably been modified and you will need to adjust the queries in the XPath nodes. Feel free to give us a heads up at email@example.com
To use this workflow in KNIME, download it from the link below and open it in KNIME.