This is the first workflow in the PubChem Big Data story.
In the top part of the workflow, we download the assay data from the PubChem database using its API and upload it to a specified S3 bucket on AWS, one file per assay/experiment (AID).
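The workflow performs this step with KNIME nodes; as a minimal sketch of the same idea in Python, the snippet below builds a PubChem PUG REST URL for one assay's data table and fetches it as CSV. The AID values, bucket name, and S3 key layout are illustrative assumptions, not taken from the workflow.

```python
import urllib.request

PUG_REST = "https://pubchem.ncbi.nlm.nih.gov/rest/pug"

def assay_csv_url(aid: int) -> str:
    """Build the PUG REST URL for one assay's data table as CSV."""
    return f"{PUG_REST}/assay/aid/{aid}/CSV"

def download_assay(aid: int) -> bytes:
    """Fetch one assay (AID) from PubChem as raw CSV bytes."""
    with urllib.request.urlopen(assay_csv_url(aid)) as resp:
        return resp.read()

# Uploading one file per AID to a (hypothetical) S3 bucket could then
# be done with boto3, roughly like this:
# import boto3
# s3 = boto3.client("s3")
# for aid in [1000, 1001]:
#     s3.put_object(Bucket="my-pubchem-bucket",
#                   Key=f"assays/AID_{aid}.csv",
#                   Body=download_assay(aid))
```

In the actual workflow, the same roles are played by the PubChem download nodes and the S3 connection configured via the AWS Authentication component.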
In the bottom part, we clean up the assay data using the KNIME Extension for Apache Spark and store the cleaned-up files on AWS.
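In the workflow this cleanup runs on Spark via Livy; to illustrate the kind of per-record cleanup involved, here is a plain-Python sketch. The column names and the specific rules (drop rows with a missing activity outcome, deduplicate by substance ID) are assumptions for illustration, not the workflow's exact logic.

```python
def clean_assay_rows(rows):
    """Clean a list of assay records (dicts parsed from one AID's CSV):
    drop rows with a missing activity outcome, then deduplicate by
    substance ID, keeping the first occurrence."""
    seen = set()
    cleaned = []
    for row in rows:
        # Skip records without an activity outcome.
        if row.get("PUBCHEM_ACTIVITY_OUTCOME") in (None, ""):
            continue
        # Keep only the first record per substance ID.
        sid = row.get("PUBCHEM_SID")
        if sid in seen:
            continue
        seen.add(sid)
        cleaned.append(row)
    return cleaned

rows = [
    {"PUBCHEM_SID": "1", "PUBCHEM_ACTIVITY_OUTCOME": "Active"},
    {"PUBCHEM_SID": "1", "PUBCHEM_ACTIVITY_OUTCOME": "Active"},   # duplicate
    {"PUBCHEM_SID": "2", "PUBCHEM_ACTIVITY_OUTCOME": ""},         # missing outcome
    {"PUBCHEM_SID": "3", "PUBCHEM_ACTIVITY_OUTCOME": "Inactive"},
]
# clean_assay_rows(rows) keeps the records for SIDs "1" and "3".
```

At PubChem scale the same filter-and-deduplicate logic would run as Spark DataFrame operations inside the Livy-backed Spark context rather than in local Python.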
The AWS Authentication component, the Paths to Livy and S3 component, and the Create Spark Context (Livy) node require configuration.
To use this workflow in KNIME, download it via the Download Workflow link and open it in KNIME.