
Train-VGG16-binary

Train VGG16 for binary image classification
This workflow trains a deep learning model for binary image classification from a classification table as generated by the qualitative annotation plugin in Fiji, with a single class column (single class - button plugin). It is adapted from a previous example workflow by Christian Dietz: https://kni.me/w/EUWPBdnVuIxuFMGf

# Input table
The table should be as returned by the qualitative annotation plugin single class (button) in Fiji, i.e. with the following columns:
RowIndex, Folder, Image, Category, Comment
Folder and Image can have different column names; you can select them in the Get Image Path node.
Since this workflow is for binary classification, only 2 categories are allowed in the annotation table. For multi-class classification, please use the DL-VGG16-MultiClass workflow.
NB: The annotated dataset should have a similar number of images for each category! If the classes are imbalanced (many more images for one class than for the other), the model might be biased toward the more represented class.

# Input images
The images should be single-channel grayscale images, of any bit type and size. The workflow can be adapted for RGB images (remove the RGB conversion pre-processing).

# Workflow
The workflow opens the images and performs the following pre-processing (see the sketch below):
- class names are converted to integers (Table pre-processing)
- downscaling (size to define by right-clicking the pre-processing node, default 224x224 pixels)
- intensity normalisation to the range 0-1
- conversion to pseudo-RGB by duplicating the gray channel
- splitting of the annotated set into training/validation and test fractions
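The following is a minimal Python sketch of equivalent image pre-processing outside KNIME, assuming Pillow and NumPy (the function name and file path are hypothetical; the KNIME nodes perform these steps internally):

```python
import numpy as np
from PIL import Image

def preprocess(path, size=(224, 224)):
    """Pre-process one grayscale image as the workflow does."""
    img = Image.open(path).resize(size, Image.BILINEAR)        # downscale
    arr = np.asarray(img, dtype=np.float32)
    arr = (arr - arr.min()) / (arr.max() - arr.min() + 1e-8)   # normalise to [0, 1]
    return np.stack([arr, arr, arr], axis=-1)                  # pseudo-RGB: duplicate gray channel

# x = preprocess("image_001.tif")  # -> shape (224, 224, 3), values in [0, 1]
```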
The workflow then downloads a pretrained model base (VGG16) and adds fresh fully-connected classification layers, to be trained. The VGG16 base is frozen; only the new classification layers are trained.
Settings for the training (number of epochs, batch size...) can be accessed in the Keras Network Learner node. The plot of accuracy and loss can be observed in real time during the execution of the training (and after execution too) by right-clicking on the Keras Network Learner node > View: Learning monitor.
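A minimal Keras sketch of the equivalent architecture, assuming the Keras 2.2.4 API pinned under Requirements below (the size of the dense top layer is an assumption; the workflow's own layers may differ):

```python
from keras.applications import VGG16
from keras.layers import Dense, Flatten
from keras.models import Model

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False                      # freeze the pretrained VGG16 base

x = Flatten()(base.output)
x = Dense(256, activation="relu")(x)             # fresh fully-connected layer (size assumed)
out = Dense(1, activation="sigmoid")(x)          # probability of the category of index 1

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```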
Once training has completed, the network is tested on the test fraction and the results can be viewed as a confusion matrix in the Scorer node.
The model can be saved for prediction on new images (see the dedicated prediction workflow).

A few more theoretical details: the model output corresponds to the probability for the category of index 1 (the second category when sorted in alphabetical order). If the model output (i.e. the probability) is > 0.5, then category 1 is predicted. Otherwise (probability < 0.5), category 0 is predicted and the reported probability is 1 - model output. The workflow is designed to do these calculations and to return the predicted category names and probabilities directly.
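In code, decoding the model output into a category name and probability could look like this (the category names are hypothetical; the workflow's Format output step does this for you):

```python
categories = ["control", "treated"]    # index 0 and index 1, in alphabetical order

def decode(p):
    """Map the sigmoid output p (probability of category 1) to a prediction."""
    if p > 0.5:
        return categories[1], p        # category 1, with probability p
    return categories[0], 1 - p        # category 0, with probability 1 - p

# decode(0.8) -> ("treated", 0.8);  decode(0.2) -> ("control", 0.8)
```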
# Online Documentation
https://github.com/LauLauThom/Fiji-QualiAnnotations/tree/master/KNIMEworkflows/DeepLearning-Classification

# Example dataset
This workflow can be used with the example dataset uploaded on Zenodo, see doi.org/10.5281/zenodo.3997728

# Requirements
- KNIME
On the KNIME side, the extensions are installed automatically, with the exception of the KNIME Image Processing - Deep Learning Extension. Install it via File > Install KNIME Extensions or directly via https://hub.knime.com/BioML-Konstanz/extensions/org.knime.knip.dl.feature/latest
- Python
The best option is to let KNIME install a pre-configured environment, since updating an existing environment with TensorFlow usually fails. To do so, go to File > Preferences > KNIME > Python Deep Learning and select "create a new environment". The environment creation takes a while, so be patient. The training will run on GPU automatically if the GPU versions of Keras and TensorFlow are installed.
- Python 3.6.10
- Keras 2.2.4
- TensorFlow 1.12.0 (not higher, otherwise the Keras trainer fails)
- pandas 0.23.5 max (for KNIME)

Workflow canvas overview: Read classification table → Get image path → Table pre-processing → Image Reader (Table) → Image Pre-Processing (resize, default 224x224; normalise intensity to [0,1]; gray to RGB) → Split dataset → Keras Network Learner (training) and Keras Network Executor (testing) → Format output → Scorer (JavaScript) → Export model, with Image Viewer nodes to inspect the CNN input and the images with class vs prediction.
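For completeness, the Export model step saves the trained network as an h5 file together with the class names as a text file (as noted on the workflow canvas). Continuing the sketches above, with hypothetical file names:

```python
model.save("vgg16_binary.h5")          # architecture + trained weights
with open("class_names.txt", "w") as f:
    f.write("\n".join(categories))     # preserve the category/index matching

# The dedicated prediction workflow can then reload them, e.g.:
#   from keras.models import load_model
#   model = load_model("vgg16_binary.h5")
#   categories = open("class_names.txt").read().splitlines()
```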
