Multi-Template Matching

Object recognition with Multi-Template-Matching

Perform object recognition in a list of images, using a set of user-provided template image(s).
Requires a Python 3 environment with the following packages:
- Multi-Template-Matching (MTM) 1.4
- OpenCV 3.4.2
- Scikit-Image 0.15
- Numpy
- Scipy

## PreProcess Template metanode ##

Right-clicking this node allows setting some parameters for template pre-processing.

# Flipping
The template can be vertically and/or horizontally flipped (mirrored) to improve the chance of detecting the object(s). If both vertical and horizontal are selected, the workflow generates 2 additional templates: one vertically flipped and one horizontally flipped. It will not generate a template flipped both horizontally and vertically, because this amounts to a 180° rotation, which can be performed in the dedicated rotation section.

# Rotating the template
The original and flipped templates can also be rotated to generate extra templates to test for matching. The angles are specified in degrees between -359° and 359° (trigonometric convention for the orientation) and separated by commas. The extra background generated by rotation is filled with the nearest pixel value from the original template.
NB: increasing the number of rotations (especially combined with flipping) increases the computation time!

## Match Template metanode ##

The parameters available by right-clicking the Match Template metanode are explained below, for the detection of N templates in one image.

# Choice of score
Choice of metric for computing the similarity between the template and an image patch. By default the 0-mean cross-correlation is used; it is very robust to changes of illumination.

# Max number N of templates expected per image
A given object can be present several times in one image (rotated, mirrored...). The workflow can be configured to find up to a given number of objects per image. If there are more objects than N, the extra ones are not counted; if there are fewer objects than N, the workflow might return additional non-significant detections (if they pass the score threshold). The workflow can also return fewer than N detections, when detections do not pass the score or overlap threshold. If N = 0, the workflow returns all detections that pass the score and overlap thresholds.
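The flipping and rotation pre-processing described in the PreProcess Template section above can be sketched as follows. This is an illustrative sketch, not the metanode's actual code: the function name is ours, and `scipy.ndimage.rotate` with `mode='nearest'` is used here as a stand-in that reproduces the "nearest pixel value" background fill.

```python
import numpy as np
from scipy.ndimage import rotate

def generate_templates(template, flip_v=False, flip_h=False, angles=()):
    """Return the list of template variants to search, mimicking the
    PreProcess Template metanode: optional vertical/horizontal flips,
    then rotation of every variant by the listed angles (in degrees)."""
    variants = [template]
    if flip_v:
        variants.append(np.flipud(template))  # vertical flip (mirror top/bottom)
    if flip_h:
        variants.append(np.fliplr(template))  # horizontal flip (mirror left/right)
    # No flip_v + flip_h combination: that would be a 180 deg rotation,
    # which belongs in the rotation angles instead.
    rotated = []
    for t in variants:
        for angle in angles:
            # reshape=True enlarges the canvas to hold the rotated template;
            # mode='nearest' fills the extra background with the nearest
            # pixel value of the original template.
            rotated.append(rotate(t, angle, reshape=True, mode='nearest', order=1))
    return variants + rotated

# Example: one 3x4 template, both flips, rotations of +/-45 degrees
tpl = np.arange(12, dtype=float).reshape(3, 4)
templates = generate_templates(tpl, flip_v=True, flip_h=True, angles=(45, -45))
print(len(templates))  # 3 unrotated variants + 3*2 rotations = 9
```

As the note above warns, the number of templates (and hence the computation time) grows multiplicatively with flips and rotation angles.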
NB: with N = 0, the computation time greatly increases, since the overlap must be computed for every pair of detections. It is thus good practice to use N as an approximate limit for the number of detections expected.

# Max overlap of bounding boxes
Each detection is associated with a similarity score and with a bounding box whose dimensions are identical to those of the template used for that search. For the detections that passed the score threshold, we perform a Non-Maxima Suppression (NMS) to prevent redundant detections. To do so, we compute the overlap for each pair of bounding boxes, in practice the ratio Intersection/Union (Intersection over Union, or IoU). If this ratio is above the max overlap, the bounding boxes match very close regions in the image, which are probably redundant detections; in this case we discard the lower-score bounding box. This is repeated, starting from the highest-score detections, until we have collected N non-overlapping detections or until there are no more detections to evaluate.

Good practices:
- Use a max number N of expected hits: this reduces the number of pairwise overlap comparisons.
- Keep a bit of background around the template, since the areas appearing after rotation are filled with the pixels at the edge of the original template.
- Properly center the subject in the template image, since the rotation of the template image is done around its center.

Documentation
See the online wiki for further documentation:
https://github.com/LauLauThom/MultiTemplateMatching/wiki

Notice
This workflow uses the template matching function from OpenCV
(https://www.docs.opencv.org/2.4/doc/tutorials/imgproc/histograms/template_matching/template_matching.html)
with KNIME 4.0.0 and Python 3.6.6.

Author
Laurent THOMAS (l.thomas(at)acquifer.de)
PhD student at the medical faculty of Heidelberg
Marie Skłodowska-Curie Actions, PhD program ImageInLife
and employee of Acquifer.
ACQUIFER is a division of DITABIS Digital Biomedical Imaging Systems AG
Freiburger Str. 3
75179 Pforzheim, Germany

The List Files or Image Reader nodes can be exchanged (simply plug the replacement into the PreProcesscolumn metanode).

######## TEMPLATE(S) MATCHING (with possible flipping, rotation of the template) ########
The template matching looks for the template(s) given in the top image reader in each target image, by computing a similarity score for each location in the image. It then returns up to N locations with the best scores.
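The IoU-based Non-Maxima Suppression described in the "Max overlap of bounding boxes" section above can be sketched as a minimal pure-Python illustration (function and variable names are ours, not the workflow's; boxes are given as (x, y, width, height)):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x, y, width, height)."""
    xa, ya, wa, ha = box_a
    xb, yb, wb, hb = box_b
    # Dimensions of the overlap rectangle (zero if the boxes are disjoint)
    inter_w = max(0, min(xa + wa, xb + wb) - max(xa, xb))
    inter_h = max(0, min(ya + ha, yb + hb) - max(ya, yb))
    inter = inter_w * inter_h
    union = wa * ha + wb * hb - inter
    return inter / union if union else 0.0

def nms(detections, max_overlap=0.25, n_objects=float("inf")):
    """Keep up to n_objects detections, walking from the highest score down
    and discarding any box whose IoU with an already-kept box exceeds
    max_overlap. detections: list of (score, box) pairs."""
    kept = []
    for score, box in sorted(detections, key=lambda d: d[0], reverse=True):
        if len(kept) >= n_objects:
            break
        if all(iou(box, kept_box) <= max_overlap for _, kept_box in kept):
            kept.append((score, box))
    return kept

hits = [(0.90, (0, 0, 10, 10)),    # best hit, always kept
        (0.80, (5, 0, 10, 10)),    # IoU with best hit = 1/3 > 0.25 -> dropped
        (0.70, (50, 50, 10, 10))]  # far away -> kept
print(nms(hits, max_overlap=0.25))  # keeps the 0.90 and 0.70 hits
```

Note how, exactly as described above, raising `n_objects` or setting it to infinity (the N = 0 case) forces overlap computations over many more pairs.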
The number N is set by the user.

Template matching is defined for gray images only, i.e. if RGB images are provided they are first converted to a single-channel gray image as part of the pre-processing. The result of the template matching is a correlation map, in which pixel intensities correspond to the probability of the template being located at that position in the image (not displayed). When a single object is expected, the global minimum/maximum (depending on the score) predicts the location of the object in the image.

When multiple objects are expected, the algorithm first detects a set of local extrema in each score map:
- local maxima above the score threshold for correlation scores
- local minima below the score threshold for difference scores

Each hit is associated with a bounding box corresponding to the size of the template used for that search. The algorithm then performs a Non-Maxima Suppression (NMS) to keep up to the N best bounding boxes that do not overlap by more than a given threshold: if the overlap is larger than the threshold, the 2 bounding boxes can be considered to match the same object, hence the lower-score box is discarded. The result can be visualised as a mask of the bounding boxes overlaid on the image.

About the template(s)...
In practice, the different templates can be, for instance, the same structure at different development stages or WT/mutant versions, and the script returns the one that best fits. The input templates can be flipped and/or rotated within the script (right-click on the PreProcess metanode). If rotation is checked, every input template plus its eventual flipped versions are rotated and searched (this increases the computation time).
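The local-extrema detection on the score map described above can be sketched with `scipy.ndimage.maximum_filter` (an illustrative stand-in for the workflow's own peak detection; the function name is ours):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def find_peaks(score_map, score_threshold=0.5, size=3):
    """Return (row, col) coordinates of local maxima above score_threshold.
    A pixel is a local maximum if it equals the maximum of its size x size
    neighbourhood. For difference scores, negate the map and the threshold
    to look for local minima instead."""
    local_max = score_map == maximum_filter(score_map, size=size)
    peaks = local_max & (score_map > score_threshold)
    return [(int(r), int(c)) for r, c in zip(*np.nonzero(peaks))]

# Toy 5x5 correlation map with two clear peaks
score_map = np.zeros((5, 5))
score_map[1, 1] = 0.9
score_map[3, 4] = 0.7
print(find_peaks(score_map, score_threshold=0.5))  # [(1, 1), (3, 4)]
```

Each returned coordinate would then be assigned a bounding box of the template's size, and the NMS step described earlier would prune the overlapping ones.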
Workflow annotations:
- View the list of templates that will be searched (check that the rotation is properly done).
- Give the template(s) to look for.
- Give the folder containing the images + a pattern for the filenames.
- Open the list of images HERE: see images.
- View the overlay of bounding boxes + images.
- Get the template names.
- Give the images in which to search for the template(s).
- Right-click to set flip, rotation... If Flip AND Rotation, the flipped template(s) are also rotated.
- Right-click to set the parameters for detection and Non-Maxima Suppression. Top output: Image + Mask; bottom output: List of hits.
- View the list of detections.

Nodes used: Image Viewer, Image Reader, List Files, Image Reader (Table), Interactive Segmentation View, Image Properties, Image Reader, PreProcess Template, Multi-Template-Matching, Table View.
