TensorFlow Network Executor

Streamable
KNIME Deep Learning - TensorFlow Integration version 4.3.0.v202012011122 by KNIME AG, Zurich, Switzerland

This node executes a TensorFlow deep learning network on a compatible external back end that can be selected by the user.

Options

General Settings

Back end
The deep learning back end which is used to execute the input network for the given input data.
Input batch size
The number of rows that are processed at a time.
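The effect of the batch size can be sketched as follows — a minimal illustration (not the node's internal implementation), assuming a table of 10 rows and a batch size of 4:

```python
# Illustrative only: how an input table is split into batches of rows.
def batches(rows, batch_size):
    """Yield consecutive slices of at most batch_size rows."""
    for start in range(0, len(rows), batch_size):
        yield rows[start:start + batch_size]

# 10 rows with batch size 4 -> batches of 4, 4, and 2 rows.
sizes = [len(b) for b in batches(list(range(10)), 4)]
print(sizes)  # [4, 4, 2]
```

A larger batch size generally means fewer, larger transfers to the back end at the cost of higher memory use per step.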

Inputs

Conversion
The converter that is used to transform the selected input columns into a format that is accepted by the respective network input specification.
Input columns
The table columns that are part of the respective network input. The availability of a column depends on the currently selected input converter.

Outputs

Conversion
The converter that is used to transform the network output into table columns.
Output columns prefix
The prefix that is used to distinguish between the columns of the different outputs.

GPU Configuration

Visible devices list
A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. For example, if TensorFlow can see 8 GPU devices in the process, and one wanted to map visible GPU devices 5 and 3 as "/device:GPU:0", and "/device:GPU:1", then one would specify this field as "5,3". This field is similar in spirit to the CUDA_VISIBLE_DEVICES environment variable, except it applies to the visible GPU devices in the process and must list only valid GPU indices.

NOTE:
The GPU driver provides the process with the visible GPUs in an order which is not guaranteed to have any correlation to the *physical* GPU id in the machine. This field is used for remapping "visible" to "virtual", which means this operates only after the process starts. You should set the environment variable CUDA_VISIBLE_DEVICES prior to starting KNIME to ensure the device visibility and order (see CUDA Environment Variables).
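Setting the environment variable before launching KNIME can be sketched as below; the GPU ids 5 and 3 are illustrative:

```shell
# Restrict CUDA enumeration to physical GPUs 5 and 3, in that order,
# before KNIME starts. The node's "Visible devices list" then indexes
# into this restricted, ordered set.
export CUDA_VISIBLE_DEVICES=5,3
echo "CUDA_VISIBLE_DEVICES=$CUDA_VISIBLE_DEVICES"
```

With this in place, "/device:GPU:0" corresponds to physical GPU 5 and "/device:GPU:1" to physical GPU 3 for the whole KNIME process.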
Per process GPU memory fraction
Fraction of the available GPU memory to allocate for each process. A value of 1 allocates all of the GPU memory; 0.5 lets the process allocate up to ~50% of the available GPU memory.
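This fraction (exposed in TensorFlow as the GPUOptions field per_process_gpu_memory_fraction) translates into a per-process byte cap. A minimal sketch of the arithmetic, assuming a hypothetical GPU with 8 GiB of memory:

```python
# Illustrative arithmetic only: how the memory fraction caps allocation,
# assuming a hypothetical 8 GiB card.
GPU_MEMORY_BYTES = 8 * 1024**3  # assumed card size

def memory_cap(fraction: float) -> int:
    """Bytes a process may allocate for a given memory fraction."""
    if not 0.0 < fraction <= 1.0:
        raise ValueError("fraction must be in (0, 1]")
    return int(GPU_MEMORY_BYTES * fraction)

print(memory_cap(1.0))  # full card: 8589934592 bytes
print(memory_cap(0.5))  # ~50%:      4294967296 bytes
```

Fractions below 1 are useful when several processes (e.g. multiple KNIME executors) must share one GPU.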

Input Ports

The TensorFlow deep learning network.
The input table.

Output Ports

The output table.

Installation

To use this node in KNIME, install KNIME Deep Learning - TensorFlow Integration from the following update site:

KNIME 4.3

A zipped version of the software site can be downloaded here.

You don't know what to do with this link? Read our NodePit Product and Node Installation Guide, which explains in detail how to install nodes in your KNIME Analytics Platform.

Wait a sec! You want to explore and install nodes even faster? We highly recommend our NodePit for KNIME extension for your KNIME Analytics Platform. Browse NodePit from within KNIME, install nodes with just one click and share your workflows with NodePit Space.

Developers

You want to see the source code for this node? Click the following button and we’ll use our super-powers to find it for you.