pyeddl.eddl — eddl API

Model

Creation

pyeddl.eddl.Model(in_, out)

Create a model (Net).

Takes a list of input layers and a list of output layers as arguments.

Parameters
  • in_ – list of input layers.

  • out – list of output layers

Returns

model instance

pyeddl.eddl.setName(m, name)
pyeddl.eddl.getLayer(net, in_)
pyeddl.eddl.removeLayer(net, l)
pyeddl.eddl.initializeLayer(net, l)
pyeddl.eddl.get_parameters(net, deepcopy=False)
pyeddl.eddl.set_parameters(net, params)
pyeddl.eddl.build(net, o=None, lo=None, me=None, cs=None, init_weights=True)

Configure the model for training.

Parameters
  • net – model

  • o – optimizer

  • lo – list of losses

  • me – list of metrics

  • cs – computing service

  • init_weights – if True, initialize weights to random values (typically set to False for pretrained models)

Returns

None
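
Example (a minimal sketch: the layer helpers are documented under “Layers” below, and the loss/metric names and sizes are illustrative assumptions):

import pyeddl.eddl as eddl

in_ = eddl.Input([784])                  # batches of 784-dimensional vectors
x = eddl.ReLu(eddl.Dense(in_, 128))
out = eddl.Softmax(eddl.Dense(x, 10))
net = eddl.Model([in_], [out])

eddl.build(
    net,
    eddl.sgd(0.01, 0.9),           # optimizer
    ["soft_cross_entropy"],        # list of losses
    ["categorical_accuracy"],      # list of metrics
    eddl.CS_CPU()                  # computing service
)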

Computing services

pyeddl.eddl.toGPU(net, g=[1], lsb=1, mem='full_mem')

Assign model operations to the GPU.

Parameters
  • net – model

  • g – list of integers to set which GPUs will be used (1=on, 0=off)

  • lsb – (multi-gpu setting) number of batches to run before synchronizing the weights of the different GPUs

  • mem – memory consumption (“full_mem”, “mid_mem” or “low_mem”)

Returns

None

pyeddl.eddl.toCPU(net, th=-1)

Assign model operations to the CPU.

Parameters
  • net – model

  • th – number of CPU threads (-1 to use all threads)

Returns

None

pyeddl.eddl.toFPGA(net, hlsinf_version=1, hlsinf_subversion=0)

Assign model operations to the FPGA.

Parameters
  • net – model

  • hlsinf_version – HLSinf accelerator version to use

  • hlsinf_subversion – HLSinf accelerator subversion to use

Returns

None

pyeddl.eddl.CS_CPU(th=-1, mem='full_mem')

Create a computing service that executes the code in the CPU.

Parameters
  • th – number of threads to use (-1 = all available threads)

  • mem – memory consumption of the model: “full_mem”, “mid_mem” or “low_mem”

Returns

computing service

pyeddl.eddl.CS_GPU(g=[1], lsb=1, mem='full_mem')

Create a computing service that executes the code in the GPU.

Parameters
  • g – list of integers to set which GPUs will be used (1=on, 0=off)

  • lsb – (multi-gpu setting) number of batches to run before synchronizing the weights of the different GPUs

  • mem – memory consumption of the model: “full_mem”, “mid_mem” or “low_mem”

Returns

computing service

pyeddl.eddl.CS_FPGA(f, lsb=1)

Create a computing service that executes the code in the FPGA.

Parameters
  • f – list of integers to set which FPGAs will be used (1=on, 0=off)

  • lsb – (multi-fpga setting) number of batches to run before synchronizing the weights of the different FPGAs

Returns

computing service

pyeddl.eddl.CS_COMPSS(filename)

Create a computing service that executes the code in the COMPSs framework.

Parameters

filename – file with the setup specification

Returns

computing service

Info and logs

pyeddl.eddl.setlogfile(net, fname)

Save the training outputs of a model to a file.

Parameters
  • net – model

  • fname – name of the log file

Returns

None

pyeddl.eddl.summary(m)

Print a summary representation of the model.

Parameters

m – model

Returns

None

pyeddl.eddl.plot(m, fname, string='LR')

Plot a representation of the model.

Parameters
  • m – model to plot

  • fname – name of the file where the plot should be saved

  • string – plot orientation (“LR” for left-to-right, “TB” for top-to-bottom)

Returns

None
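
Example (sketch, assuming net was built as shown under “Creation”; later examples reuse the same eddl import and net):

eddl.setlogfile(net, "training")   # log training outputs to file
eddl.summary(net)                  # print a text summary
eddl.plot(net, "model.pdf")        # save a graph rendering of the model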

Serialization

pyeddl.eddl.load(m, fname, format='bin')

Load weights to reinstantiate the model.

Parameters
  • m – model

  • fname – name of the file containing the model weights

Returns

None

pyeddl.eddl.save(m, fname, format='bin')

Save model weights to a file.

Parameters
  • m – model

  • fname – name of the file where model weights should be saved

Returns

None
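
Example (weight round-trip sketch):

eddl.save(net, "weights.bin", "bin")   # write the weights
eddl.load(net, "weights.bin", "bin")   # load them back into a compatible model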

Optimizers

pyeddl.eddl.setlr(net, p)

Change the learning rate and hyperparameters of the model optimizer.

Parameters
  • net – model

  • p – list with the learning rate and hyperparameters of the model

Returns

None
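
Example (sketch: for an sgd optimizer, the list is assumed to follow the constructor’s argument order, i.e., [lr, momentum]):

eddl.setlr(net, [0.001, 0.9])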

pyeddl.eddl.adadelta(lr, rho, epsilon, weight_decay)

Adadelta optimizer.

Adadelta is a more robust extension of Adagrad that adapts learning rates based on a moving window of gradient updates, instead of accumulating all past gradients. This way, Adadelta continues learning even when many updates have been done. See: https://arxiv.org/abs/1212.5701.

Parameters
  • lr – learning rate

  • rho – smoothing constant

  • epsilon – term added to the denominator to improve numerical stability

  • weight_decay – weight decay (L2 penalty)

Returns

Adadelta optimizer

pyeddl.eddl.adam(lr=0.01, beta_1=0.9, beta_2=0.999, epsilon=1e-06, weight_decay=0, amsgrad=False)

Adam optimizer.

Default parameters follow those provided in the original paper (see: https://arxiv.org/abs/1412.6980v8).

Parameters
  • lr – learning rate

  • beta_1 – coefficient for computing the running average of the gradient

  • beta_2 – coefficient for computing the running average of the squared gradient

  • epsilon – term added to the denominator to improve numerical stability

  • weight_decay – weight decay (L2 penalty)

  • amsgrad – whether to apply the AMSGrad variant of this algorithm from the paper “On the Convergence of Adam and Beyond”

Returns

Adam optimizer

pyeddl.eddl.adagrad(lr, epsilon, weight_decay)

Adagrad optimizer.

Adagrad is an optimizer with parameter-specific learning rates, which are adapted relative to how frequently a parameter gets updated during training. The more updates a parameter receives, the smaller the learning rate. See: http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf

Parameters
  • lr – learning rate

  • epsilon – term added to the denominator to improve numerical stability

  • weight_decay – weight decay (L2 penalty)

Returns

Adagrad optimizer

pyeddl.eddl.adamax(lr, beta_1, beta_2, epsilon, weight_decay)

Adamax optimizer.

A variant of Adam based on the infinity norm.

Parameters
  • lr – learning rate

  • beta_1 – coefficient for computing the running average of the gradient

  • beta_2 – coefficient for computing the running average of the squared gradient

  • epsilon – term added to the denominator to improve numerical stability

  • weight_decay – weight decay (L2 penalty)

Returns

Adamax optimizer

pyeddl.eddl.nadam(lr, beta_1, beta_2, epsilon, schedule_decay)

Nadam optimizer.

Parameters
  • lr – learning rate

  • beta_1 – coefficient for computing the running average of the gradient

  • beta_2 – coefficient for computing the running average of the squared gradient

  • epsilon – term added to the denominator to improve numerical stability

  • schedule_decay – decay factor for the beta_1 momentum schedule

Returns

Nadam optimizer

pyeddl.eddl.rmsprop(lr=0.01, rho=0.9, epsilon=1e-05, weight_decay=0.0)

RMSProp optimizer.

It is recommended to leave the parameters of this optimizer at their default values (except for the learning rate, which can be freely tuned). See: http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf

Parameters
  • lr – learning rate

  • rho – smoothing constant

  • epsilon – term added to the denominator to improve numerical stability

  • weight_decay – weight decay (L2 penalty)

Returns

RMSProp optimizer

pyeddl.eddl.sgd(lr=0.01, momentum=0.0, weight_decay=0.0, nesterov=False)

Stochastic gradient descent optimizer.

Includes support for momentum, learning rate decay, and Nesterov momentum.

Parameters
  • lr – learning rate

  • momentum – momentum factor

  • weight_decay – weight decay (L2 penalty)

  • nesterov – whether to apply Nesterov momentum

Returns

SGD optimizer

Training and evaluation: coarse methods

pyeddl.eddl.fit(m, in_, out, batch, epochs)

Train the model for a fixed number of epochs (iterations on a dataset).

Parameters
  • m – model to train

  • in_ – input data (features)

  • out – output data (labels)

  • batch – number of samples per gradient update

  • epochs – number of training epochs. An epoch is an iteration over the entire data provided

Returns

None

pyeddl.eddl.evaluate(m, in_, out, bs=-1)

Compute the loss and metric values for the model in test mode.

Parameters
  • m – model to evaluate

  • in_ – input data (features)

  • out – output data (labels)

  • bs – batch size

Returns

None
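
Example (coarse training sketch, assuming x_train, y_train, x_test and y_test are pyeddl Tensors with matching shapes):

eddl.fit(net, [x_train], [y_train], 128, 10)   # batch size 128, 10 epochs
eddl.evaluate(net, [x_test], [y_test])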

Training and evaluation: finer methods

pyeddl.eddl.random_indices(batch_size, num_samples)

Generate a random sequence of indices for a batch.

Parameters
  • batch_size – length of the random sequence to generate

  • num_samples – number of samples available, i.e., maximum value to include in the random sequence + 1

Returns

list of integers

pyeddl.eddl.train_batch(net, in_, out, indices=None)

Train the model on the samples of the input list selected by the indices list.

Parameters
  • net – model to train

  • in_ – list of samples

  • out – list of labels or expected output

  • indices – list of indices of the samples to train

Returns

None

pyeddl.eddl.eval_batch(net, in_, out, indices=None)

Evaluate the model on the samples of the input list selected by the indices list.

Parameters
  • net – model to evaluate

  • in_ – list of samples

  • out – list of labels or expected output

  • indices – list of indices of the samples to evaluate

Returns

None

pyeddl.eddl.next_batch(in_, out)

Load the next batch of random samples from the input list to the output list.

Parameters
  • in_ – list from which the samples of the next batch should be chosen

  • out – list where the samples of the next batch should be stored

Returns

None
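
Example (finer-grained sketch of one epoch with manual batching; num_samples and the training Tensors are assumptions):

eddl.reset_loss(net)
for _ in range(num_samples // 128):
    indices = eddl.random_indices(128, num_samples)   # pick a random batch
    eddl.train_batch(net, [x_train], [y_train], indices)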

Training and evaluation: finest methods

pyeddl.eddl.set_mode(net, mode)

Set model mode.

Parameters
  • net – model

  • mode – 1 for training, 0 for test

Returns

None

pyeddl.eddl.reset_loss(m)

Reset model loss.

Parameters

m – model

Returns

None

pyeddl.eddl.forward(m, in_=None, b=None)

Run a forward pass of the model, computing the activations through the forward graph.

Parameters
  • m – model

  • in_ – list of layers or tensors

  • b – batch size to resize the model to

Returns

list of layers

pyeddl.eddl.zeroGrads(m)

Set model gradients to zero.

Parameters

m – model

Returns

None

pyeddl.eddl.backward(m, target=None)

Calculate the gradient by passing its argument (1x1 unit tensor by default) through the backward graph.

Parameters
  • m – model or loss (if it’s a loss then target must not be provided)

  • target – list of tensors

Returns

None

pyeddl.eddl.update(m)

Update the model weights.

Parameters

m – Model

Returns

None

pyeddl.eddl.print_loss(m, batch)

Print model loss at the given batch.

Parameters
  • m – model

  • batch – batch number

Returns

None

pyeddl.eddl.get_losses(m)

Get model losses.

Parameters

m – model

Returns

list of float

pyeddl.eddl.get_metrics(m)

Get model metrics.

Parameters

m – model

Returns

list of float
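
Example (finest-grained sketch of a single update step, following the signatures above; xb and yb are assumed batch tensors):

eddl.set_mode(net, 1)      # training mode
eddl.reset_loss(net)
eddl.zeroGrads(net)
eddl.forward(net, [xb])    # forward pass
eddl.backward(net, [yb])   # backward pass from the targets
eddl.update(net)           # apply the optimizer step
eddl.print_loss(net, 0)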

Constraints

pyeddl.eddl.clamp(m, min, max)

Perform model parameter clamping between min and max.

Parameters
  • m – model

  • min – minimum value

  • max – maximum value

Returns

None

Losses and metrics

pyeddl.eddl.compute_loss(L)

Compute the loss of the associated model.

Parameters

L – loss object

Returns

computed loss

pyeddl.eddl.compute_metric(L)

Compute the loss of the associated model (alias for compute_loss).

Parameters

L – loss object

Returns

computed loss

pyeddl.eddl.getLoss(type_)

Get loss by name.

Parameters

type_ – loss name

Returns

loss

pyeddl.eddl.newloss(f, in_, name)

Create a new loss.

f can be a Layer -> Layer function (and in_ must be a Layer) or a [Layer] -> Layer function (and in_ must be a [Layer]).

Parameters
  • f – loss function

  • in_ – loss input

  • name – loss name

Returns

loss
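
Example (sketch of a custom MSE-style loss built from operator layers; out and target are assumed to be layers of the same shape):

def mse_loss(inputs):
    diff = eddl.Sub(inputs[0], inputs[1])   # element-wise difference
    return eddl.Mult(diff, diff)            # element-wise square

fl = eddl.newloss(mse_loss, [out, target], "mse_loss")
loss_value = eddl.compute_loss(fl)          # after a forward pass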

pyeddl.eddl.getMetric(type_)

Get Metric by name.

Parameters

type_ – metric name

Returns

metric

pyeddl.eddl.detach(l)

Set a layer as detached, excluding it from gradient computation.

Parameters

l – layer or list of layers to detach

Returns

detached layer(s)

pyeddl.eddl.show_profile()

Show profile information.

Returns

None

pyeddl.eddl.reset_profile()

Reset profile information.

Returns

None

Layers

Core layers

pyeddl.eddl.Activation(parent, activation, params=[], name='')

Apply an activation function to the given layer.

Parameters
  • parent – parent layer

  • activation – name of the activation function

  • params – list of floats (parameters of the activation function)

  • name – name of the output layer

Returns

Activation layer

pyeddl.eddl.Softmax(parent, axis=-1, name='')

Apply a softmax activation function to the given layer.

Parameters
  • parent – parent layer

  • axis – dimension on which to operate (-1 for last axis)

  • name – name of the output layer

Returns

softmax Activation layer

pyeddl.eddl.Sigmoid(parent, name='')

Apply a sigmoid activation function to the given layer.

Parameters
  • parent – parent layer

  • name – name of the output layer

Returns

sigmoid Activation layer

pyeddl.eddl.HardSigmoid(parent, name='')

Apply a hard sigmoid activation function to the given layer.

Parameters
  • parent – parent layer

  • name – name of the output layer

Returns

hard sigmoid Activation layer

pyeddl.eddl.ReLu(parent, name='')

Apply a rectified linear unit activation function to the given layer.

Parameters
  • parent – parent layer

  • name – name of the output layer

Returns

ReLu Activation layer

pyeddl.eddl.ThresholdedReLu(parent, alpha=1.0, name='')

Apply the thresholded version of a rectified linear unit activation function to the given layer.

Parameters
  • parent – parent layer

  • alpha – threshold value

  • name – name of the output layer

Returns

thresholded ReLu Activation layer

pyeddl.eddl.LeakyReLu(parent, alpha=0.01, name='')

Apply the leaky version of a rectified linear unit activation function to the given layer.

Parameters
  • parent – parent layer

  • alpha – negative slope coefficient

  • name – name of the output layer

Returns

leaky ReLu Activation layer

pyeddl.eddl.Elu(parent, alpha=1.0, name='')

Apply the exponential linear unit activation function to the given layer.

Parameters
  • parent – parent layer

  • alpha – ELU coefficient

  • name – name of the output layer

Returns

ELU Activation layer

pyeddl.eddl.Selu(parent, name='')

Apply the scaled exponential linear unit activation function to the given layer.

Parameters
  • parent – parent layer

  • name – name of the output layer

Returns

SELU Activation layer

pyeddl.eddl.Exponential(parent, name='')

Apply the exponential (base e) activation function to the given layer.

Parameters
  • parent – parent layer

  • name – name of the output layer

Returns

exponential Activation layer

pyeddl.eddl.Softplus(parent, name='')

Apply the softplus activation function to the given layer.

Parameters
  • parent – parent layer

  • name – name of the output layer

Returns

softplus Activation layer

pyeddl.eddl.Softsign(parent, name='')

Apply the softsign activation function to the given layer.

Parameters
  • parent – parent layer

  • name – name of the output layer

Returns

softsign Activation layer

pyeddl.eddl.Linear(parent, alpha=1.0, name='')

Apply the linear activation function to the given layer.

Parameters
  • parent – parent layer

  • alpha – linear coefficient

  • name – name of the output layer

Returns

linear Activation layer

pyeddl.eddl.Tanh(parent, name='')

Apply the hyperbolic tangent activation function to the given layer.

Parameters
  • parent – parent layer

  • name – name of the output layer

Returns

hyperbolic tangent Activation layer

pyeddl.eddl.Conv(parent, filters, kernel_size, strides=[1, 1], padding='same', use_bias=True, groups=1, dilation_rate=[1, 1], name='')

2D convolution layer.

Parameters
  • parent – parent layer

  • filters – dimensionality of the output space (i.e., the number of output filters in the convolution)

  • kernel_size – list of 2 integers, specifying the height and width of the 2D convolution window.

  • strides – list of 2 integers, specifying the strides of the convolution along the height and width

  • padding – one of “none”, “valid” or “same”

  • use_bias – whether the layer uses a bias vector

  • groups – number of blocked connections from input to output channels

  • dilation_rate – list of 2 integers, specifying the dilation rate to use for dilated convolution

  • name – name of the output layer

Returns

Convolution layer

pyeddl.eddl.Conv1D(parent, filters, kernel_size, strides=[1], padding='same', use_bias=True, groups=1, dilation_rate=[1], name='')

1D convolution layer.

Parameters
  • parent – parent layer

  • filters – dimensionality of the output space (i.e., the number of output filters in the convolution)

  • kernel_size – list of 1 integer, specifying the length of the 1D convolution window

  • strides – list of 1 integer, specifying the stride of the convolution

  • padding – one of “none”, “valid” or “same”

  • use_bias – whether the layer uses a bias vector

  • groups – number of blocked connections from input to output channels

  • dilation_rate – list of 1 integer, specifying the dilation rate to use for dilated convolution

  • name – name of the output layer

Returns

Convolution layer

pyeddl.eddl.Conv2D(parent, filters, kernel_size, strides=[1, 1], padding='same', use_bias=True, groups=1, dilation_rate=[1, 1], name='')

2D convolution layer.

Parameters
  • parent – parent layer

  • filters – dimensionality of the output space (i.e., the number of output filters in the convolution)

  • kernel_size – list of 2 integers, specifying the height and width of the 2D convolution window.

  • strides – list of 2 integers, specifying the strides of the convolution along the height and width

  • padding – one of “none”, “valid” or “same”

  • use_bias – whether the layer uses a bias vector

  • groups – number of blocked connections from input to output channels

  • dilation_rate – list of 2 integers, specifying the dilation rate to use for dilated convolution

  • name – name of the output layer

Returns

Convolution layer

pyeddl.eddl.Conv3D(parent, filters, kernel_size, strides=[1, 1, 1], padding='same', use_bias=True, groups=1, dilation_rate=[1, 1, 1], name='')

3D convolution layer.

Parameters
  • parent – parent layer

  • filters – dimensionality of the output space (i.e., the number of output filters in the convolution)

  • kernel_size – list of 3 integers, specifying the sizes of the 3D convolution window along each dimension

  • strides – list of 3 integers, specifying the strides of the convolution along each dimension

  • padding – one of “none”, “valid” or “same”

  • use_bias – whether the layer uses a bias vector

  • groups – number of blocked connections from input to output channels

  • dilation_rate – list of 3 integers, specifying the dilation rate to use for dilated convolution

  • name – name of the output layer

Returns

Convolution layer

pyeddl.eddl.PointwiseConv(parent, filters, strides=[1, 1], use_bias=True, groups=1, dilation_rate=[1, 1], name='')

Pointwise convolution layer.

Parameters
  • parent – parent layer

  • filters – dimensionality of the output space (i.e., the number of output filters in the convolution)

  • strides – list of 2 integers, specifying the strides of the convolution along the height and width

  • use_bias – whether the layer uses a bias vector

  • groups – number of blocked connections from input to output channels

  • dilation_rate – list of 2 integers, specifying the dilation rate to use for dilated convolution

  • name – name of the output layer

Returns

Convolution layer

pyeddl.eddl.PointwiseConv2D(parent, filters, strides=[1, 1], use_bias=True, groups=1, dilation_rate=[1, 1], name='')

2D Pointwise convolution layer.

Parameters
  • parent – parent layer

  • filters – dimensionality of the output space (i.e., the number of output filters in the convolution)

  • strides – list of 2 integers, specifying the strides of the convolution along the height and width

  • use_bias – whether the layer uses a bias vector

  • groups – number of blocked connections from input to output channels

  • dilation_rate – list of 2 integers, specifying the dilation rate to use for dilated convolution

  • name – name of the output layer

Returns

Convolution layer

pyeddl.eddl.DepthwiseConv2D(parent, kernel_size, strides=[1, 1], padding='same', use_bias=True, dilation_rate=[1, 1], name='')

2D Depthwise convolution layer.

Parameters
  • parent – parent layer

  • kernel_size – height and width of the convolution window

  • strides – list of 2 integers, specifying the strides of the convolution along the height and width

  • padding – one of “none”, “valid” or “same”

  • use_bias – whether the layer uses a bias vector

  • dilation_rate – list of 2 integers, specifying the dilation rate to use for dilated convolution

  • name – name of the output layer

Returns

Convolution layer

pyeddl.eddl.Dense(parent, ndim, use_bias=True, name='')

Regular densely-connected layer.

Parameters
  • parent – parent layer

  • ndim – dimensionality of the output space

  • use_bias – whether the layer uses a bias vector

  • name – name of the output layer

Returns

Dense layer

pyeddl.eddl.Dropout(parent, rate, iw=True, name='')

Apply dropout to a layer.

The dropout consists of randomly setting a fraction of input units to 0 at each update during training time, which helps prevent overfitting.

Parameters
  • parent – parent layer

  • rate – fraction of input units to drop (between 0 and 1)

  • iw – whether to perform weighting in inference

  • name – name of the output layer

Returns

Dropout layer

pyeddl.eddl.Input(shape, name='')

Create a layer that can be used as input to a model.

Parameters
  • shape – list of dimensions, not including the batch size. For instance, shape=[32] indicates that the expected input will be batches of 32-dimensional vectors

  • name – name of the output layer

Returns

Input layer
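
Example (sketch of a small image model wired from these core layers; the channels-first [3, 32, 32] input shape and layer sizes are assumptions):

in_ = eddl.Input([3, 32, 32])
x = eddl.ReLu(eddl.Conv2D(in_, 16, [3, 3]))
x = eddl.MaxPool2D(x, [2, 2])   # see “Pooling layers” below
x = eddl.Flatten(x)
x = eddl.Dropout(eddl.ReLu(eddl.Dense(x, 64)), 0.5)
out = eddl.Softmax(eddl.Dense(x, 10))
net = eddl.Model([in_], [out])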

pyeddl.eddl.UpSampling(parent, size, interpolation='nearest', name='')

2D upsampling layer.

Identical to the scale transformation. Alias of Resize.

Parameters
  • parent – parent layer

  • size – list of 2 integers (upsampling factors for rows and columns)

  • interpolation – (deprecated) only “nearest” is valid

  • name – name of the output layer

Returns

UpSampling layer

pyeddl.eddl.UpSampling2D(parent, size, interpolation='nearest', name='')

2D upsampling layer.

Identical to the scale transformation. Alias of Resize.

Parameters
  • parent – parent layer

  • size – list of 2 integers (upsampling factors for rows and columns)

  • interpolation – (deprecated) only “nearest” is valid

  • name – name of the output layer

Returns

UpSampling layer

pyeddl.eddl.UpSampling3D(parent, new_shape, reshape=True, da_mode='constant', constant=0.0, coordinate_transformation_mode='asymmetric', name='')

3D upsampling layer.

Parameters
  • parent – parent layer

  • new_shape – new shape

  • reshape – if True, the output shape will be new_shape (classical scale; recommended). If False, the output shape will be the input shape (scale < 100%: scale + padding; scale > 100%: crop + scale)

  • da_mode – one of “nearest”, “constant”

  • constant – fill value for the area outside the resized image, used for all channels

  • coordinate_transformation_mode – how to transform the coordinates in the resized tensor to the coordinates in the original tensor

  • name – name of the output layer

Returns

UpSampling3D layer

pyeddl.eddl.Resize(parent, new_shape, reshape=True, da_mode='constant', constant=0.0, coordinate_transformation_mode='asymmetric', name='')

Resize an image layer to the given size as [height, width].

Same as Scale, but with the backward operation supported.

Parameters
  • parent – parent layer

  • new_shape – new shape

  • reshape – if True, the output shape will be new_shape (classical scale; recommended). If False, the output shape will be the input shape (scale < 100%: scale + padding; scale > 100%: crop + scale)

  • da_mode – one of “nearest”, “constant”

  • constant – fill value for the area outside the resized image, used for all channels

  • coordinate_transformation_mode – how to transform the coordinates in the resized tensor to the coordinates in the original tensor

  • name – name of the output layer

Returns

Resize layer

pyeddl.eddl.Reshape(parent, shape, name='')

Reshape an output to the given shape.

Parameters
  • parent – parent layer

  • shape – target shape as a list of integers, not including the batch axis

  • name – name of the output layer

Returns

Reshape layer

pyeddl.eddl.Transform(parent, copy_cpu_to_fpga, copy_fpga_to_cpu, transform, mode, name='')

Transform an input to an output format.

Parameters
  • parent – parent layer

  • mode – 0 = CHW to GHWC; 1 = GHWC to CHW

  • name – name of the output layer

Returns

Transform layer

pyeddl.eddl.Flatten(parent, name='')

Flatten the input. Does not affect the batch size.

Equivalent to a Reshape() with a shape of [-1].

Parameters
  • parent – parent layer

  • name – name of the output layer

Returns

Flatten layer

pyeddl.eddl.Repeat(parent, repeats, axis, name='')

Repeat the elements of the output tensor along the specified dimension.

Parameters
  • parent – parent layer

  • repeats – number of repetitions (integer or list of integers)

  • axis – axis along which to repeat values (batch is ignored)

  • name – name of the output layer

Returns

Repeat layer

pyeddl.eddl.Tile(parent, repeats, name='')

Construct a tensor by repeating the elements of input.

The repeats argument specifies the number of repetitions in each dimension.

Parameters
  • parent – parent layer

  • repeats – (list of integers) number of repetitions per dimension

  • name – name of the output layer

Returns

Tile layer

pyeddl.eddl.Broadcast(parent1, parent2, name='')

Prepare the output of the smaller layer to be broadcasted into the bigger one (parent1 or parent2).

Example:

f(P1(3), P2(4,2,3,5)) => P1 is x.  (P2 has no delta)
f(P1(4,2,3,5), P2(3)) => P2 is x.  (P1 has no delta)
Parameters
  • parent1 – parent layer

  • parent2 – parent layer

  • name – name of the output layer

Returns

Broadcast layer

pyeddl.eddl.Bypass(parent, bypass_name='', name='')

Propagate the output of the parent.

No-op layer, used internally for ONNX.

Parameters
  • parent – parent layer

  • bypass_name – name of the layer to bypass

  • name – name of the output layer

Returns

Bypass layer

pyeddl.eddl.Shape(parent, include_batch=True, name='')

Return the shape of the parent as the output.

Parameters
  • parent – parent layer

  • include_batch – If True, the batch dimension is included in the output

  • name – name of the output layer

Returns

Shape layer

pyeddl.eddl.Squeeze(parent, axis=-1, name='')

Remove a dimension of size one at the specified position (the batch dimension is ignored).

Parameters
  • parent – parent layer

  • axis – squeeze only along this dimension (default: -1, squeeze along all dimensions)

  • name – name of the output layer

Returns

Squeeze layer

pyeddl.eddl.Unsqueeze(parent, axis=0, name='')

Insert a dimension of size one at the specified position (the batch dimension is ignored).

Parameters
  • parent – parent layer

  • axis – unsqueeze only along this dimension

  • name – name of the output layer

Returns

Unsqueeze layer

pyeddl.eddl.ConvT2D(parent, filters, kernel_size, strides=[1, 1], padding='same', use_bias=True, groups=1, dilation_rate=[1, 1], name='')

2D Transposed convolution layer (sometimes called deconvolution).

The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution.

Parameters
  • parent – parent layer

  • filters – dimensionality of the output space (i.e., the number of output filters in the convolution)

  • kernel_size – the height and width of the 2D convolution window

  • strides – the strides of the convolution along the height and width

  • padding – one of “valid” or “same”

  • use_bias – whether the layer uses a bias vector

  • dilation_rate – the dilation rate to use for dilated convolution. Spacing between kernel elements

  • name – name of the output layer

Returns

ConvT2D layer

pyeddl.eddl.ConvT3D(parent, filters, kernel_size, strides=[1, 1, 1], padding='same', use_bias=True, groups=1, dilation_rate=[1, 1, 1], name='')

3D Transposed convolution layer (sometimes called deconvolution).

The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution.

Parameters
  • parent – parent layer

  • filters – dimensionality of the output space (i.e., the number of output filters in the convolution)

  • kernel_size – the depth, height and width of the 3D convolution window

  • strides – the strides of the convolution along the depth, height and width dimensions

  • padding – one of “valid” or “same”

  • use_bias – whether the layer uses a bias vector

  • dilation_rate – the dilation rate to use for dilated convolution. Spacing between kernel elements

  • name – name of the output layer

Returns

ConvT3D layer

pyeddl.eddl.Embedding(parent, vocsize, length, output_dim, mask_zeros=False, name='')

Turn positive integers (indexes) into dense vectors of fixed size. e.g., [[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]].

Parameters
  • parent – parent layer

  • vocsize – size of the vocabulary, i.e., maximum integer index + 1

  • length – length of the sequence, to connect to Dense layers (non Recurrent)

  • output_dim – dimension of the dense embedding

  • mask_zeros – whether to mask zero-valued indices (treat them as padding)

  • name – name of the output layer

Returns

Embedding layer

pyeddl.eddl.Transpose(parent, name='')

Transpose a Layer.

Parameters
  • parent – parent layer

  • name – name of the output layer

Returns

the transposed layer

pyeddl.eddl.ConstOfTensor(t, name='')

Repeat tensor for each batch.

Given a tensor (constant), this layer outputs the same tensor but repeated for each batch.

Parameters
  • t – a tensor

  • name – name of the output layer

Returns

a ConstOfTensor layer

pyeddl.eddl.Where(parent1, parent2, condition, name='')

Select elements from parent1 or parent2 depending on a condition.

Parameters
  • parent1 – parent layer

  • parent2 – parent layer

  • condition – layer that selects parent1 where True, parent2 where False (0.0 for False and 1.0 for True)

  • name – name of the output layer

Returns

a Where layer

Transformation layers

pyeddl.eddl.Crop(parent, from_coords, to_coords, reshape=True, constant=0.0, name='')

Crop the given image layer at [(top, left), (bottom, right)].

Parameters
  • parent – parent layer

  • from_coords – [top, left] coordinates

  • to_coords – [bottom, right] coordinates

  • reshape – if True, the output shape will be the size of the cropped region; if False, the output keeps the input shape, with the area outside the crop filled with constant

  • constant – erasing value

  • name – name of the output layer

Returns

Crop layer

pyeddl.eddl.CenteredCrop(parent, size, reshape=True, constant=0.0, name='')

Crop the given image layer at the center with size [height, width].

Parameters
  • parent – parent layer

  • size – [height, width]

  • reshape – if True, the output shape will be the size of the cropped region; if False, the output keeps the input shape, with the area outside the crop filled with constant

  • constant – erasing value

  • name – name of the output layer

Returns

a Crop layer

pyeddl.eddl.CropScale(parent, from_coords, to_coords, da_mode='constant', constant=0.0, name='')

Crop the given image layer at [(top, left), (bottom, right)] and scale it to the parent size.

Parameters
  • parent – parent layer

  • from_coords – [top, left] coordinates

  • to_coords – [bottom, right] coordinates

  • da_mode – one of “nearest”, “constant”

  • constant – fill value for the area outside the rotated image, used for all channels

  • name – name of the output layer

Returns

CropScale layer

pyeddl.eddl.Cutout(parent, from_coords, to_coords, constant=0.0, name='')

Select a rectangle in an image layer at [(top, left), (bottom, right)] and erase its pixels using a constant value.

Parameters
  • parent – parent layer

  • from_coords – [top, left] coordinates

  • to_coords – [bottom, right] coordinates

  • constant – erasing value

  • name – name of the output layer

Returns

Cutout layer

pyeddl.eddl.Flip(parent, axis=0, name='')

Flip an image layer at the given axis.

Parameters
  • parent – parent layer

  • axis – flip axis

  • name – name of the output layer

Returns

Flip layer

pyeddl.eddl.HorizontalFlip(parent, name='')

Flip an image layer horizontally.

Parameters
  • parent – parent layer

  • name – name of the output layer

Returns

a Flip layer

pyeddl.eddl.Pad(parent, padding, constant=0.0, name='')

Pad an image on all sides.

Parameters
  • parent – parent layer

  • padding – padding on each border, (top-bottom, left-right) or (top, right, bottom, left)

  • constant – pad with a constant value

  • name – name of the output layer

Returns

a Pad layer

pyeddl.eddl.Rotate(parent, angle, offset_center=[0, 0], da_mode='original', constant=0.0, name='')

Rotate an image layer by the given angle, counterclockwise.

Parameters
  • parent – parent layer

  • angle – rotation angle in degrees

  • offset_center – center of rotation

  • da_mode – one of “nearest”, “constant”

  • constant – fill value for the area outside the rotated image, used for all channels

  • name – name of the output layer

Returns

Rotate layer

pyeddl.eddl.Scale(parent, new_shape, reshape=True, da_mode='constant', constant=0.0, coordinate_transformation_mode='asymmetric', name='')

Resize an image layer to the given size as [height, width].

Parameters
  • parent – parent layer

  • new_shape – new shape

  • reshape – if True, the output shape will be new_shape (classical scale; recommended). If False, the output shape will be the input shape (scale < 100%: scale + padding; scale > 100%: crop + scale)

  • da_mode – one of “nearest”, “constant”

  • constant – fill value for the area outside the resized image, used for all channels

  • coordinate_transformation_mode – how to transform the coordinates in the resized tensor to the coordinates in the original tensor

  • name – name of the output layer

Returns

Scale layer

pyeddl.eddl.Shift(parent, shift, da_mode='nearest', constant=0.0, name='')

Shift the input image.

Parameters
  • parent – parent layer

  • shift – list of maximum absolute fraction for the horizontal and vertical translations

  • da_mode – one of “nearest”, “constant”

  • constant – fill value for the area outside the shifted image, used for all channels

  • name – name of the output layer

Returns

Shift layer

pyeddl.eddl.VerticalFlip(parent, name='')

Flip an image layer vertically.

Parameters
  • parent – parent layer

  • name – name of the output layer

Returns

a Flip layer

Data augmentation layers

pyeddl.eddl.RandomCrop(parent, new_shape, name='')

Crop an image layer at a random location with size [height, width].

Parameters
  • parent – parent layer

  • new_shape – [height, width] size

  • name – name of the output layer

Returns

CropRandom layer

pyeddl.eddl.RandomCropScale(parent, factor, da_mode='nearest', name='')

Crop an image layer randomly and scale it to the parent size.

Parameters
  • parent – parent layer

  • factor – crop range factor

  • da_mode – one of “nearest”, “constant”

  • name – name of the output layer

Returns

CropScaleRandom layer

pyeddl.eddl.RandomCutout(parent, factor_x, factor_y, constant=0.0, name='')

Randomly select a rectangle region in an image layer and erase its pixels.

The random region is defined by the range [(min_x, max_x), (min_y, max_y)] (relative values).

Parameters
  • parent – parent layer

  • factor_x – list of factors for horizontal size

  • factor_y – list of factors for vertical size

  • constant – erasing value

  • name – name of the output layer

Returns

CutoutRandom layer

pyeddl.eddl.RandomFlip(parent, axis, name='')

Flip an image layer at the given axis randomly.

Parameters
  • parent – parent layer

  • axis – flip axis

  • name – name of the output layer

Returns

FlipRandom layer

pyeddl.eddl.RandomHorizontalFlip(parent, name='')

Flip an image layer horizontally, randomly.

Parameters
  • parent – parent layer

  • name – name of the output layer

Returns

a FlipRandom layer

pyeddl.eddl.RandomRotation(parent, factor, offset_center=[0, 0], da_mode='original', constant=0.0, name='')

Rotate an image layer randomly.

Parameters
  • parent – parent layer

  • factor – angle range in degrees (counterclockwise)

  • offset_center – center of rotation

  • da_mode – one of “original”

  • constant – fill value for the area outside the rotated image, used for all channels.

  • name – name of the output layer

Returns

RotateRandom layer

pyeddl.eddl.RandomScale(parent, factor, da_mode='nearest', constant=0.0, coordinate_transformation_mode='asymmetric', name='')

Resize an image layer randomly.

Parameters
  • parent – parent layer

  • factor – list of resize factors for the new shape

  • da_mode – one of “nearest”

  • constant – fill value for the area outside the resized image, used for all channels

  • coordinate_transformation_mode – how to transform the coordinates in the resized tensor to the coordinates in the original tensor

  • name – name of the output layer

Returns

ScaleRandom layer

pyeddl.eddl.RandomShift(parent, factor_x, factor_y, da_mode='nearest', constant=0.0, name='')

Shift an image layer randomly.

Parameters
  • parent – parent layer

  • factor_x – list of factors for horizontal translations

  • factor_y – list of factors for vertical translations

  • da_mode – one of “nearest”, “constant”

  • constant – fill value for the area outside the resized image, used for all channels

  • name – name of the output layer

Returns

ShiftRandom layer

pyeddl.eddl.RandomVerticalFlip(parent, name='')

Flip an image layer vertically, randomly.

Parameters
  • parent – parent layer

  • name – name of the output layer

Returns

a RandomFlip layer

Merge layers

pyeddl.eddl.Add(layers, name='')

Add input layers.

NOTE: this function can also be used to compute the sum of two layers or floats (which was previously done via the now deprecated function Sum). In this case, the function is called as Add(l1, l2), where each of the two parameters can be a layer or a float (one of them must be a layer).

Parameters
  • layers – list of layers, all of the same shape

  • name – name of the output layer

Returns

Add layer

pyeddl.eddl.Average(layers, name='')

Compute the average of a list of input layers.

Parameters
  • layers – list of layers, all of the same shape

  • name – name of the output layer

Returns

Average layer

pyeddl.eddl.Concat(layers, axis=0, name='')

Concatenate input layers.

Parameters
  • layers – list of layers

  • axis – axis along which to concatenate

  • name – name of the output layer

Returns

Concat layer
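
Example (sketch: two branches of the same parent joined with Concat; Add, Average, Maximum and Minimum require branches of equal shape instead):

branch_a = eddl.ReLu(eddl.Dense(in_, 32))
branch_b = eddl.Tanh(eddl.Dense(in_, 32))
merged = eddl.Concat([branch_a, branch_b])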

pyeddl.eddl.MatMul(layers, name='')
pyeddl.eddl.Maximum(layers, name='')

Compute the maximum (element-wise) of a list of input layers.

Parameters
  • layers – list of layers, all of the same shape

  • name – name of the output layer

Returns

Maximum layer

pyeddl.eddl.Minimum(layers, name='')

Compute the minimum (element-wise) of a list of input layers.

Parameters
  • layers – list of layers, all of the same shape

  • name – name of the output layer

Returns

Minimum layer

pyeddl.eddl.Subtract(layers, name='')

Subtract two input layers.

Parameters
  • layers – list of two layers with the same shape

  • name – name of the output layer

Returns

Subtract layer

Noise layers

pyeddl.eddl.GaussianNoise(parent, stddev, name='')

Apply additive zero-centered Gaussian noise.

This is useful to mitigate overfitting (it can be considered a form of random data augmentation). Gaussian noise is a natural choice as a corruption process for real-valued inputs. Being a regularization layer, it is only active at training time.

Parameters
  • parent – parent layer

  • stddev – standard deviation of the noise distribution

  • name – name of the output layer

Returns

GaussianNoise layer

Normalization layers

pyeddl.eddl.BatchNormalization(parent, affine, momentum=0.9, epsilon=1e-05, name='')

Batch normalization layer.

Normalize the activations of the input layer at each batch, i.e., apply a transformation that maintains the mean activation close to 0 and the activation standard deviation close to 1. See: https://arxiv.org/abs/1502.03167

Parameters
  • parent – parent layer

  • affine – if True, this module has learnable affine parameters

  • momentum – momentum for the moving mean and the moving variance

  • epsilon – small float added to variance to avoid dividing by zero

  • name – name of the output layer

Returns

BatchNorm layer

pyeddl.eddl.LayerNormalization(parent, affine=True, epsilon=1e-05, name='')

Layer normalization layer.

See: https://arxiv.org/abs/1607.06450.

Parameters
  • parent – parent layer

  • affine – if True, this module has learnable affine parameters

  • epsilon – value added to the denominator for numerical stability

  • name – name of the output layer

Returns

LayerNorm layer

pyeddl.eddl.GroupNormalization(parent, groups, epsilon=0.001, affine=True, name='')

Group normalization layer.

Divide the channels into groups and compute within each group the mean and variance for normalization. The computation is independent of batch sizes. See: https://arxiv.org/abs/1803.08494.

Parameters
  • parent – parent layer

  • groups – number of groups in which the channels will be divided

  • epsilon – value added to the denominator for numerical stability

  • affine – if True, this module has learnable affine parameters

  • name – name of the output layer

Returns

GroupNorm layer

pyeddl.eddl.Norm(parent, epsilon=0.001, name='')
pyeddl.eddl.NormMax(parent, epsilon=0.001, name='')
pyeddl.eddl.NormMinMax(parent, epsilon=0.001, name='')

Operator layers

pyeddl.eddl.Abs(l)

Compute the element-wise absolute value of the given input layer.

Parameters

l – parent layer

Returns

Abs layer

pyeddl.eddl.Sub(l1, l2)

Compute the difference between two layers or floats.

Parameters
  • l1 – a layer or float

  • l2 – a layer or float

Returns

Sub layer

pyeddl.eddl.Diff(l1, l2)

Compute the difference between two layers or floats.

Deprecated alias for Sub.

Parameters
  • l1 – a layer or float

  • l2 – a layer or float

Returns

Diff layer

pyeddl.eddl.Div(l1, l2)

Compute the element-wise division of two layers or floats.

Parameters
  • l1 – a layer or float

  • l2 – a layer or float

Returns

Div layer

pyeddl.eddl.Exp(l)

Compute the element-wise exponential of the input layer.

Parameters

l – parent layer

Returns

Exp layer

pyeddl.eddl.Log(l)

Compute the natural logarithm of the input layer.

Parameters

l – parent layer

Returns

Log layer

pyeddl.eddl.Log2(l)

Compute the base 2 logarithm of the input layer.

Parameters

l – parent layer

Returns

Log2 layer

pyeddl.eddl.Log10(l)

Compute the base 10 logarithm of the input layer.

Parameters

l – parent layer

Returns

Log10 layer

pyeddl.eddl.Clamp(parent, min, max, name='')

Clamp all elements in input into the range [min, max].

Parameters
  • parent – parent layer

  • min – lower bound of the range to be clamped to

  • max – upper bound of the range to be clamped to

  • name – name of the output layer

Returns

Clamp layer

pyeddl.eddl.Clip(parent, min, max, name='')

Clamp all elements in input into the range [min, max].

Alias for Clamp.

Parameters
  • parent – parent layer

  • min – lower bound of the range to be clamped to

  • max – upper bound of the range to be clamped to

  • name – name of the output layer

Returns

Clamp layer

pyeddl.eddl.Mult(l1, l2)

Compute the element-wise multiplication of two layers or floats.

Parameters
  • l1 – a layer or float

  • l2 – a layer or float

Returns

Mult layer

pyeddl.eddl.Pow(l1, k)

Compute the power of a layer raised to a float exponent.

Parameters
  • l1 – a layer

  • k – a float (the exponent)

Returns

Pow layer

pyeddl.eddl.Sqrt(l)

Compute the square root of a layer.

Parameters

l – parent layer

Returns

Sqrt layer

pyeddl.eddl.Sum(l1, l2)

Compute the sum of two layers or floats.

Deprecated alias for Add (in the add-two-layers-or-floats version).

Parameters
  • l1 – a layer or float

  • l2 – a layer or float

Returns

Sum layer

pyeddl.eddl.Select(l, indices, name='')

Create a new layer which indexes the input layer using the entries in indices.

Parameters
  • l – parent layer

  • indices – list of indices to be selected

  • name – name of the output layer

Returns

Select layer

pyeddl.eddl.Slice(l, indices, name='')

Alias for Select.

pyeddl.eddl.Expand(l, size, name='')

Expand singleton dimensions.

Parameters
  • l – parent layer

  • size – target size for dimension expansion

  • name – name of the output layer

Returns

Expand layer

pyeddl.eddl.Split(l, indexes, axis=-1, merge_sublayers=False, name='')

Split layer into a list of tensor layers.

Parameters
  • l – parent layer

  • indexes – list of split points, e.g., [20, 60] => [:20, 20:60, 60:]

  • axis – which axis to split on (-1 = last)

  • merge_sublayers – merge layers symbolically (affects plotting)

  • name – name of the output layer

Returns

vector of layers

pyeddl.eddl.Permute(l, dims, name='')

Permute the dimensions of the input according to a given pattern.

Parameters
  • l – parent layer

  • dims – permutation pattern, does not include the samples dimension

  • name – name of the output layer

Returns

Permute layer

Reduction layers

pyeddl.eddl.ReduceMean(l, axis, keepdims=False)
pyeddl.eddl.ReduceVar(l, axis, keepdims=False)
pyeddl.eddl.ReduceSum(l, axis, keepdims=False)
pyeddl.eddl.ReduceMax(l, axis, keepdims=False)
pyeddl.eddl.ReduceMin(l, axis, keepdims=False)
pyeddl.eddl.ReduceArgMax(l, axis, keepdims=False)

Generator layers

pyeddl.eddl.GaussGenerator(mean, stdev, size)
pyeddl.eddl.UniformGenerator(low, high, size)

Pooling layers

pyeddl.eddl.AveragePool(parent, pool_size=[2, 2], strides=[2, 2], padding='none', name='')

Perform average pooling.

Parameters
  • parent – parent layer

  • pool_size – size of the average pooling windows

  • strides – factors by which to downscale

  • padding – one of “none”, “valid” or “same”

  • name – name of the output layer

Returns

AveragePool layer

pyeddl.eddl.AvgPool(parent, pool_size=[2, 2], strides=[2, 2], padding='none', name='')

Alias for AveragePool.

pyeddl.eddl.AveragePool1D(parent, pool_size=[2], strides=[2], padding='none', name='')

Perform 1D average pooling.

Parameters
  • parent – parent layer

  • pool_size – size of the average pooling windows

  • strides – factors by which to downscale

  • padding – one of “none”, “valid” or “same”

  • name – name of the output layer

Returns

AveragePool1D layer

pyeddl.eddl.AvgPool1D(parent, pool_size=[2], strides=[2], padding='none', name='')

Alias for AveragePool1D.

pyeddl.eddl.AveragePool2D(parent, pool_size=[2, 2], strides=[2, 2], padding='none', name='')

Perform 2D average pooling.

Parameters
  • parent – parent layer

  • pool_size – size of the average pooling windows

  • strides – factors by which to downscale

  • padding – one of “none”, “valid” or “same”

  • name – name of the output layer

Returns

AveragePool2D layer

pyeddl.eddl.AvgPool2D(parent, pool_size=[2, 2], strides=[2, 2], padding='none', name='')

Alias for AveragePool2D.

pyeddl.eddl.AveragePool3D(parent, pool_size=[2, 2, 2], strides=[2, 2, 2], padding='none', name='')

Perform 3D average pooling.

Parameters
  • parent – parent layer

  • pool_size – size of the average pooling windows

  • strides – factors by which to downscale

  • padding – one of “none”, “valid” or “same”

  • name – name of the output layer

Returns

AveragePool3D layer

pyeddl.eddl.AvgPool3D(parent, pool_size=[2, 2, 2], strides=[2, 2, 2], padding='none', name='')

Alias for AveragePool3D.

pyeddl.eddl.GlobalMaxPool(parent, name='')

Perform global max pooling.

Parameters
  • parent – parent layer

  • name – name of the output layer

Returns

a MaxPool layer

pyeddl.eddl.GlobalMaxPool1D(parent, name='')

Perform 1D global max pooling.

Parameters
  • parent – parent layer

  • name – name of the output layer

Returns

a MaxPool layer

pyeddl.eddl.GlobalMaxPool2D(parent, name='')

Perform 2D global max pooling.

Parameters
  • parent – parent layer

  • name – name of the output layer

Returns

a MaxPool layer

pyeddl.eddl.GlobalMaxPool3D(parent, name='')

Perform 3D global max pooling.

Parameters
  • parent – parent layer

  • name – name of the output layer

Returns

a MaxPool layer

pyeddl.eddl.GlobalAveragePool(parent, name='')

Perform global average pooling.

Parameters
  • parent – parent layer

  • name – name of the output layer

Returns

an AveragePool layer

pyeddl.eddl.GlobalAvgPool(parent, name='')

Alias for GlobalAveragePool.

pyeddl.eddl.GlobalAveragePool1D(parent, name='')

Perform 1D global average pooling.

Parameters
  • parent – parent layer

  • name – name of the output layer

Returns

an AveragePool layer

pyeddl.eddl.GlobalAvgPool1D(parent, name='')

Alias for GlobalAveragePool1D.

pyeddl.eddl.GlobalAveragePool2D(parent, name='')

Perform 2D global average pooling.

Parameters
  • parent – parent layer

  • name – name of the output layer

Returns

an AveragePool layer

pyeddl.eddl.GlobalAvgPool2D(parent, name='')

Alias for GlobalAveragePool2D.

pyeddl.eddl.GlobalAveragePool3D(parent, name='')

Perform 3D global average pooling.

Parameters
  • parent – parent layer

  • name – name of the output layer

Returns

an AveragePool layer

pyeddl.eddl.GlobalAvgPool3D(parent, name='')

Alias for GlobalAveragePool3D.

pyeddl.eddl.MaxPool(parent, pool_size=[2, 2], strides=[2, 2], padding='none', name='')

Perform Max pooling.

Parameters
  • parent – parent layer

  • pool_size – size of the max pooling windows

  • strides – factor by which to downscale

  • padding – one of “none”, “valid” or “same”

  • name – name of the output layer

Returns

MaxPool layer

pyeddl.eddl.MaxPool1D(parent, pool_size=[2], strides=[2], padding='none', name='')

Perform 1D Max pooling.

Parameters
  • parent – parent layer

  • pool_size – size of the max pooling windows

  • strides – factor by which to downscale

  • padding – one of “none”, “valid” or “same”

  • name – name of the output layer

Returns

MaxPool1D layer

pyeddl.eddl.MaxPool2D(parent, pool_size=[2, 2], strides=[2, 2], padding='none', name='')

Perform 2D Max pooling.

Parameters
  • parent – parent layer

  • pool_size – size of the max pooling windows

  • strides – factor by which to downscale

  • padding – one of “none”, “valid” or “same”

  • name – name of the output layer

Returns

MaxPool layer

pyeddl.eddl.MaxPool3D(parent, pool_size=[2, 2, 2], strides=[2, 2, 2], padding='none', name='')

Perform 3D Max pooling.

Parameters
  • parent – parent layer

  • pool_size – size of the max pooling windows

  • strides – factor by which to downscale

  • padding – one of “none”, “valid” or “same”

  • name – name of the output layer

Returns

MaxPool layer

Recurrent layers

pyeddl.eddl.RNN(parent, units, activation='tanh', use_bias=True, bidirectional=False, name='')

Fully-connected RNN where the output is to be fed back to input.

Parameters
  • parent – parent layer

  • units – dimensionality of the output space.

  • activation – activation

  • use_bias – whether the layer uses a bias vector

  • bidirectional – whether the RNN is bidirectional

  • name – name of the output layer

Returns

RNN layer

pyeddl.eddl.LSTM(parent, units, mask_zeros=False, bidirectional=False, name='')

Long Short-Term Memory layer - Hochreiter 1997.

Parameters
  • parent – parent layer or vector of layers

  • units – dimensionality of the output space.

  • mask_zeros – boolean

  • bidirectional – whether the net is bidirectional or not

  • name – name of the output layer

Returns

LSTM layer
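
Example (sketch of a sequence classifier; the vocabulary size and dimensions are placeholder assumptions):

in_ = eddl.Input([1])                   # one token index per step
x = eddl.Embedding(in_, 2000, 1, 32)    # vocsize=2000, length=1, output_dim=32
x = eddl.LSTM(x, 64)
out = eddl.Softmax(eddl.Dense(x, 2))
net = eddl.Model([in_], [out])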

pyeddl.eddl.GRU(parent, units, mask_zeros=False, bidirectional=False, name='')

Gated Recurrent Unit (GRU).

Parameters
  • parent – parent layer or vector of layers

  • units – dimensionality of the output space.

  • mask_zeros – boolean

  • bidirectional – whether the net is bidirectional or not

  • name – name of the output layer

Returns

GRU layer

pyeddl.eddl.setDecoder(l)
pyeddl.eddl.GetStates(parent)

Utilities

pyeddl.eddl.setTrainable(net, lname, val)
pyeddl.eddl.getOut(net)

Initializers

pyeddl.eddl.GlorotNormal(l, seed=1234)

Glorot normal initializer, also called Xavier normal initializer.

It draws samples from a truncated normal distribution centered on 0 with stddev = sqrt(2 / (fan_in + fan_out)) where fan_in is the number of input units in the weight tensor and fan_out is the number of output units in the weight tensor.

Parameters
  • l – parent layer to initialize

  • seed – used to seed the random generator

Returns

GlorotNormal layer

pyeddl.eddl.GlorotUniform(l, seed=1234)

Glorot uniform initializer, also called Xavier uniform initializer.

It draws samples from a uniform distribution within [-limit, limit] where limit is sqrt(6 / (fan_in + fan_out)), where fan_in is the number of input units in the weight tensor and fan_out is the number of output units in the weight tensor.

Parameters
  • l – parent layer to initialize

  • seed – used to seed the random generator

Returns

GlorotUniform layer
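
Example (sketch: initializers wrap the layer whose weights they initialize and return it):

x = eddl.GlorotUniform(eddl.Dense(in_, 128))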

pyeddl.eddl.HeNormal(l, seed=1234)

He normal initializer.

It draws samples from a truncated normal distribution centered on 0 with stddev = sqrt(2 / fan_in) where fan_in is the number of input units in the weight tensor.

Parameters
  • l – parent layer to initialize

  • seed – used to seed the random generator

Returns

HeNormal layer

pyeddl.eddl.HeUniform(l, seed=1234)

He uniform initializer.

It draws samples from a uniform distribution within [-limit, limit] where limit is sqrt(6 / fan_in), where fan_in is the number of input units in the weight tensor.

Parameters
  • l – parent layer to initialize

  • seed – used to seed the random generator

Returns

HeUniform layer

pyeddl.eddl.RandomNormal(l, m=0.0, s=0.1, seed=1234)

Random normal initializer.

Parameters
  • l – parent layer to initialize

  • m – mean of the normal distribution to draw samples

  • s – standard deviation of the normal distribution to draw samples

  • seed – used to seed the random generator

Returns

RandomNormal layer

pyeddl.eddl.RandomUniform(l, min=0.0, max=0.1, seed=1234)

Random uniform initializer.

Parameters
  • l – parent layer to initialize

  • min – min of the distribution

  • max – max of the distribution

  • seed – used to seed the random generator

Returns

RandomUniform layer

pyeddl.eddl.Constant(l, v=0.1)

Initializer that generates tensors initialized to a constant value.

Parameters
  • l – parent layer to initialize

  • v – value of the generator

Returns

Constant layer

Regularizers

pyeddl.eddl.L2(l, l2)

Regularizer for L2 regularization.

Parameters
  • l – parent layer to regularize

  • l2 – L2 regularization factor

Returns

the input layer, regularized

pyeddl.eddl.L1(l, l1)

Regularizer for L1 regularization.

Parameters
  • l – parent layer to regularize

  • l1 – L1 regularization factor

Returns

the input layer, regularized

pyeddl.eddl.L1L2(l, l1, l2)

Regularizer for L1 and L2 regularization.

Parameters
  • l – parent layer to regularize

  • l1 – L1 regularization factor

  • l2 – L2 regularization factor

Returns

the input layer, regularized
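
Example (sketch: like initializers, regularizers wrap a layer and return it):

x = eddl.L2(eddl.Dense(in_, 128), 0.001)   # L2 factor 0.001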

Utils

pyeddl.eddl.get_topk_predictions(class_probs, class_names, k=5, decimals=2)

Get the top k class names along with their probabilities.

Parameters
  • class_probs – tensor with class probabilities (shapes: (n), (1, n) or (n, 1))

  • class_names – class names as a list of strings

  • k – number of classes to return

  • decimals – number of decimal places for probability values

Returns

a string containing the top class names

Get Models

pyeddl.eddl.download_model(name, link)
pyeddl.eddl.download_vgg16(top=True, input_shape=[])

Download a VGG16 model pretrained with imagenet.

Parameters
  • top – If True, remove the densely connected part from the model and rename the last layer as “top”.

  • input_shape – new input shape for the model (do not specify the batch dimension)

Returns

a VGG16 model
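
Example (sketch of reusing a downloaded model: attach a new classifier to the “top” layer and build with init_weights=False, per the build() notes above; the input layer name “input” and the class count are assumptions):

vgg = eddl.download_vgg16(True, [3, 224, 224])
in_ = eddl.getLayer(vgg, "input")
top = eddl.getLayer(vgg, "top")
out = eddl.Softmax(eddl.Dense(top, 10, True, "new_dense"))
net = eddl.Model([in_], [out])
eddl.build(net, eddl.adam(0.0001), ["soft_cross_entropy"],
           ["categorical_accuracy"], eddl.CS_CPU(),
           False)   # init_weights=False keeps the pretrained weights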

pyeddl.eddl.download_vgg16_bn(top=True, input_shape=[])

Download a VGG16 model with BatchNormalization pretrained with imagenet.

Parameters
  • top – If True, remove the densely connected part from the model and rename the last layer as “top”.

  • input_shape – new input shape for the model (do not specify the batch dimension)

Returns

a VGG16 model with BatchNormalization

pyeddl.eddl.download_vgg19(top=True, input_shape=[])

Download a VGG19 model pretrained with imagenet.

Parameters
  • top – If True, remove the densely connected part from the model and rename the last layer as “top”.

  • input_shape – new input shape for the model (do not specify the batch dimension)

Returns

a VGG19 model

pyeddl.eddl.download_vgg19_bn(top=True, input_shape=[])

Download a VGG19 model with BatchNormalization pretrained with imagenet.

Parameters
  • top – If True, remove the densely connected part from the model and rename the last layer as “top”.

  • input_shape – new input shape for the model (do not specify the batch dimension)

Returns

a VGG19 model with BatchNormalization

pyeddl.eddl.download_resnet18(top=True, input_shape=[])

Download a ResNet18 model pretrained with imagenet.

Parameters
  • top – If True, remove the densely connected part from the model and rename the last layer as “top”.

  • input_shape – new input shape for the model (do not specify the batch dimension)

Returns

a ResNet18 model

pyeddl.eddl.download_resnet34(top=True, input_shape=[])

Download a ResNet34 model pretrained with imagenet.

Parameters
  • top – If True, remove the densely connected part from the model and rename the last layer as “top”.

  • input_shape – new input shape for the model (do not specify the batch dimension)

Returns

a ResNet34 model

pyeddl.eddl.download_resnet50(top=True, input_shape=[])

Download a ResNet50 model pretrained with imagenet.

Parameters
  • top – If True, remove the densely connected part from the model and rename the last layer as “top”.

  • input_shape – new input shape for the model (do not specify the batch dimension)

Returns

a ResNet50 model

pyeddl.eddl.download_resnet101(top=True, input_shape=[])

Download a ResNet101 model pretrained with imagenet.

Parameters
  • top – If True, remove the densely connected part from the model and rename the last layer as “top”.

  • input_shape – new input shape for the model (do not specify the batch dimension)

Returns

a ResNet101 model

pyeddl.eddl.download_resnet152(top=True, input_shape=[])

Download a ResNet152 model pretrained with imagenet.

Parameters
  • top – If True, remove the densely connected part from the model and rename the last layer as “top”.

  • input_shape – new input shape for the model (do not specify the batch dimension)

Returns

a ResNet152 model

pyeddl.eddl.download_densenet121(top=True, input_shape=[])

Download a DenseNet121 model pretrained with imagenet.

Parameters
  • top – If True, remove the densely connected part from the model and rename the last layer as “top”.

  • input_shape – new input shape for the model (do not specify the batch dimension)

Returns

a DenseNet121 model

Datasets

pyeddl.eddl.exist(name)
pyeddl.eddl.download_mnist()

Download the MNIST dataset.

See: http://yann.lecun.com/exdb/mnist/

pyeddl.eddl.download_cifar10()

Download the CIFAR-10 Dataset.

See: https://www.cs.toronto.edu/~kriz/cifar.html

pyeddl.eddl.download_drive()

Download the DRIVE Dataset.

See: https://drive.grand-challenge.org/

pyeddl.eddl.download_imdb_2000()

Download the IMDB Dataset, 2000 most frequent words.

See: https://ai.stanford.edu/~amaas/data/sentiment/

pyeddl.eddl.download_eutrans()

Download the EuTrans Dataset.

pyeddl.eddl.download_flickr()

Download the Flickr Dataset (small partition).

Accelerators

pyeddl.eddl.download_hlsinf(version, subversion)

Download the HLSinf accelerator.

Parameters
  • version – accelerator version

  • subversion – accelerator subversion

Returns

None

ONNX support

pyeddl.eddl.save_net_to_onnx_file(net, path, seq_len=0)
pyeddl.eddl.import_net_from_onnx_file(path, input_shape=None, mem=0, log_level=LOG_LEVEL.INFO)

Import ONNX Net from file.

If input_shape is specified, also change the net’s input shape (works only for models with one input layer).

Parameters
  • path – path to the file where the net is saved

  • input_shape – shape of the input data (without the batch dimension)

  • mem – memory setting

  • log_level – a LOG_LEVEL value

Returns

Net

pyeddl.eddl.serialize_net_to_onnx_string(net, gradients)
pyeddl.eddl.import_net_from_onnx_string(model_string, mem=0)
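
Example (ONNX round-trip sketch; after importing, it is assumed the net still needs a build() with init_weights=False so the stored weights are preserved):

eddl.save_net_to_onnx_file(net, "model.onnx")
net2 = eddl.import_net_from_onnx_file("model.onnx")
eddl.build(net2, eddl.sgd(0.01), ["soft_cross_entropy"],
           ["categorical_accuracy"], eddl.CS_CPU(),
           False)   # init_weights=False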