pyecvl.ecvl — ecvl API
Data types
- class pyecvl.ecvl.DataType(self: pyecvl._core.ecvl.DataType, arg0: int)
Enum class which defines data types allowed for images.
- float32 = DataType.float32
- float64 = DataType.float64
- int16 = DataType.int16
- int32 = DataType.int32
- int64 = DataType.int64
- int8 = DataType.int8
- none = DataType.none
- uint16 = DataType.uint16
- uint8 = DataType.uint8
- pyecvl.ecvl.DataTypeSize(dt=None)
Get the size in bytes of a given DataType.
With no arguments, get the number of existing DataType members.
- Parameters
dt – a DataType.
- Returns
the DataType size in bytes, or the number of existing DataType members if called with no arguments
- pyecvl.ecvl.DataTypeSignedSize()
Get the number of existing signed DataType members.
- Returns
the number of existing signed DataType members
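For example, a minimal usage sketch of these helpers (the printed counts depend on the ECVL build):
import pyecvl.ecvl as ecvl

# size in bytes of a single element of the given type
assert ecvl.DataTypeSize(ecvl.DataType.uint8) == 1
assert ecvl.DataTypeSize(ecvl.DataType.float64) == 8

# number of DataType members, and of signed ones
print(ecvl.DataTypeSize(), ecvl.DataTypeSignedSize())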
Color types
Device types
Image
- class pyecvl.ecvl.Image(dims, elemtype, channels, colortype, spacings=None, dev=Device.CPU, meta=None)
Image class.
- Variables
elemtype_ – image pixel type, a DataType
elemsize_ – image pixel size in bytes
channels_ – a string describing the image planes layout. Each character provides information on the corresponding channel. The possible values are: ‘x’: horizontal spatial dimension; ‘y’: vertical spatial dimension; ‘z’: depth spatial dimension; ‘c’: color dimension; ‘t’: temporal dimension; ‘o’: any other dimension. For example, “xyc” describes a 2-dimensional image structured in color planes. This could be for example a ColorType.GRAY Image with dims_[2] = 1 or a ColorType.RGB Image with dims_[2] = 3. The ColorType constrains the value of the dimension corresponding to the color channel. Another example is “cxy” with dims_[0] = 3 and ColorType.BGR. In this case the color dimension is the one which changes faster, as it happens in libraries such as OpenCV.
colortype_ – image ColorType. If different from ColorType.none, the channels_ string must contain a ‘c’ and the corresponding dimension must have the appropriate value.
spacings_ – space in mm between consecutive pixels/voxels on each axis (list of floats).
datasize_ – size of image data in bytes.
contiguous_ – whether the image is stored contiguously in memory
meta_ – image metadata
dev_ – image Device
dims_ – image dimensions in pixels/voxels (list of integers)
strides_ – the number of bytes the internal data pointer has to skip to get to the next pixel/voxel in the corresponding dimension
- Parameters
dims – image dimensions
elemtype – pixel type, a DataType
channels – channels string
colortype – a ColorType
spacings – spacings between pixels
dev – image Device
- Add(other, saturate=True)
Add data from another image to this image’s data.
- Parameters
other – other image
saturate – in case of overflow, set values to limit for data type
- Returns
None
- Channels()
Get the number of channels.
- Returns
number of channels
- ConvertTo(dtype, saturate=True)
Convert Image to another DataType.
- Parameters
dtype – target DataType
saturate – whether to apply saturation or not
- Div(other, saturate=True)
Divide data from this image by another image’s data.
- Parameters
other – other image
saturate – in case of overflow, set values to limit for data type
- Returns
None
- GetMeta(key)
Get the metadata value corresponding to key.
- Parameters
key – key string
- Height()
Get the image height.
- Returns
image height
- IsEmpty()
Whether the image contains data or not.
- Returns
True if the image contains data, False otherwise
- IsOwner()
Whether the image owns the data or not.
- Returns
True if the image owns the data, False otherwise
- Mul(other, saturate=True)
Multiply data from this image by another image’s data.
- Parameters
other – other image
saturate – in case of overflow, set values to limit for data type
- Returns
None
- SetMeta(key, value)
Set the metadata value corresponding to key.
- Parameters
key – key string
value – metadata value (integer, float or string)
- Returns
True if a new entry has been inserted, else False
- Sub(other, saturate=True)
Subtract data from another image from this image’s data.
- Parameters
other – other image
saturate – in case of overflow, set values to limit for data type
- Returns
None
- To(dev)
Change the image device.
- Parameters
dev – new Device
- Returns
None
- Width()
Get the image width.
- Returns
image width
- copy()
Create a deep copy of this image.
- Returns
a copy of this image
- static empty()
Create an empty image.
- Returns
an empty image
- static fromarray(array, channels, colortype, spacings=None)
Create an image from a NumPy array.
- Parameters
array – a NumPy array
channels – channels string
colortype – a ColorType
spacings – spacings between pixels
- Returns
an image containing the same data as the input array
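As an illustration, a minimal sketch of building an Image from a NumPy array (the array contents are arbitrary; how the array axes map to dims_ follows the channels string):
import numpy as np
import pyecvl.ecvl as ecvl

# 8-bit data interpreted as an "xyc" (x, y, color) RGB image
a = np.zeros((40, 50, 3), dtype=np.uint8)
img = ecvl.Image.fromarray(a, "xyc", ecvl.ColorType.RGB)

print(img.dims_, img.elemtype_, img.colortype_)
print(img.Width(), img.Height(), img.Channels())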
- pyecvl.ecvl.CopyImage(src, dst, new_type=None)
Copy data from the source image src to the destination image dst.
src and dst cannot be the same image. src cannot be an image with DataType.none. The optional new_type parameter can be used to change the DataType of the dst image.
When the DataType is not specified:
- if dst is empty, src will be directly copied into dst
- if src and dst have different size in memory or different channels and dst is the data owner, the procedure will overwrite dst creating a new image (channels and dimensions will be the same as src, DataType will be the same as dst if not none, or the same as src otherwise)
- if src and dst have different size in memory or different channels and dst is not the data owner, the procedure will throw an exception
- if src and dst have different color types and dst is the data owner, the procedure produces a dst image with the same color type as src
- if src and dst have different color types and dst is not the data owner, the procedure will throw an exception
When the DataType is specified, the function has the same behavior, but dst will have the specified DataType.
- Parameters
src – source image
dst – destination image
new_type – new DataType for the destination image
- Returns
None
- pyecvl.ecvl.ShallowCopyImage(src, dst)
Shallow copy of src to dst (dst will point to the same data).
src and dst cannot be the same image. Even though dst will point to the same data as src, the latter will be the data owner.
- Parameters
src – source image
dst – destination image
- Returns
None
- pyecvl.ecvl.RearrangeChannels(src, dst, channels, new_type=None)
Change image dimensions order.
Changes the order of the src image dimensions, saving the result into the dst image. The new order can be specified as a string through the channels parameter. src and dst can be the same image.
- Parameters
src – source image
dst – destination image
channels – new order for the image channels, as a string
new_type – new DataType for the destination image. If None, the destination image will preserve its type if it is not empty, otherwise it will have the same type as the source image
- pyecvl.ecvl.ConvertTo(src, dst, dtype, saturate=True)
Convert Image to another DataType.
- Parameters
src – source image
dst – destination image
dtype – target DataType
saturate – whether to apply saturation or not
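A minimal usage sketch of the copy and conversion functions ("input.png" is a placeholder path):
import pyecvl.ecvl as ecvl

src = ecvl.ImRead("input.png")
dst = ecvl.Image.empty()

# deep copy, optionally changing the destination DataType
ecvl.CopyImage(src, dst)
ecvl.CopyImage(src, dst, ecvl.DataType.float32)

# conversion into a separate destination image
conv = ecvl.Image.empty()
ecvl.ConvertTo(src, conv, ecvl.DataType.float64)

# reorder the channels, e.g., from "xyc" to "cxy"
chw = ecvl.Image.empty()
ecvl.RearrangeChannels(src, chw, "cxy")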
Arithmetic operations
- pyecvl.ecvl.Neg(src, dst, dst_type=DataType.none, saturate=True)
In-place negation of an image.
Negates every value of src, and stores the result in dst with the specified type.
- Parameters
src – source image
dst – destination image
dst_type – destination image DataType
saturate – whether to apply saturation
- Returns
None
- pyecvl.ecvl.Add(src1, src2, dst, dst_type=DataType.none, saturate=True)
Add two images.
Adds src1 to src2 and stores the result in dst with the specified type.
- Parameters
src1 – source image 1
src2 – source image 2
dst – destination image
dst_type – destination image DataType
saturate – whether to apply saturation
- Returns
None
- pyecvl.ecvl.Sub(src1, src2, dst, dst_type=DataType.none, saturate=True)
Subtract an image from another.
Subtracts src2 from src1 and stores the result in dst with the specified type.
- Parameters
src1 – source image 1
src2 – source image 2
dst – destination image
dst_type – destination image DataType
saturate – whether to apply saturation
- Returns
None
- pyecvl.ecvl.Mul(src1, src2, dst, dst_type=DataType.none, saturate=True)
Multiply two images.
Multiplies src1 by src2 and stores the result in dst with the specified type.
- Parameters
src1 – source image 1
src2 – source image 2
dst – destination image
dst_type – destination image DataType
saturate – whether to apply saturation
- Returns
None
- pyecvl.ecvl.Div(src1, src2, dst, dst_type=DataType.none, saturate=True)
Divide an image by another.
Divides src1 by src2 and stores the result in dst with the specified type.
- Parameters
src1 – source image 1
src2 – source image 2
dst – destination image
dst_type – destination image DataType
saturate – whether to apply saturation
- Returns
None
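A short sketch of the arithmetic functions; the two inputs are placeholder files assumed to have matching dimensions and channels:
import pyecvl.ecvl as ecvl

a = ecvl.ImRead("a.png")
b = ecvl.ImRead("b.png")

# sum with saturation, keeping the source DataType
added = ecvl.Image.empty()
ecvl.Add(a, b, added)

# difference stored as float32 to avoid clipping negative values
diff = ecvl.Image.empty()
ecvl.Sub(a, b, diff, dst_type=ecvl.DataType.float32)

# negation
neg = ecvl.Image.empty()
ecvl.Neg(a, neg)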
Image processing
- class pyecvl.ecvl.ThresholdingType(self: pyecvl._core.ecvl.ThresholdingType, arg0: int)
Enum class representing the possible thresholding types.
- BINARY = ThresholdingType.BINARY
- BINARY_INV = ThresholdingType.BINARY_INV
- class pyecvl.ecvl.InterpolationType(self: pyecvl._core.ecvl.InterpolationType, arg0: int)
Enum class representing the possible interpolation types.
- area = InterpolationType.area
- cubic = InterpolationType.cubic
- lanczos4 = InterpolationType.lanczos4
- linear = InterpolationType.linear
- nearest = InterpolationType.nearest
- class pyecvl.ecvl.MorphType(self: pyecvl._core.ecvl.MorphType, arg0: int)
Enum class representing the possible morph types.
- MORPH_BLACKHAT = MorphType.MORPH_BLACKHAT
- MORPH_CLOSE = MorphType.MORPH_CLOSE
- MORPH_DILATE = MorphType.MORPH_DILATE
- MORPH_ERODE = MorphType.MORPH_ERODE
- MORPH_GRADIENT = MorphType.MORPH_GRADIENT
- MORPH_HITMISS = MorphType.MORPH_HITMISS
- MORPH_OPEN = MorphType.MORPH_OPEN
- MORPH_TOPHAT = MorphType.MORPH_TOPHAT
- class pyecvl.ecvl.InpaintType(self: pyecvl._core.ecvl.InpaintType, arg0: int)
Enum class representing the possible inpaint types.
- INPAINT_NS = InpaintType.INPAINT_NS
- INPAINT_TELEA = InpaintType.INPAINT_TELEA
- class pyecvl.ecvl.BorderType(self: pyecvl._core.ecvl.BorderType, arg0: int)
Enum class representing the possible border types.
- BORDER_CONSTANT = BorderType.BORDER_CONSTANT
- BORDER_REFLECT = BorderType.BORDER_REFLECT
- BORDER_REFLECT_101 = BorderType.BORDER_REFLECT_101
- BORDER_REPLICATE = BorderType.BORDER_REPLICATE
- BORDER_TRANSPARENT = BorderType.BORDER_TRANSPARENT
- BORDER_WRAP = BorderType.BORDER_WRAP
- pyecvl.ecvl.ResizeDim(src, dst, newdims, interp=InterpolationType.linear)
Resize an image to the specified dimensions.
Resizes src and outputs the result in dst.
- Parameters
src – source image
dst – destination image
newdims – list of integers specifying the new size of each dimension, optionally including the depth: [new_width, new_height] or [new_width, new_height, new_depth]
interp – InterpolationType to be used
- Returns
None
- pyecvl.ecvl.ResizeScale(src, dst, scales, interp=InterpolationType.linear)
Resize an image by scaling the dimensions by a given scale factor.
Resizes src and outputs the result in dst.
- Parameters
src – source image
dst – destination image
scales – list of floats that specifies the scale to apply to each dimension. The length of the list must match the src image dimensions, excluding the color channel
interp – InterpolationType to be used
- Returns
None
- pyecvl.ecvl.Flip2D(src, dst)
Flip an image vertically.
- Parameters
src – source image
dst – destination image
- Returns
None
- pyecvl.ecvl.Mirror2D(src, dst)
Flip an image horizontally.
- Parameters
src – source image
dst – destination image
- Returns
None
- pyecvl.ecvl.Rotate2D(src, dst, angle, center=None, scale=1.0, interp=InterpolationType.linear)
Rotate an image without changing its dimensions.
Rotates an image clockwise by a given angle (in degrees), with respect to a given center. The values of unknown pixels in the output image are set to 0. The output image is guaranteed to have the same dimensions as the input one. An optional scale parameter can be provided: this won’t change the output image size, but the image will be scaled during rotation.
- Parameters
src – source image
dst – destination image
angle – the rotation angle in degrees
center – a list of floats representing the coordinates of the rotation center. If None, the center of the image is used
scale – scaling factor
interp – InterpolationType to be used
- Returns
None
- pyecvl.ecvl.RotateFullImage2D(src, dst, angle, scale=1.0, interp=InterpolationType.linear)
Rotate an image resizing the output to fit all the pixels.
Rotates an image clockwise by a given angle (in degrees). The values of unknown pixels in the output image are set to 0. The output Image is guaranteed to contain all the pixels of the rotated image. Thus, its dimensions can be different from those of the input one. An optional scale parameter can be provided: if set, the image will also be scaled.
- Parameters
src – source image
dst – destination image
angle – the rotation angle in degrees
scale – scaling factor
interp – InterpolationType to be used
- Returns
None
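As an illustration, a sketch of a small geometric pipeline ("photo.jpg" is a placeholder):
import pyecvl.ecvl as ecvl

src = ecvl.ImRead("photo.jpg")
resized = ecvl.Image.empty()
rotated = ecvl.Image.empty()
flipped = ecvl.Image.empty()

# resize to 256x256 pixels with bilinear interpolation
ecvl.ResizeDim(src, resized, [256, 256], ecvl.InterpolationType.linear)

# rotate by 30 degrees around the image center, keeping the original size
ecvl.Rotate2D(resized, rotated, 30)

# flip upside down
ecvl.Flip2D(rotated, flipped)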
- pyecvl.ecvl.ChangeColorSpace(src, dst, new_type)
Copy the src image into the dst image, changing the color space.
Source and destination can be the same image.
- Parameters
src – source image
dst – destination image
new_type – a ColorType specifying the new color space
- Returns
None
- pyecvl.ecvl.Threshold(src, dst, thresh, maxval, thresh_type=ThresholdingType.BINARY)
Apply a fixed threshold to an image.
This function can be used to get a binary image out of a grayscale (ColorType.GRAY) image or to remove noise, filtering out pixels with too small or too large values. Pixels up to the thresh value will be set to 0, others will be set to maxval if thresh_type is ThresholdingType.BINARY. The opposite will happen if thresh_type is set to ThresholdingType.BINARY_INV.
- Parameters
src – source image
dst – destination image
thresh – threshold value
maxval – maximum values in the thresholded image
thresh_type – ThresholdingType to be applied
- Returns
None
- pyecvl.ecvl.MultiThreshold(src, dst, thresholds, minval=0, maxval=255)
Apply multiple thresholds to an image.
The resulting image is quantized based on the provided thresholds values. Output values will range uniformly from minval to maxval.
- Parameters
src – source image
dst – destination image
thresholds – threshold values
minval – minimum value in the output image
maxval – maximum value in the output image
- Returns
None
- pyecvl.ecvl.OtsuThreshold(src)
Calculate the Otsu thresholding value.
The image must be ColorType.GRAY.
- Parameters
src – source image
- Returns
Otsu threshold value
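For example, a sketch of binarizing a grayscale image with its Otsu threshold ("scan.png" is a placeholder):
import pyecvl.ecvl as ecvl

img = ecvl.ImRead("scan.png", ecvl.ImReadMode.GRAYSCALE)
binary = ecvl.Image.empty()

t = ecvl.OtsuThreshold(img)  # requires a ColorType.GRAY image
ecvl.Threshold(img, binary, t, 255, ecvl.ThresholdingType.BINARY)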
- pyecvl.ecvl.OtsuMultiThreshold(src, n_thresholds=2)
Calculate the Otsu thresholding values.
The image must be ColorType.GRAY. The number of thresholds to be found is defined by the n_thresholds parameter (default is 2).
- Parameters
src – source image
n_thresholds – number of thresholds to find
- Returns
Otsu threshold values (list of integers)
- pyecvl.ecvl.Filter2D(src, dst, ker, type=DataType.none)
Convolve an image with a kernel.
- Parameters
src – source image
dst – destination image
ker – convolution kernel
type – destination DataType. If set to DataType.none, the DataType of src is used
- Returns
None
- pyecvl.ecvl.SeparableFilter2D(src, dst, kerX, kerY, type=DataType.none)
Convolve an image with a couple of 1-dimensional kernels.
- Parameters
src – source image
dst – destination image
kerX – convolution kernel for the X axis.
kerY – convolution kernel for the Y axis.
type – destination DataType. If set to DataType.none, the DataType of src is used
- Returns
None
- pyecvl.ecvl.GaussianBlur(src, dst, sizeX, sizeY, sigmaX, sigmaY=0)
Blurs an image using a Gaussian kernel.
- Parameters
src – source image
dst – destination image
sizeX – horizontal size of the kernel. Must be positive and odd
sizeY – vertical size of the kernel. Must be positive and odd
sigmaX – Gaussian kernel standard deviation in the X direction.
sigmaY – Gaussian kernel standard deviation in the Y direction. If zero, sigmaX is used. If both are zero, they are calculated from sizeX and sizeY.
- Returns
None
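A brief sketch of smoothing and noise injection (parameter values are only examples; the input path is a placeholder):
import pyecvl.ecvl as ecvl

src = ecvl.ImRead("input.png")
blurred = ecvl.Image.empty()
noisy = ecvl.Image.empty()

# 5x5 Gaussian kernel; with sigmaX=0 the sigmas are computed from the kernel size
ecvl.GaussianBlur(src, blurred, 5, 5, 0)

# additive Laplace noise, std_dev around 255 * 0.05 for uint8 images
ecvl.AdditiveLaplaceNoise(blurred, noisy, 255 * 0.05)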
- pyecvl.ecvl.GaussianBlur2(src, dst, sigma)
Blurs an image using a Gaussian kernel, specified via a single standard deviation value.
- Parameters
src – source image
dst – destination image
sigma – Gaussian kernel standard deviation
- pyecvl.ecvl.AdditiveLaplaceNoise(src, dst, std_dev)
Adds Laplace distributed noise to an image.
- Parameters
src – source image
dst – destination image
std_dev – standard deviation of the noise-generating distribution. Suggested values are around 255 * 0.05 for uint8 images
- Returns
None
- pyecvl.ecvl.AdditivePoissonNoise(src, dst, lambda_)
Adds Poisson distributed noise to an image.
- Parameters
src – source image
dst – destination image
lambda_ – lambda parameter of the Poisson distribution
- Returns
None
- pyecvl.ecvl.GammaContrast(src, dst, gamma)
Adjust contrast by scaling each pixel value X to 255 * ((X/255) ** gamma).
- Parameters
src – source image
dst – destination image
gamma – exponent for the contrast adjustment
- Returns
None
- pyecvl.ecvl.CoarseDropout(src, dst, p, drop_size, per_channel)
Set rectangular areas within an image to zero.
- Parameters
src – source image
dst – destination image
p – probability of any rectangle being set to zero
drop_size – size of rectangles in percentage of the input image
per_channel – whether to use the same value for all channels
- Returns
None
- pyecvl.ecvl.IntegralImage(src, dst, dst_type=DataType.float64)
Calculate the integral image of the source image.
The src image must be ColorType.GRAY, “xyc” and DataType.uint8.
- Parameters
src – source image
dst – destination image
dst_type – DataType of the destination image
- Returns
None
- pyecvl.ecvl.NonMaximaSuppression(src, dst)
Calculate the non-maxima suppression of the source image.
The src image must be ColorType.GRAY, “xyc” and DataType.int32.
- Parameters
src – source image
dst – destination image
- Returns
None
- pyecvl.ecvl.GetMaxN(src, n)
Get the indices of the n maximum values of an image.
The src image must be ColorType.GRAY, “xyc” and DataType.int32.
- Parameters
src – source image
n – how many values to return
- Returns
list of pairs corresponding to the coordinates of the max values
- pyecvl.ecvl.ConnectedComponentsLabeling(src, dst)
Label connected components in the input image.
The src image must be “xyc”, with only one color channel, and DataType.uint8.
- Parameters
src – source image
dst – destination image
- Returns
number of different objects, including the background
- pyecvl.ecvl.FindContours(src)
Find contours in the input image.
The src image must be “xyc”, with only one color channel, and DataType.uint8.
- Parameters
src – source image
- Returns
image contours
- pyecvl.ecvl.Stack(src, dst)
Stack a sequence of images along a new depth dimension.
Images must be “xyc” and their dimensions must match.
- Parameters
src – list of source images
dst – destination image
- Returns
None
- pyecvl.ecvl.HConcat(src, dst)
Concatenate images horizontally.
Images must be “xyc” and have the same number of rows.
- Parameters
src – list of source images
dst – destination image
- Returns
None
- pyecvl.ecvl.VConcat(src, dst)
Concatenate images vertically.
Images must be “xyc” and have the same number of columns.
- Parameters
src – list of source images
dst – destination image
- Returns
None
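As an illustration, a sketch of combining a list of equally sized “xyc” images (paths are placeholders):
import pyecvl.ecvl as ecvl

imgs = [ecvl.ImRead(p) for p in ("a.png", "b.png", "c.png")]
row = ecvl.Image.empty()
col = ecvl.Image.empty()
vol = ecvl.Image.empty()

ecvl.HConcat(imgs, row)  # side by side (same number of rows)
ecvl.VConcat(imgs, col)  # stacked vertically (same number of columns)
ecvl.Stack(imgs, vol)    # new depth dimension (same dimensions)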
- pyecvl.ecvl.Morphology(src, dst, op, kernel, anchor=None, iterations=1, border_type=BorderType.BORDER_CONSTANT, border_value=0)
Perform morphological transformations based on erosion and dilation.
- Parameters
src – input image
dst – output image
op – a MorphType
kernel – structuring element
anchor – anchor position within the kernel. A negative value means that the anchor is at the center of the kernel
iterations – number of times erosion and dilation are applied
border_type – pixel extrapolation method, see BorderType. BorderType.BORDER_WRAP is not supported
border_value – border value in case of a constant border
- Returns
None
- pyecvl.ecvl.Inpaint(src, dst, inpaintMask, inpaintRadius, flag=InpaintType.INPAINT_TELEA)
Restore a region in an image using the region’s neighborhood.
- Parameters
src – input image
dst – output image
inpaintMask – an Image with 1 channel and DataType.uint8. Non-zero pixels indicate the area that needs to be inpainted.
inpaintRadius – radius of a circular neighborhood of each point inpainted that is considered by the algorithm.
flag – inpainting method (an InpaintType)
- Returns
None
- pyecvl.ecvl.MeanStdDev(src)
Calculate the mean and the standard deviation of an image.
- Parameters
src – input image
- Returns
a (mean, stddev) tuple
- pyecvl.ecvl.Transpose(src, dst)
Swap rows and columns of an image.
- Parameters
src – input image
dst – output image
- Returns
None
- pyecvl.ecvl.GridDistortion(src, dst, num_steps=5, distort_limit=None, interp=InterpolationType.linear, border_type=BorderType.BORDER_REFLECT_101, border_value=0, seed=None)
Divide the image into a cell grid and randomly stretch or reduce each cell.
- Parameters
src – input image
dst – output image
num_steps – grid cell count on each side
distort_limit – distortion steps range
interp – InterpolationType to be used
border_type – pixel extrapolation method, see BorderType
border_value – padding value if border_type is BorderType.BORDER_CONSTANT
seed – seed for the random number generator
- Returns
None
- pyecvl.ecvl.ElasticTransform(src, dst, alpha=34, sigma=4, interp=InterpolationType.linear, border_type=BorderType.BORDER_REFLECT_101, border_value=0, seed=None)
Elastic deformation of input image.
- Parameters
src – input image
dst – output image
alpha – scaling factor that controls the intensity of the deformation
sigma – Gaussian kernel standard deviation
interp – InterpolationType to be used. If src is DataType.int8 or DataType.int32, InterpolationType.nearest is used
border_type – pixel extrapolation method, see BorderType
border_value – padding value if border_type is BorderType.BORDER_CONSTANT
seed – seed for the random number generator
- Returns
None
- pyecvl.ecvl.OpticalDistortion(src, dst, distort_limit=None, shift_limit=None, interp=InterpolationType.linear, border_type=BorderType.BORDER_REFLECT_101, border_value=0, seed=None)
Barrel / pincushion distortion.
- Parameters
src – input image
dst – output image
distort_limit – distortion intensity range
shift_limit – image shifting range
interp – InterpolationType to be used
border_type – pixel extrapolation method, see BorderType
border_value – padding value if border_type is BorderType.BORDER_CONSTANT
seed – seed for the random number generator
- Returns
None
- pyecvl.ecvl.Salt(src, dst, p, per_channel=False, seed=None)
Add salt noise (white pixels) to the input image.
- Parameters
src – input image
dst – output image
p – probability of replacing a pixel with salt noise
per_channel – if True, apply channel-wise noise
seed – seed for the random number generator
- Returns
None
- pyecvl.ecvl.Pepper(src, dst, p, per_channel=False, seed=None)
Add pepper noise (black pixels) to the input image.
- Parameters
src – input image
dst – output image
p – probability of replacing a pixel with pepper noise
per_channel – if True, apply channel-wise noise
seed – seed for the random number generator
- Returns
None
- pyecvl.ecvl.SaltAndPepper(src, dst, p, per_channel=False, seed=None)
Add salt and pepper noise (white and black pixels) to the input image.
White and black pixels are equally likely.
- Parameters
src – input image
dst – output image
p – probability of replacing a pixel with salt or pepper noise
per_channel – if True, apply channel-wise noise
seed – seed for the random number generator
- Returns
None
- pyecvl.ecvl.SliceTimingCorrection(src, dst, odd=False, down=False)
Correct each voxel’s time-series.
Slice timing correction works by using (Hanning-windowed) sinc interpolation to shift each time-series by an appropriate fraction of a TR relative to the middle of the TR period. The default slice acquisition order is from the bottom of the brain to the top.
- Parameters
src – input image. Channels must be "xyzt" and the image must have spacings (distance between consecutive voxels on each dimension)
dst – output image
odd – True if odd slices were acquired with interleaved order (0, 2, 4, ..., 1, 3, 5, ...)
down – True if slices were acquired from the top of the brain to the bottom
- pyecvl.ecvl.Moments(src, dst, order=3, type_=DataType.float64)
Calculate raw image moments of the source image up to the specified order.
Moments are stored in the output image in the same order as for source channels. The output image will be on the same device as the source image.
- Parameters
src – input image. It must be a grayscale (ColorType.GRAY) or a data (ColorType.none) image.
dst – output image (ColorType.none) containing the computed raw image moments. The size of the Image will be (order + 1, order + 1)
order – moments order
type_ – data type for the output image
- pyecvl.ecvl.CentralMoments(src, moments, center, order=3, type_=DataType.float64)
Calculate central moments of the source image up to the specified order.
- Parameters
src – input image. It must be a grayscale (ColorType.GRAY) or a data (ColorType.none) image.
moments – output data (ColorType.none) image containing the computed moments. The size of the Image will be (order + 1, order + 1)
center – center coordinates (list of floats). len(center) and len(src.dims_) must match. The source axes order must be the same used to specify the center coordinates
order – moments order
type_ – data type for the output image
- pyecvl.ecvl.DrawEllipse(src, center, axes, angle, color, thickness=1)
Draw an ellipse over the specified Image.
- Parameters
src – input (and output) image.
center – center of the ellipse
axes – ellipse axes half size
angle – rotation angle of the ellipse
color – ellipse color, e.g., [255] or [5, 5, 5] (RGB)
thickness – ellipse border thickness. If negative, all the pixels of the ellipse will be filled with the specified color value
- pyecvl.ecvl.DropColorChannel(src)
Remove color channel from the input image.
Remove the color channel (“c”) from the specified input image, modifying all other attributes accordingly. This function can only be applied to images with ColorType.GRAY, i.e., having the color channel dimension equal to 1.
- Parameters
src – input image.
- pyecvl.ecvl.Normalize(src, dst, mean, std)
Normalize the input image with mean and standard deviation.
For each pixel, subtract mean and divide by std. Useful to normalize a dataset, getting the data within a range.
For xyc input images, mean and std can be lists of floats, representing separate mean and standard deviation for each color channel.
- Parameters
src – input image.
dst – output image.
mean – mean to use for normalization.
std – standard deviation to use for normalization.
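For example, a sketch of inspecting an image's statistics and normalizing it (the path and the explicit mean/std values are placeholders):
import pyecvl.ecvl as ecvl

img = ecvl.ImRead("sample.png", ecvl.ImReadMode.GRAYSCALE)
norm = ecvl.Image.empty()

print(ecvl.MeanStdDev(img))             # (mean, stddev) of the input
ecvl.Normalize(img, norm, 127.5, 73.0)  # example mean and std values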
- pyecvl.ecvl.CenterCrop(src, dst, size)
Crop the given image at the center.
- Parameters
src – input image.
dst – output image.
size – list of integers [w, h] specifying the desired output size
- pyecvl.ecvl.ScaleTo(src, dst, new_min, new_max)
Linearly scale an Image to a new range.
- Parameters
src – input image.
dst – output image.
new_min – new minimum value
new_max – new maximum value
- pyecvl.ecvl.ScaleFromTo(src, dst, old_min, old_max, new_min, new_max)
Linearly scale an Image to a new range.
- Parameters
src – input image.
dst – output image.
old_min – old minimum value
old_max – old maximum value
new_min – new minimum value
new_max – new maximum value
- pyecvl.ecvl.Pad(src, dst, padding, border_type=BorderType.BORDER_CONSTANT, border_value=0)
Pad an Image.
Add a border to the four sides of the image. The border can be equal for all sides, equal for top/bottom and for left/right, or different for each side.
- Parameters
src – input image
dst – output image
padding – list of integers representing the border sizes. It can have one element (same padding for all sides), two elements (top/bottom, left/right) or four (top, bottom, left, right)
border_type – a BorderType
border_value – pixel value for the border if border_type is BORDER_CONSTANT
- pyecvl.ecvl.RandomCrop(src, dst, size, pad_if_needed=False, border_type=BorderType.BORDER_CONSTANT, border_value=0, seed=None)
Crop the source Image to the specified size at a random location.
- Parameters
src – input image.
dst – output image.
size – list of integers representing the desired (width, height) of the output Image
pad_if_needed – if the desired size is bigger than the src image and pad_if_needed is True, pad the image; otherwise, throw an exception
border_type – BorderType to use if pad_if_needed is True
border_value – pixel value for the border if border_type is BORDER_CONSTANT and pad_if_needed is True
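A brief sketch of the cropping and padding helpers (sizes and the input path are arbitrary):
import pyecvl.ecvl as ecvl

img = ecvl.ImRead("photo.jpg")
center = ecvl.Image.empty()
padded = ecvl.Image.empty()
rnd = ecvl.Image.empty()

ecvl.CenterCrop(img, center, [100, 100])  # central 100x100 region
ecvl.Pad(img, padded, [10, 20])           # 10 px top/bottom, 20 px left/right
ecvl.RandomCrop(img, rnd, [64, 64], pad_if_needed=True)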
Metadata
DeepHealth dataset parser
- class pyecvl.ecvl.SplitType(self: pyecvl._core.ecvl.SplitType, arg0: int)
Enum class representing the supported split types.
- test = SplitType.test
- training = SplitType.training
- validation = SplitType.validation
- class pyecvl.ecvl.Task(self: pyecvl._core.ecvl.Task, arg0: int)
Enum class representing allowed tasks for a dataset.
- classification = Task.classification
- segmentation = Task.segmentation
- class pyecvl.ecvl.Sample(*args, **kwargs)
Sample image in a dataset.
Provides the information to describe a dataset sample.
- Variables
location_ – absolute path(s) of the sample (list of strings)
label_path_ – absolute path of the sample’s ground truth
label_ – sample labels (list of integers)
values_ – feature index-to-value mapping
size_ – original x and y dimensions of the sample
Overloaded function.
__init__(self: pyecvl._core.ecvl.Sample, arg0: pyecvl._core.ecvl.Sample) -> None
__init__(self: pyecvl._core.ecvl.Sample) -> None
- LoadImage(ctype=ColorType.RGB, is_gt=False)
Return the dataset image for this sample.
Opens the sample image from location_ or label_path_, depending on the is_gt parameter.
- Parameters
ctype – ColorType of the returned image
is_gt – whether to load the sample image or its ground truth
- Returns
sample image
- class pyecvl.ecvl.Split(split_name='', samples_indices=None, drop_last=False, no_label=False)
Represents a subset of a dataset.
- Variables
split_name_ – split name (string)
split_type_ – split type (SplitType), if the split name is “training”, “validation” or “test”
samples_indices_ – sample indices of the split (list of integers)
drop_last_ – whether to drop elements that don’t fit in the batch (boolean)
num_batches_ – number of batches in this split (integer)
last_batch_ – dimension of the last batch (integer)
no_label_ – whether the split has samples with labels (boolean)
- Parameters
split_name – name of the split
samples_indices – indices of samples within the split
drop_last – whether to drop elements that don’t fit in the batch
no_label – whether the split has samples with labels
- SetLastBatch(batch_size)
Set the size of the last batch for the given batch size.
- Parameters
batch_size – number of samples for each batch (except the last one, if drop_last is False and there is a remainder)
- SetNumBatches(batch_size)
Set the number of batches for the given batch size.
- Parameters
batch_size – number of samples for each batch (except the last one, if drop_last is False and there is a remainder)
- class pyecvl.ecvl.Dataset(filename)
DeepHealth Dataset.
Implements the DeepHealth Dataset Format.
- Variables
name_ – dataset name
description_ – dataset description
classes_ – classes available in the dataset (list of strings)
features_ – features available in the dataset (list of strings)
samples_ – list of dataset samples
split_ – dataset splits
current_split_ – current split from which images are loaded
task_ – dataset task (classification or segmentation)
- Parameters
filename – path to the dataset file
- Dump(path)
Dump the dataset to YAML following the DeepHealth Dataset Format.
The YAML file is saved into the dataset’s root directory. Sample paths are relative to the dataset’s root directory.
- Parameters
path – output file path
- Returns
None
- GetLocations()
Get the locations of all samples in the dataset.
Note that a single sample can have multiple locations (e.g., different acquisitions of the same image).
- Returns
a list of lists of image paths
- GetSplit(split=-1)
Get the image indices of the specified split.
By default, return the image indices of the current split.
- Parameters
split – index, name or SplitType of the split to get
- Returns
list of integers representing the image indices
- SetSplit(split)
Set the current split.
- Parameters
split – index, name or SplitType of the split to set
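For example, a sketch of reading a dataset file and loading the images of its training split ("dataset.yml" is a placeholder):
import pyecvl.ecvl as ecvl

d = ecvl.Dataset("dataset.yml")
print(d.name_, d.classes_)

d.SetSplit(ecvl.SplitType.training)
for i in d.GetSplit():
    sample = d.samples_[i]
    img = sample.LoadImage(ecvl.ColorType.RGB)
    print(sample.location_, img.dims_)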
Augmentations
- class pyecvl.ecvl.AugmentationParam(*args, **kwargs)
Augmentation parameters which must be randomly generated within a range.
- Variables
min_ – minimum value for the random range
max_ – maximum value for the random range
value_ – generated parameter value
Overloaded function.
__init__(self: pyecvl._core.ecvl.AugmentationParam) -> None
__init__(self: pyecvl._core.ecvl.AugmentationParam, min: float, max: float) -> None
- GenerateValue()
Generate the random value between min_ and max_.
- static SetSeed(seed)
Set a fixed seed for the random generated values.
Useful to reproduce experiments with same augmentations.
- Parameters
seed – seed value
- Returns
None
- class pyecvl.ecvl.AugmentationFactory
Creates augmentations from text strings.
If only one argument is supplied, it needs to include the augmentation’s name, e.g.:
AugmentationFactory.create('AugFlip p=0.5')
If two arguments are supplied, the first is the augmentation’s name, e.g.:
AugmentationFactory.create('AugFlip', 'p=0.5')
- static create(*args, **kwargs)
Overloaded function.
create(is: str) -> pyecvl._core.ecvl.Augmentation
create(name: str, is: str) -> pyecvl._core.ecvl.Augmentation
- class pyecvl.ecvl.SequentialAugmentationContainer(augs)
A container for multiple augmentations to be applied in sequence.
- Parameters
augs – list of augmentations to be applied
- static fromtext(txt)
Create a SequentialAugmentationContainer from a text description, e.g.:
txt = '''\
AugFlip p=0.2
AugMirror p=0.4
end
'''
c = SequentialAugmentationContainer(txt)
- class pyecvl.ecvl.OneOfAugmentationContainer(p, augs)
A container for multiple augmentations, from which one is randomly chosen.
The chosen augmentation will be applied with a user-specified probability.
- Parameters
p – probability of applying the chosen augmentation
augs – list of augmentations
- static fromtext(txt)
Create a OneOfAugmentationContainer from a text description, e.g.:
txt = '''\
p=0.7
AugFlip p=0.2
AugMirror p=0.4
end
'''
c = OneOfAugmentationContainer(txt)
- class pyecvl.ecvl.AugRotate(angle, center=None, scale=1.0, interp=InterpolationType.linear, gt_interp=InterpolationType.nearest)
Augmentation wrapper for Rotate2D.
- Parameters
angle – range of degrees [min, max] to randomly select from
center – a list of floats representing the coordinates of the rotation center. If None, the center of the image is used
scale – scaling factor
interp – InterpolationType to be used
gt_interp – InterpolationType to be used for ground truth
- static fromtext(txt)
Create an AugRotate from a text description, e.g.:
a = AugRotate('angle=[30, 50] center=(2, 3) scale=1.1 interp="nearest"')
- class pyecvl.ecvl.AugResizeDim(dims, interp=InterpolationType.linear, gt_interp=InterpolationType.nearest)
Augmentation wrapper for ResizeDim.
- Parameters
dims – list of integers that specifies the new size of each dimension
interp – InterpolationType to be used
gt_interp – InterpolationType to be used for ground truth
- static fromtext(txt)
Create an AugResizeDim from a text description, e.g.:
a = AugResizeDim('dims=(4, 3) interp="linear"')
- class pyecvl.ecvl.AugResizeScale(scale, interp=InterpolationType.linear, gt_interp=InterpolationType.nearest)
Augmentation wrapper for ResizeScale.
- Parameters
scale – list of floats that specifies the scale to apply to each dimension
interp – InterpolationType to be used
gt_interp – InterpolationType to be used for ground truth
- static fromtext(txt)
Create an AugResizeScale from a text description, e.g.:
a = AugResizeScale('scale=(0.5, 0.5) interp="linear"')
- class pyecvl.ecvl.AugFlip(p=0.5)
Augmentation wrapper for Flip2D.
- Parameters
p – probability of each image to get flipped
- static fromtext(txt)
Create an AugFlip from a text description, e.g.:
a = AugFlip('p=0.5')
- class pyecvl.ecvl.AugMirror(p=0.5)
Augmentation wrapper for Mirror2D.
- Parameters
p – probability of each image to get mirrored
- static fromtext(txt)
Create an AugMirror from a text description, e.g.:
a = AugMirror('p=0.5')
- class pyecvl.ecvl.AugGaussianBlur(sigma)
Augmentation wrapper for GaussianBlur.
- Parameters
sigma – sigma range [min, max] to randomly select from
- static fromtext(txt)
Create an AugGaussianBlur from a text description, e.g.:
a = AugGaussianBlur('sigma=[0.2, 0.4]')
- class pyecvl.ecvl.AugAdditiveLaplaceNoise(std_dev)
Augmentation wrapper for AdditiveLaplaceNoise.
- Parameters
std_dev – range of values [min, max] to randomly select the standard deviation of the noise-generating distribution. Suggested values are around 255 * 0.05 for uint8 images
- static fromtext(txt)
Create an AugAdditiveLaplaceNoise from a text description, e.g.:
a = AugAdditiveLaplaceNoise('std_dev=[12.5, 23.1]')
- class pyecvl.ecvl.AugAdditivePoissonNoise(lambda_)
Augmentation wrapper for AdditivePoissonNoise.
- Parameters
lambda_ – range of values [min, max] to randomly select the lambda of the noise-generating distribution. Suggested values are around 0.0 to 10.0
- static fromtext(txt)
Create an AugAdditivePoissonNoise from a text description, e.g.:
a = AugAdditivePoissonNoise('lambda=[2.0, 3.0]')
- class pyecvl.ecvl.AugGammaContrast(gamma)
Augmentation wrapper for GammaContrast.
- Parameters
gamma – range of values [min, max] to randomly select the exponent for the contrast adjustment. Suggested values are around 0.5 to 2.0
- static fromtext(txt)
Create an AugGammaContrast from a text description, e.g.:
a = AugGammaContrast('gamma=[3, 4]')
- class pyecvl.ecvl.AugCoarseDropout(p, drop_size, per_channel)
Augmentation wrapper for CoarseDropout.
- Parameters
p – range of values [min, max] to randomly select the probability of any rectangle being set to zero
drop_size – range of values [min, max] to randomly select the size of rectangles in percentage of the input image
per_channel – probability of each image to use the same value for all channels of a pixel
- static fromtext(txt)
Create an AugCoarseDropout from a text description, e.g.:
a = AugCoarseDropout('p=[0.5, 0.7] drop_size=[0.1, 0.2] per_channel=0.4')
- class pyecvl.ecvl.AugTranspose(p=0.5)
Augmentation wrapper for Transpose.
- Parameters
p – probability of each image to get transposed.
- static fromtext(txt)
Create an AugTranspose from a text description, e.g.:
a = AugTranspose('p=0.5')
- class pyecvl.ecvl.AugBrightness(beta)
Augmentation wrapper for brightness adjustment.
- Parameters
beta – range of values [min, max] to randomly select from for the brightness adjustment. Suggested values are around 0 to 100
- static fromtext(txt)
Create an AugBrightness from a text description, e.g.:
a = AugBrightness('beta=[30, 60]')
- class pyecvl.ecvl.AugGridDistortion(num_steps, distort_limit, interp=InterpolationType.linear, border_type=BorderType.BORDER_REFLECT_101, border_value=0)
Augmentation wrapper for GridDistortion.
- Parameters
num_steps – range of values [min, max] to randomly select the number of grid cells on each side
distort_limit – range of values [min, max] to randomly select the distortion steps
interp – InterpolationType to be used
border_type – pixel extrapolation method, see BorderType
border_value – padding value if border_type is BorderType.BORDER_CONSTANT
- static fromtext(txt)
Create an AugGridDistortion from a text description, e.g.:
a = AugGridDistortion('num_steps=[5,10] distort_limit=[-0.2,0.2] interp="linear" border_type="reflect_101" border_value=0')
- class pyecvl.ecvl.AugElasticTransform(alpha, sigma, interp=InterpolationType.linear, border_type=BorderType.BORDER_REFLECT_101, border_value=0)
Augmentation wrapper for ElasticTransform.
- Parameters
alpha – range of values [min, max] to randomly select the scaling factor that controls the intensity of the deformation
sigma – range of values [min, max] to randomly select the Gaussian kernel standard deviation
interp – InterpolationType to be used
border_type – pixel extrapolation method, see BorderType
border_value – padding value if border_type is BorderType.BORDER_CONSTANT
- static fromtext(txt)
Create an AugElasticTransform from a text description, e.g.:
a = AugElasticTransform('alpha=[34,60] sigma=[4,6] interp="linear" border_type="reflect_101" border_value=0')
- class pyecvl.ecvl.AugOpticalDistortion(distort_limit, shift_limit, interp=InterpolationType.linear, border_type=BorderType.BORDER_REFLECT_101, border_value=0)
Augmentation wrapper for OpticalDistortion.
- Parameters
distort_limit – range of values [min, max] to randomly select the distortion steps
shift_limit – range of values [min, max] to randomly select the image shifting
interp – InterpolationType to be used
border_type – pixel extrapolation method, see BorderType
border_value – padding value if border_type is BorderType.BORDER_CONSTANT
- static fromtext(txt)
Create an AugOpticalDistortion from a text description, e.g.:
a = AugOpticalDistortion('distort_limit=[-0.2,0.2] shift_limit=[-0.4,0.4] interp="linear" border_type="reflect_101" border_value=0')
- class pyecvl.ecvl.AugSalt(p, per_channel)
Augmentation wrapper for Salt.
- Parameters
p – range of values [min, max] for the probability of any pixel to be set to white
per_channel – probability to use the same value for all channels
- static fromtext(txt)
Create an AugSalt from a text description, e.g.:
a = AugSalt('p=[0.1,0.3] per_channel=0.5')
- class pyecvl.ecvl.AugPepper(p, per_channel)
Augmentation wrapper for Pepper.
- Parameters
p – range of values [min, max] for the probability of any pixel to be set to black
per_channel – probability to use the same value for all channels
- static fromtext(txt)
Create an AugPepper from a text description, e.g.:
a = AugPepper('p=[0.1,0.3] per_channel=0.5')
- class pyecvl.ecvl.AugSaltAndPepper(p, per_channel)
Augmentation wrapper for SaltAndPepper.
- Parameters
p – range of values [min, max] for the probability of any pixel to be set to white or black
per_channel – probability to use the same value for all channels
- static fromtext(txt)
Create an AugSaltAndPepper from a text description, e.g.:
a = AugSaltAndPepper('p=[0.1,0.3] per_channel=0.5')
- class pyecvl.ecvl.AugNormalize(mean, std)
Augmentation wrapper for Normalize.
For xyc input images, mean and std can be lists of floats, representing separate mean and standard deviation for each color channel.
- Parameters
mean – mean to use for normalization
std – standard deviation to use for normalization
- static fromtext(txt)
Create an AugNormalize from a text description, e.g.:
a = AugNormalize('mean=20 std=5.5')
Separate mean and std for xyc images:
a = AugNormalize('mean=(20,19,21) std=(5,5.5,6)')
- class pyecvl.ecvl.AugCenterCrop(size=None)
Augmentation wrapper for CenterCrop.
If size is None, the crop size is inferred from the minimum image dimension.
- Parameters
size – list of integers [w, h] specifying the output size
- static fromtext(txt)
Create an AugCenterCrop from a text description, e.g.:
a = AugCenterCrop('size=(10, 20)')
- class pyecvl.ecvl.AugToFloat32(divisor=1.0, divisor_gt=1.0)
Augmentation ToFloat32.
Converts an Image (and ground truth) to DataType.float32, dividing it by the divisor (or divisor_gt) parameter.
- Parameters
divisor – divisor for the image
divisor_gt – divisor for the ground truth
- static fromtext(txt)
Create an AugToFloat32 from a text description, e.g.:
a = AugToFloat32('divisor=2. divisor_gt=3.')
- class pyecvl.ecvl.AugDivBy255
Augmentation DivBy255.
Divides an Image (and ground truth) by 255.
- static fromtext(txt)
Create an AugDivBy255 from a text description, e.g.:
a = AugDivBy255('')
- class pyecvl.ecvl.AugScaleTo(new_min, new_max)
Augmentation wrapper for ScaleTo.
- Parameters
new_min – new minimum value
new_max – new maximum value
- static fromtext(txt)
Create an AugScaleTo from a text description, e.g.:
a = AugScaleTo('new_min=1 new_max=255')
- class pyecvl.ecvl.AugScaleFromTo(old_min, old_max, new_min, new_max)
Augmentation wrapper for ScaleFromTo.
- Parameters
old_min – old minimum value
old_max – old maximum value
new_min – new minimum value
new_max – new maximum value
- static fromtext(txt)
Create an AugScaleFromTo from a text description, e.g.:
a = AugScaleFromTo('old_min=0 old_max=255 new_min=1 new_max=254')
- class pyecvl.ecvl.AugRandomCrop(size, border_type=BorderType.BORDER_CONSTANT, border_value=0)
Augmentation wrapper for RandomCrop.
- Parameters
size – list of integers [w, h] specifying the output size
border_type – BorderType to use for pixel extrapolation if the desired size is bigger than the src image
border_value – pixel value for the border if border_type is BORDER_CONSTANT
- static fromtext(txt)
Create an AugRandomCrop from a text description, e.g.:
a = AugRandomCrop('size=(10, 20) border_type="constant" border_value=0')
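To tie the wrappers together, a sketch of building a small augmentation pipeline; it assumes, as in the ECVL API, that augmentation objects expose an Apply method that modifies the image in place:
import pyecvl.ecvl as ecvl

img = ecvl.ImRead("train_0.png")  # placeholder path

# build the pipeline from augmentation objects...
train_augs = ecvl.SequentialAugmentationContainer([
    ecvl.AugResizeDim([224, 224]),
    ecvl.AugRotate([-10, 10]),
    ecvl.AugFlip(0.5),
    ecvl.AugToFloat32(255),
])

# ...or from an equivalent text description
same_augs = ecvl.AugmentationFactory.create('''\
SequentialAugmentationContainer
    AugResizeDim dims=(224,224)
    AugRotate angle=[-10,10]
    AugFlip p=0.5
    AugToFloat32 divisor=255
end
''')

train_augs.Apply(img)  # assumed in-place application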
Image I/O
- class pyecvl.ecvl.ImReadMode(self: pyecvl._core.ecvl.ImReadMode, arg0: int)
Enum class representing the possible image read modes.
- ANYCOLOR = ImReadMode.ANYCOLOR
- COLOR = ImReadMode.COLOR
- GRAYSCALE = ImReadMode.GRAYSCALE
- UNCHANGED = ImReadMode.UNCHANGED
- pyecvl.ecvl.ImRead(filename, flags=None)
Load an image from a file.
- Parameters
filename – name of the input file
flags – an ImReadMode indicating how to read the image
- Returns
an Image object
- pyecvl.ecvl.ImWrite(filename, src)
Save an image to a file.
The image format is chosen based on the filename extension.
- Parameters
filename – name of the output file
src – Image to be saved
- Returns
None
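A minimal sketch of reading and writing an image (file names are placeholders):
import pyecvl.ecvl as ecvl

# force grayscale reading regardless of the file's color space
img = ecvl.ImRead("input.jpg", ecvl.ImReadMode.GRAYSCALE)

# the output format is chosen from the extension
ecvl.ImWrite("output.png", img)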
- class pyecvl.ecvl.OpenSlideImage(filename)
Openslide image.
- Parameters
filename – path to the image file
- Close()
Close the file handle.
- Returns
None
- GetBestLevelForDownsample(downsample)
Get the best level to use for displaying the given downsample.
- Parameters
downsample – the desired downsample factor
- Returns
image level, or -1 if an error occurred.
- GetLevelCount()
Get the number of levels in the image.
- Returns
number of levels in the image
- GetLevelDownsamples()
Get the downsampling factor (or -1, if an error occurred) for each level.
- Returns
list of floats representing the downsampling factor for each level
- GetLevelsDimensions()
Get the width and height for each level of a whole-slide image.
- Returns
a list of pairs representing the width and height of each level
- GetProperties(dst)
Load metadata properties into an ECVL Image.
- Parameters
dst – ECVL Image
- Returns
None
- ReadRegion(level, dims)
Load a region of a whole-slide image.
Supported formats are those supported by the OpenSlide library. If the region cannot be read for any reason, the function creates an empty Image and returns False.
- Parameters
level – image level to read
dims – [x, y, w, h] list, where: x and y are the top left x and y coordinates in the level 0 reference frame; w and h are the width and height of the region
- Returns
ECVL Image (RGB, with a “cxy” layout).
- pyecvl.ecvl.NiftiRead(filename)
Load a NIfTI image from a file.
- Parameters
filename – image file name
- Returns
an Image object
- pyecvl.ecvl.NiftiWrite(filename, src)
Save an image to a file in the NIfTI-1 format.
- Parameters
filename – image file name
src – image to be saved
- Returns
None
- pyecvl.ecvl.DicomRead(filename)
Load a DICOM image from a file.
- Parameters
filename – image file name
- Returns
an Image object
- pyecvl.ecvl.DicomWrite(filename, src)
Save an image to a file in the DICOM format.
- Parameters
filename – image file name
src – image to be saved
- Returns
None
EDDL support
- class pyecvl.ecvl.DatasetAugmentations(augs)
Represents the augmentations which will be applied to each split.
- Parameters
augs – augmentations to be applied to each split.
- Apply(st, img, gt=None)
Apply augmentations for the specified split to an image.
- Parameters
st – a SplitType specifying which set of augmentations should be applied
img – image to which augmentations should be applied
gt – ground truth image to which augmentations should be applied
- Returns
None
- class pyecvl.ecvl.DLDataset(filename, batch_size, augs, ctype=ColorType.RGB, ctype_gt=ColorType.GRAY, num_workers=1, queue_ratio_size=1, drop_last=None, verify=False)
DeepHealth deep learning dataset.
- Variables
n_channels_ – number of image channels
n_channels_gt_ – number of ground truth image channels
resize_dims_ – dimensions [H, W] to which images must be resized
current_batch_ – number of batches already loaded for each split
ctype_ – ColorType of the images
ctype_gt_ – ColorType of the ground truth images
augs_ – augmentations to be applied to the images (and ground truth, if existing) for each split
tensors_shape_ – shape of the sample and label tensor
- Parameters
filename – path to the dataset file
batch_size – size of each dataset mini batch
augs – a DatasetAugmentations object specifying the training, validation and test augmentations to be applied to the dataset images (and ground truth if existing) for each split. Set to None if no augmentation is required
ctype – ColorType of the dataset images
ctype_gt – ColorType of the dataset ground truth images
num_workers – number of parallel threads spawned
queue_ratio_size – the producers-consumer queue will have a maximum size equal to batch_size x queue_ratio_size x num_workers
drop_last – For each split, whether to drop the last samples that don’t fit the batch size (dictionary mapping split types to booleans)
verify – if True, verify image existence
- GetBatch()
Pop batch_size samples from the queue and copy them into tensors.
- Returns
a tuple of three elements: a vector of samples; a tensor containing the image; a tensor containing the label.
- GetNumBatches(split=-1)
Get the number of batches in the specified split.
By default, return the number of batches in the current split.
- Parameters
split – index, name or SplitType of the split from which to get the number of batches
- GetQueueSize()
Get the current size of the producers-consumer queue.
- Returns
size of the queue
- LoadBatch(images, labels=None)
Load a batch into the images and labels tensors.
- Parameters
images – a Tensor to store the batch of images
labels – a Tensor to store the batch of labels
- ProduceImageLabel(augs, elem)
Load a sample and its label and push them to the producers-consumer queue.
- Parameters
augs – DatasetAugmentations to apply to the sample image
elem – Sample to load and push
- ResetAllBatches(shuffle=False)
Reset the batch counter of each split.
- Parameters
shuffle – whether to shuffle each split’s sample indices
- Returns
None
- ResetBatch(split=-1, shuffle=False)
Reset the batch counter and optionally shuffle the split’s sample indices.
If a negative value is provided, the current split is reset (default).
- Parameters
split – index, name or SplitType of the split
shuffle – whether to shuffle the split’s sample indices
- Returns
None
- SetAugmentations(da)
Set the dataset augmentations.
- Parameters
da – DatasetAugmentations to set
- SetBatchSize(bs)
Set the batch size.
Note that this does not affect the EDDL network’s batch size.
- Parameters
bs – value of the batch size
- SetNumChannels(n_channels, n_channels_gt=1)
Change the number of channels of the image produced by ECVL and update the internal EDDL tensors shape accordingly.
Useful for custom data loading.
- Parameters
n_channels – number of channels of input image
n_channels_gt – number of channels of ground truth
- static SetSplitSeed(seed)
Set a fixed seed for the randomly generated values.
Useful to reproduce experiments with same shuffling during training.
- Parameters
seed – seed for the random engine
- SetWorkers(num_workers)
Set the number of workers.
- Parameters
num_workers – number of worker threads to spawn
- Start(split_index=-1)
Spawn num_workers threads.
- Parameters
split_index – index of the split for GetBatch (default = current)
- Stop()
Join all threads.
- ThreadFunc(thread_index)
Called when a thread is spawned.
ProduceImageLabel is called for each sample handled by the thread.
- Parameters
thread_index – index of the thread
- ToTensorPlane(label)
Convert the sample labels into a one-hot encoded tensor.
- Parameters
label – list of sample labels
- sleep_for(delta)
Block the execution of the current thread for the specified duration.
- Parameters
delta – a datetime.timedelta representing the sleep duration. If a different type is provided, conversion to a timedelta with seconds = delta will be attempted.
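Putting the pieces together, a sketch of a typical loading loop (the dataset file, batch size and augmentations are placeholders):
import pyecvl.ecvl as ecvl

# one augmentation container per split (training, validation, test)
augs = ecvl.DatasetAugmentations([
    ecvl.SequentialAugmentationContainer([ecvl.AugToFloat32(255)]),
    ecvl.SequentialAugmentationContainer([ecvl.AugToFloat32(255)]),
    ecvl.SequentialAugmentationContainer([ecvl.AugToFloat32(255)]),
])
d = ecvl.DLDataset("dataset.yml", 8, augs, num_workers=2)

d.SetSplit(ecvl.SplitType.training)
d.ResetBatch(shuffle=True)
d.Start()
for _ in range(d.GetNumBatches()):
    samples, x, y = d.GetBatch()  # x and y are EDDL tensors
    # ... feed x and y to an EDDL network here ...
d.Stop()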
- class pyecvl.ecvl.ProducersConsumerQueue(*args, **kwargs)
Manages the producers-consumer queue of samples.
Overloaded function.
__init__(self: pyecvl._core.ecvl.ProducersConsumerQueue) -> None
__init__(self: pyecvl._core.ecvl.ProducersConsumerQueue, mxsz: int) -> None
__init__(self: pyecvl._core.ecvl.ProducersConsumerQueue, mxsz: int, thresh: int) -> None
- Clear()
Remove all elements from the queue.
- FreeLockedOnPush()
Free threads locked on a push operation.
To be used in Stop when the data loading process needs to be stopped before all the elements (batches) of the queue have been consumed.
- IsEmpty()
Check whether the queue is empty or not.
- Returns
True if the queue is empty, False otherwise
- IsFull()
Check whether the queue is full or not.
- Returns
True if the queue is full, False otherwise
- Length()
Return the current size of the queue.
- Returns
queue size
- Pop()
Pop a sample.
Lock the queue’s mutex, wait until the queue is not empty and pop a sample, image, label tuple from the queue.
- Returns
sample (as a Tensor), image (as a Tensor), label tuple
- Push(sample, image, label)
Push a sample.
Lock the queue’s mutex, wait until the queue is not full and push a sample, image, label tuple into the queue.
- Parameters
sample – sample to push
image – image (as a Tensor) to push
label – label (as a Tensor) to push
- SetSize(max_size, thresh=-1)
Set the maximum size of the queue.
- Parameters
max_size – maximum queue size
thresh – threshold from which to restart producing samples. If not specified, it’s set to half the maximum size.
- class pyecvl.ecvl.ThreadCounters(*args)
Manages the thread counters.
Each thread manages its own indices.
- Variables
counter – index of the sample currently used by the thread
min – smallest sample index managed by the thread
max – largest sample index managed by the thread
Can be called with two or three arguments. In the former case, it sets min_ and max_; in the latter, it sets counter_, min_ and max_.
- Reset()
Reset the thread counter to its minimum value.
- pyecvl.ecvl.ImageToTensor(img, t=None, offset=None)
Convert an ECVL Image to an EDDL Tensor.
The input image must have 3 dimensions “xy[czo]” (in any order). The output tensor’s dimensions will be C x H x W, where:
C = channels
H = height
W = width
If t and offset are not specified, a new tensor is created with the above shape. Otherwise, the specified image is inserted into the existing t tensor at the specified offset. This allows inserting more than one image into a tensor, specifying how many images are already stored in it.
- Parameters
img – input image
t – output tensor
offset – how many images are already stored in the tensor
- Returns
output tensor
- pyecvl.ecvl.TensorToImage(t)
Convert an EDDL Tensor into an ECVL Image.
Tensor dimensions must be C x H x W or N x C x H x W, where:
N = batch size
C = channels
H = height
W = width
The output image will be “xyo” with DataType.float32 and ColorType.none.
- Parameters
t – input tensor.
- Returns
output image
- pyecvl.ecvl.MakeGrid(t, cols=8, normalize=False)
Generate a grid of images from an EDDL tensor.
- Parameters
t – B x C x H x W Tensor
cols – number of images per row
normalize – if True, convert the image to the [0, 1] range
- Returns
image grid as an Image
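For instance, a sketch of moving data between ECVL images and EDDL tensors ("sample.png" is a placeholder):
import pyecvl.ecvl as ecvl

img = ecvl.ImRead("sample.png")
t = ecvl.ImageToTensor(img)   # EDDL Tensor with shape C x H x W

# ... use t with an EDDL network ...

back = ecvl.TensorToImage(t)  # "xyo" Image, DataType.float32, ColorType.none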