@Namespace(value="cv::dnn") @NoOffset @Properties(inherit=opencv_dnn.class) public class Layer extends Algorithm
Nested classes/interfaces inherited from class org.bytedeco.javacpp.Pointer: Pointer.CustomDeallocator, Pointer.Deallocator, Pointer.NativeDeallocator, Pointer.ReferenceCounter

| Constructor and Description |
|---|
| Layer() |
| Layer(LayerParams params): Initializes only #name, #type and #blobs fields. |
| Layer(long size): Native array allocator. |
| Layer(Pointer p): Pointer cast constructor. |
| Modifier and Type | Method and Description |
|---|---|
| void | applyHalideScheduler(BackendNode node, MatPointerVector inputs, MatVector outputs, int targetId): Automatic Halide scheduling based on layer hyper-parameters. |
| MatVector | blobs(): List of learned parameters; must be stored here so that they can be read via Net::getParam(). |
| Layer | blobs(MatVector setter) |
| void | finalize(GpuMatVector inputs, GpuMatVector outputs) |
| void | finalize(MatPointerVector input, MatVector output): Deprecated. Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead. |
| MatVector | finalize(MatVector inputs): Deprecated. Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead. |
| void | finalize(MatVector inputs, MatVector outputs): Computes and sets internal parameters according to inputs, outputs and blobs. |
| void | finalize(UMatVector inputs, UMatVector outputs) |
| void | forward_fallback(GpuMatVector inputs, GpuMatVector outputs, GpuMatVector internals) |
| void | forward_fallback(MatVector inputs, MatVector outputs, MatVector internals): Given the input blobs, computes the output blobs. |
| void | forward_fallback(UMatVector inputs, UMatVector outputs, UMatVector internals) |
| void | forward(GpuMatVector inputs, GpuMatVector outputs, GpuMatVector internals) |
| void | forward(MatPointerVector input, MatVector output, MatVector internals): Deprecated. Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead. |
| void | forward(MatVector inputs, MatVector outputs, MatVector internals): Given the input blobs, computes the output blobs. |
| void | forward(UMatVector inputs, UMatVector outputs, UMatVector internals) |
| long | getFLOPS(MatShapeVector inputs, MatShapeVector outputs) |
| boolean | getMemoryShapes(MatShapeVector inputs, int requiredOutputs, MatShapeVector outputs, MatShapeVector internals) |
| void | getScaleShift(Mat scale, Mat shift): Returns parameters of layers with channel-wise multiplication and addition. |
| int | inputNameToIndex(BytePointer inputName): Returns the index of an input blob in the input array. |
| int | inputNameToIndex(String inputName) |
| BytePointer | name(): Name of the layer instance; can be used for logging or other internal purposes. |
| Layer | name(BytePointer setter) |
| int | outputNameToIndex(BytePointer outputName): Returns the index of an output blob in the output array. |
| int | outputNameToIndex(String outputName) |
| Layer | position(long position) |
| int | preferableTarget(): Preferred target for layer forwarding. |
| Layer | preferableTarget(int setter) |
| void | run(MatVector inputs, MatVector outputs, MatVector internals): Deprecated. This method will be removed in a future release. |
| boolean | setActivation(ActivationLayer layer): Tries to attach the subsequent activation layer to this layer. |
| void | setParamsFrom(LayerParams params): Initializes only #name, #type and #blobs fields. |
| boolean | supportBackend(int backendId): Asks the layer whether it supports a specific backend for computations. |
| BackendNode | tryAttach(BackendNode node): Implements layer fusing. |
| boolean | tryFuse(Layer top): Tries to fuse the current layer with the next one. |
| BytePointer | type(): Type name that was used for creating the layer by the layer factory. |
| Layer | type(BytePointer setter) |
| void | unsetAttached(): "Detaches" all layers attached to this particular layer. |
Methods inherited from class Algorithm: clear, empty, getDefaultName, read, save, save, write, write, write

Methods inherited from class org.bytedeco.javacpp.Pointer: address, asBuffer, asByteBuffer, availablePhysicalBytes, calloc, capacity, capacity, close, deallocate, deallocate, deallocateReferences, deallocator, deallocator, equals, fill, formatBytes, free, hashCode, isNull, isNull, limit, limit, malloc, maxBytes, maxPhysicalBytes, memchr, memcmp, memcpy, memmove, memset, offsetof, parseBytes, physicalBytes, position, put, realloc, referenceCount, releaseReference, retainReference, setNull, sizeof, toString, totalBytes, totalPhysicalBytes, withDeallocator, zero

public Layer(Pointer p)
Pointer cast constructor. See also: Pointer.Pointer(Pointer)

public Layer(long size)
Native array allocator. See also: Pointer.position(long)

public Layer()

public Layer(@Const @ByRef LayerParams params)
Initializes only #name, #type and #blobs fields.
@ByRef public MatVector blobs()
@Deprecated public void finalize(@Const @ByRef MatPointerVector input, @ByRef MatVector output)
input - [in] vector of already allocated input blobs
output - [out] vector of already allocated output blobs
This method is called after the network has allocated all memory for input and output blobs and before inferencing.

public void finalize(@ByVal MatVector inputs, @ByVal MatVector outputs)
Computes and sets internal parameters according to inputs, outputs and blobs.
inputs - [in] vector of already allocated input blobs
outputs - [out] vector of already allocated output blobs
This method is called after the network has allocated all memory for input and output blobs and before inferencing.

public void finalize(@ByVal UMatVector inputs, @ByVal UMatVector outputs)
public void finalize(@ByVal GpuMatVector inputs, @ByVal GpuMatVector outputs)
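The following is a minimal sketch of how the documented blobs() and finalize(MatVector, MatVector) calls fit together. It assumes the bytedeco opencv presets package layout (org.bytedeco.opencv.*) and a Layer instance and pre-allocated blob vectors obtained elsewhere (for example from an already loaded Net); it is illustrative rather than the canonical workflow, since the network normally calls finalize() itself.

```java
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_core.MatVector;
import org.bytedeco.opencv.opencv_dnn.Layer;

public class LayerBlobsSketch {
    // Prints the shapes of a layer's learned parameters and re-runs finalize()
    // with blobs that are assumed to be already allocated by the caller.
    static void inspectAndFinalize(Layer layer, MatVector inputs, MatVector outputs) {
        MatVector params = layer.blobs(); // learned parameters (weights, biases, ...)
        for (long i = 0; i < params.size(); i++) {
            Mat blob = params.get(i);
            System.out.println(layer.name().getString() + " blob " + i
                    + ": dims=" + blob.dims() + ", total=" + blob.total());
        }
        // finalize() expects already allocated input/output blobs; the network
        // normally invokes it right before inferencing.
        layer.finalize(inputs, outputs);
    }
}
```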
@Deprecated public void forward(@ByRef MatPointerVector input, @ByRef MatVector output, @ByRef MatVector internals)
input - [in] the input blobs.
output - [out] allocated output blobs, which will store results of the computation.
internals - [out] allocated internal blobs.

public void forward(@ByVal MatVector inputs, @ByVal MatVector outputs, @ByVal MatVector internals)
Given the input blobs, computes the output blobs.
inputs - [in] the input blobs.
outputs - [out] allocated output blobs, which will store results of the computation.
internals - [out] allocated internal blobs.

public void forward(@ByVal UMatVector inputs, @ByVal UMatVector outputs, @ByVal UMatVector internals)

public void forward(@ByVal GpuMatVector inputs, @ByVal GpuMatVector outputs, @ByVal GpuMatVector internals)

public void forward_fallback(@ByVal MatVector inputs, @ByVal MatVector outputs, @ByVal MatVector internals)
Given the input blobs, computes the output blobs.
inputs - [in] the input blobs.
outputs - [out] allocated output blobs, which will store results of the computation.
internals - [out] allocated internal blobs.

public void forward_fallback(@ByVal UMatVector inputs, @ByVal UMatVector outputs, @ByVal UMatVector internals)

public void forward_fallback(@ByVal GpuMatVector inputs, @ByVal GpuMatVector outputs, @ByVal GpuMatVector internals)
@Deprecated @ByVal public MatVector finalize(@Const @ByRef MatVector inputs)
@Deprecated public void run(@Const @ByRef MatVector inputs, @ByRef MatVector outputs, @ByRef MatVector internals)
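The sketch below ties together the finalize() and forward() entries above: the caller supplies pre-allocated output and internal blobs (for example sized from getMemoryShapes(), which is not shown here), whereas the deprecated run() used to allocate blobs itself. This is a usage sketch under those assumptions, not a replacement for running the whole Net.

```java
import org.bytedeco.opencv.opencv_core.MatVector;
import org.bytedeco.opencv.opencv_dnn.Layer;

public class LayerForwardSketch {
    // Runs a single layer: finalize() sets internal parameters from the
    // (pre-allocated) input/output blobs, then forward() fills `outputs`.
    static void forwardOnce(Layer layer, MatVector inputs, MatVector outputs, MatVector internals) {
        layer.finalize(inputs, outputs);
        layer.forward(inputs, outputs, internals);
    }
}
```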
public int inputNameToIndex(@opencv_core.Str BytePointer inputName)
inputName - label of input blob
Each layer input and output can be labeled to easily identify them using the "%&lt;layer_name&gt;[.output_name]" notation.

public int inputNameToIndex(@opencv_core.Str String inputName)
public int outputNameToIndex(@opencv_core.Str BytePointer outputName)
See also: inputNameToIndex()

public int outputNameToIndex(@opencv_core.Str String outputName)
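A small sketch of the String overloads above; the labels "conv1" and "conv1.out" are hypothetical and only illustrate mapping blob labels to positions in the layer's input/output vectors.

```java
import org.bytedeco.opencv.opencv_dnn.Layer;

public class LayerIoIndexSketch {
    // Maps (hypothetical) blob labels to indices in the layer's I/O vectors.
    static void printIndices(Layer layer) {
        int in  = layer.inputNameToIndex("conv1");
        int out = layer.outputNameToIndex("conv1.out");
        System.out.println("input index=" + in + ", output index=" + out);
    }
}
```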
@Cast(value="bool") public boolean supportBackend(int backendId)
backendId - [in] computation backend identifier. See also: Backend

public void applyHalideScheduler(@opencv_core.Ptr BackendNode node, @Const @ByRef MatPointerVector inputs, @Const @ByRef MatVector outputs, int targetId)
Automatic Halide scheduling based on layer hyper-parameters.
node - [in] Backend node with Halide functions.
inputs - [in] Blobs that will be used in forward invocations.
outputs - [in] Blobs that will be used in forward invocations.
targetId - [in] Target identifier. See also: BackendNode, Target
Layers don't use their own Halide::Func members because layer fusing may have been applied; in that case the fused function should be scheduled.

@opencv_core.Ptr public BackendNode tryAttach(@opencv_core.Ptr BackendNode node)
Implements layer fusing.
node - [in] Backend node of bottom layer.
Relevant for graph-based backends. If the layer is attached successfully, returns a non-empty cv::Ptr to a node of the same backend. Fuse only over the last function.

@Cast(value="bool") public boolean setActivation(@opencv_core.Ptr ActivationLayer layer)
Tries to attach the subsequent activation layer to this layer.
layer - [in] The subsequent activation layer.
Returns true if the activation layer has been attached successfully.

@Cast(value="bool") public boolean tryFuse(@opencv_core.Ptr Layer top)
Tries to fuse the current layer with the next one.
top - [in] Next layer to be fused.
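A minimal sketch of the backend and fusion queries described above. It assumes the DNN_BACKEND_OPENCV constant from org.bytedeco.opencv.global.opencv_dnn and an ActivationLayer instance supplied by the caller; whether fusion actually happens depends entirely on the concrete layer implementation.

```java
import org.bytedeco.opencv.global.opencv_dnn;
import org.bytedeco.opencv.opencv_dnn.ActivationLayer;
import org.bytedeco.opencv.opencv_dnn.Layer;

public class LayerFusionSketch {
    // Asks a layer whether it can run on the plain OpenCV backend and whether
    // it accepts the following activation layer for fusion.
    static void probe(Layer layer, ActivationLayer nextActivation) {
        boolean ok = layer.supportBackend(opencv_dnn.DNN_BACKEND_OPENCV);
        System.out.println(layer.type().getString() + " supports DNN_BACKEND_OPENCV: " + ok);

        // setActivation() returns true only if the activation layer was attached.
        if (layer.setActivation(nextActivation)) {
            System.out.println("activation fused into " + layer.name().getString());
        }
    }
}
```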
public void getScaleShift(@ByRef Mat scale, @ByRef Mat shift)
Returns parameters of layers with channel-wise multiplication and addition.
scale - [out] Channel-wise multipliers. Total number of values should be equal to the number of channels.
shift - [out] Channel-wise offsets. Total number of values should be equal to the number of channels.
Some layers can fuse their transformations with further layers, for example convolution + batch normalization. In that case the base layer uses the weights of the layer that follows it, and the fused layer is skipped. By default, scale and shift are empty, which means the layer has no element-wise multiplications or additions.

public void unsetAttached()
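The sketch below queries the channel-wise scale and shift a layer exposes for fusion, using only the getScaleShift(Mat, Mat) signature documented above; empty output Mats indicate the layer has no element-wise multiplication or addition.

```java
import org.bytedeco.opencv.opencv_core.Mat;
import org.bytedeco.opencv.opencv_dnn.Layer;

public class ScaleShiftSketch {
    // Prints the channel-wise multipliers/offsets a layer offers for fusion.
    static void printScaleShift(Layer layer) {
        Mat scale = new Mat();
        Mat shift = new Mat();
        layer.getScaleShift(scale, shift);
        System.out.println(layer.name().getString()
                + ": scale=" + (scale.empty() ? "(none)" : scale.total() + " values")
                + ", shift=" + (shift.empty() ? "(none)" : shift.total() + " values"));
    }
}
```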
@Cast(value="bool") public boolean getMemoryShapes(@Const @ByRef MatShapeVector inputs, int requiredOutputs, @ByRef MatShapeVector outputs, @ByRef MatShapeVector internals)
@Cast(value="int64") public long getFLOPS(@Const @ByRef MatShapeVector inputs, @Const @ByRef MatShapeVector outputs)
@opencv_core.Str public BytePointer name()
public Layer name(BytePointer setter)
@opencv_core.Str public BytePointer type()
public Layer type(BytePointer setter)
public int preferableTarget()
public Layer preferableTarget(int setter)
public void setParamsFrom(@Const @ByRef LayerParams params)
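The final sketch exercises setParamsFrom(LayerParams) together with the field-style accessors listed above. The name/type setters on LayerParams follow the usual JavaCPP mapping of public C++ fields and are an assumption here, as are the example values "my_layer" and "Convolution".

```java
import org.bytedeco.javacpp.BytePointer;
import org.bytedeco.opencv.opencv_dnn.Layer;
import org.bytedeco.opencv.opencv_dnn.LayerParams;

public class LayerParamsSketch {
    // Copies identification fields from a LayerParams into a layer and reads
    // them back through the accessors documented above.
    static void relabel(Layer layer) {
        LayerParams params = new LayerParams();
        params.name(new BytePointer("my_layer"));    // hypothetical name
        params.type(new BytePointer("Convolution")); // hypothetical type
        layer.setParamsFrom(params); // initializes only name, type and blobs

        System.out.println("name=" + layer.name().getString()
                + ", type=" + layer.type().getString()
                + ", preferableTarget=" + layer.preferableTarget());
    }
}
```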