models

dummy

class flatiron.tf.models.dummy.DummyConfig(**data)[source]

Bases: BaseModel

activation: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model; should be a dictionary conforming to pydantic's ConfigDict.

shape: list[int]
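
A minimal construction sketch; both fields are shown above without defaults, so both are passed (values are illustrative):

    from flatiron.tf.models.dummy import DummyConfig

    # Validate a dummy model config: shape is list[int], activation is str.
    config = DummyConfig(shape=[32], activation='relu')
    print(config.shape, config.activation)
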
class flatiron.tf.models.dummy.DummyPipeline(config)[source]

Bases: PipelineBase

model_config()[source]

Subclasses of PipelineBase will need to define a config class for models created in the build method.

Returns:

Pydantic BaseModel config class.

Return type:

BaseModel

model_func()[source]

Subclasses of PipelineBase need to define a function that builds and returns a machine learning model.

Returns:

Machine learning model.

Return type:

object
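
These two methods are the whole subclass contract. A minimal sketch, assuming PipelineBase lives at flatiron.core.pipeline (the import path is not shown on this page) and that model_func returns the builder function itself, as the prose above suggests; check PipelineBase for the exact contract:

    from flatiron.core.pipeline import PipelineBase  # assumed import path
    from flatiron.tf.models.dummy import DummyConfig, get_dummy_model

    class MyDummyPipeline(PipelineBase):
        def model_config(self):
            # Pydantic BaseModel class used to validate the model config.
            return DummyConfig

        def model_func(self):
            # Function that builds and returns the machine learning model.
            return get_dummy_model
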

flatiron.tf.models.dummy.get_dummy_model(shape, activation='relu')[source]
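
No docstring is attached here; per DummyConfig above, shape is a list[int] and activation a string. A hedged usage sketch with illustrative values:

    from flatiron.tf.models.dummy import get_dummy_model

    # Build a toy model with a 1D input of width 32.
    model = get_dummy_model(shape=[32], activation='relu')
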

unet

class flatiron.tf.models.unet.UNetConfig(**data)[source]

Bases: BaseModel

Configuration for UNet model.

input_width

Input width.

Type:

int

input_height

Input height.

Type:

int

input_channels

Input channels.

Type:

int

classes

Number of output classes. Default: 1.

Type:

int, optional

filters

Number of filters for initial conv 2d block. Default: 16.

Type:

int, optional

layers

Total number of layers. Default: 9.

Type:

int, optional

activation

Activation function to be used. Default: relu.

Type:

str, optional

batch_norm

Use batch normalization. Default: True.

Type:

bool, optional

output_activation

Output activation function. Default: sigmoid.

Type:

str, optional

kernel_initializer

Kernel initializer. Default: he_normal.

Type:

str, optional

attention_gates

Use attention gates. Default: False.

Type:

bool, optional

attention_activation_1

First activation. Default: ‘relu’

Type:

str, optional

attention_activation_2

Second activation. Default: ‘sigmoid’

Type:

str, optional

attention_kernel_size

Kernel size. Default: 1

Type:

int, optional

attention_strides

Strides. Default: 1

Type:

int, optional

attention_padding

Padding. Default: ‘same’

Type:

str, optional

attention_kernel_initializer

Kernel initializer. Default: ‘he_normal’

Type:

str, optional

activation: str
attention_activation_1: str
attention_activation_2: str
attention_gates: bool
attention_kernel_initializer: str
attention_kernel_size: Annotated[int]
attention_padding: Annotated[str]
attention_strides: Annotated[int]
batch_norm: bool
classes: Annotated[int]
data_format: str
dtype: str
filters: Annotated[int]
input_channels: Annotated[int]
input_height: Annotated[int]
input_width: Annotated[int]
kernel_initializer: str
layers: Annotated[int]
model_config: ClassVar[ConfigDict] = {}

Configuration for the model; should be a dictionary conforming to pydantic's ConfigDict.

output_activation: str
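
A construction sketch, assuming the three input_* fields are the only required ones and the rest carry the defaults documented above:

    from flatiron.tf.models.unet import UNetConfig

    config = UNetConfig(
        input_width=128,   # must be even (see get_unet_model below)
        input_height=128,
        input_channels=3,
    )
    print(config.model_dump())  # pydantic v2 serialization
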
class flatiron.tf.models.unet.UNetPipeline(config)[source]

Bases: PipelineBase

model_config()[source]

Subclasses of PipelineBase will need to define a config class for models created in the build method.

Returns:

Pydantic BaseModel config class.

Return type:

BaseModel

model_func()[source]

Subclasses of PipelineBase need to define a function that builds and returns a machine learning model.

Returns:

Machine learning model.

Return type:

object

flatiron.tf.models.unet.attention_gate_2d(query, skip_connection, activation_1='relu', activation_2='sigmoid', kernel_size=1, strides=1, padding='same', kernel_initializer='he_normal', name='attention-gate', dtype='float16', data_format='channels_last')[source]

Attention gate for 2D inputs. See: https://arxiv.org/abs/1804.03999

Parameters:
  • query (KerasTensor) – 2D Tensor of query.

  • skip_connection (KerasTensor) – 2D Tensor of features.

  • activation_1 (str, optional) – First activation. Default: ‘relu’

  • activation_2 (str, optional) – Second activation. Default: ‘sigmoid’

  • kernel_size (int, optional) – Kernel size. Default: 1

  • strides (int, optional) – Strides. Default: 1

  • padding (str, optional) – Padding. Default: ‘same’

  • kernel_initializer (str, optional) – Kernel initializer. Default: ‘he_normal’

  • name (str, optional) – Layer name. Default: attention-gate

  • dtype (str, optional) – Model dtype. Default: float16.

  • data_format (str, optional) – Model data format. Default: channels_last.

Returns:

2D Attention Gate.

Return type:

KerasTensor
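
A usage sketch with symbolic Keras inputs, assuming query and skip_connection share spatial dimensions (the defaults use 1×1 convolutions with stride 1, so same-size inputs line up); float32 is passed explicitly to sidestep mixed-precision setup:

    import tensorflow as tf
    from flatiron.tf.models.unet import attention_gate_2d

    # Two same-shape 2D feature maps, channels_last.
    query = tf.keras.Input(shape=(64, 64, 16))
    skip = tf.keras.Input(shape=(64, 64, 16))

    gated = attention_gate_2d(query, skip, dtype='float32')
    model = tf.keras.Model(inputs=[query, skip], outputs=gated)
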

flatiron.tf.models.unet.conv_2d_block(input_, filters=16, activation='relu', batch_norm=True, kernel_initializer='he_normal', name='conv-2d-block', dtype='float16', data_format='channels_last')[source]

2D Convolution block without padding.

\begin{align}
architecture & \rightarrow Conv2D + ReLU + BatchNorm + Conv2D + ReLU + BatchNorm \\
kernel & \rightarrow (3, 3) \\
strides & \rightarrow (1, 1) \\
padding & \rightarrow same
\end{align}
(Figure: conv_2d_block architecture diagram)
Parameters:
  • input_ (KerasTensor) – Input tensor.

  • filters (int, optional) – Number of filters. Default: 16.

  • activation (str, optional) – Activation function. Default: relu.

  • batch_norm (bool, optional) – Use batch normalization. Default: True.

  • kernel_initializer (str, optional) – Kernel initializer. Default: he_normal.

  • name (str, optional) – Layer name. Default: conv-2d-block

  • dtype (str, optional) – Model dtype. Default: float16.

  • data_format (str, optional) – Model data format. Default: channels_last.

Returns:

Conv2D Block

Return type:

KerasTensor
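
A usage sketch mirroring the signature above (float32 passed to avoid mixed-precision configuration):

    import tensorflow as tf
    from flatiron.tf.models.unet import conv_2d_block

    x = tf.keras.Input(shape=(64, 64, 3))  # channels_last input
    y = conv_2d_block(x, filters=16, dtype='float32')
    tf.keras.Model(inputs=x, outputs=y).summary()
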

flatiron.tf.models.unet.get_unet_model(input_width, input_height, input_channels, classes=1, filters=32, layers=9, activation='leaky_relu', batch_norm=True, output_activation='sigmoid', kernel_initializer='he_normal', attention_gates=False, attention_activation_1='relu', attention_activation_2='sigmoid', attention_kernel_size=1, attention_strides=1, attention_padding='same', attention_kernel_initializer='he_normal', dtype='float16', data_format='channels_last')[source]

UNet model for 2D semantic segmentation.

See:
  • https://arxiv.org/abs/1505.04597
  • https://arxiv.org/pdf/1411.4280.pdf
  • https://arxiv.org/abs/1804.03999

Parameters:
  • input_width (int) – Input width.

  • input_height (int) – Input height.

  • input_channels (int) – Input channels.

  • classes (int, optional) – Number of output classes. Default: 1.

  • filters (int, optional) – Number of filters for initial conv 2d block. Default: 32.

  • layers (int, optional) – Total number of layers. Default: 9.

  • activation (str, optional) – Activation function to be used. Default: leaky_relu.

  • batch_norm (bool, optional) – Use batch normalization. Default: True.

  • output_activation (str, optional) – Output activation function. Default: sigmoid.

  • kernel_initializer (str, optional) – Kernel initializer. Default: he_normal.

  • attention_gates (bool, optional) – Use attention gates. Default: False.

  • attention_activation_1 (str, optional) – First activation. Default: ‘relu’

  • attention_activation_2 (str, optional) – Second activation. Default: ‘sigmoid’

  • attention_kernel_size (int, optional) – Kernel size. Default: 1

  • attention_strides (int, optional) – Strides. Default: 1

  • attention_padding (str, optional) – Padding. Default: ‘same’

  • attention_kernel_initializer (str, optional) – Kernel initializer. Default: ‘he_normal’

  • dtype (str, optional) – Model dtype. Default: float16.

  • data_format (str, optional) – Model data format. Default: channels_last.

Raises:
  • EnforceError – If input_width is not even.

  • EnforceError – If input_height is not even.

  • EnforceError – If layers is not an odd integer greater than 2.

  • EnforceError – If input_width and layers are not compatible.

Returns:

UNet model.

Return type:

tf.keras.models.Model
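
A build sketch. 128 is even and, assuming compatibility means the width survives one halving per encoder block ((layers - 1) / 2 = 4 halvings for layers=9), 128 / 2^4 = 8 remains integral:

    from flatiron.tf.models.unet import get_unet_model

    model = get_unet_model(
        input_width=128,
        input_height=128,
        input_channels=3,
        classes=1,
        layers=9,
        dtype='float32',  # float16 default may require mixed-precision setup
    )
    model.summary()
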

flatiron.tf.models.unet.unet_width_and_layers_are_valid(width, layers)[source]

Determines whether the given UNet width and layer count are compatible.

Parameters:
  • width (int) – UNet input width.

  • layers (int) – Number of UNet layers.

Returns:

True if width and layers are compatible.

Return type:

bool
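
A check sketch; the expected outputs assume the halving rule described under get_unet_model above:

    from flatiron.tf.models.unet import unet_width_and_layers_are_valid

    print(unet_width_and_layers_are_valid(128, 9))  # True under the assumed rule
    print(unet_width_and_layers_are_valid(100, 9))  # False: 100 cannot be halved 4 times cleanly
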