models
dummy
- class flatiron.torch.models.dummy.DummyConfig(**data)[source]
Bases: BaseModel
- input_channels: int
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- output_channels: int
- class flatiron.torch.models.dummy.DummyModel(input_channels, output_channels)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
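A minimal sketch of the calling pattern the note describes. The channel counts and the 4D input shape are assumptions for illustration; DummyModel's actual computation is not documented here.

import torch
from flatiron.torch.models.dummy import DummyModel

model = DummyModel(input_channels=3, output_channels=1)
x = torch.randn(1, 3, 32, 32)  # assumed NCHW input shape

# Call the Module instance, not model.forward(x), so that
# registered hooks are run as the note above explains.
y = model(x)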
- class flatiron.torch.models.dummy.DummyPipeline(config)[source]
Bases: PipelineBase
unet
- class flatiron.torch.models.unet.AtttentionGate2DBlock(in_channels, filters=16, dtype=torch.float16)[source]
Bases: Module
- forward(skip_connection, query)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
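A hedged usage sketch based only on the signatures above: the tensor shapes, the assumption that both inputs carry in_channels channels, and the float32 override of the float16 default are all illustrative, not documented behavior.

import torch
from flatiron.torch.models.unet import AtttentionGate2DBlock

gate = AtttentionGate2DBlock(in_channels=64, filters=16, dtype=torch.float32)
skip = torch.randn(1, 64, 32, 32)   # encoder skip connection (assumed shape)
query = torch.randn(1, 64, 32, 32)  # decoder query signal (assumed shape)
gated = gate(skip, query)           # dispatches to forward(skip_connection, query)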
- class flatiron.torch.models.unet.Conv2DBlock(in_channels, filters=16, dtype=torch.float16)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
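A brief sketch, assuming the block consumes an NCHW tensor with in_channels channels and emits filters channels; the signature alone does not confirm the output shape.

import torch
from flatiron.torch.models.unet import Conv2DBlock

block = Conv2DBlock(in_channels=3, filters=16, dtype=torch.float32)
x = torch.randn(1, 3, 64, 64)  # assumed NCHW input
y = block(x)                   # assumed to have 16 channels (filters)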
- class flatiron.torch.models.unet.UNet(in_channels=3, out_channels=1, attention=False, dtype=torch.float16)[source]
Bases: Module
- forward(x)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
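A usage sketch based only on the constructor signature. The 3-in, 1-out channel defaults suggest image-to-mask segmentation; the input resolution and the expectation that spatial size is preserved are assumptions.

import torch
from flatiron.torch.models.unet import UNet

net = UNet(in_channels=3, out_channels=1, attention=True, dtype=torch.float32)
x = torch.randn(1, 3, 256, 256)  # assumed input resolution
mask = net(x)                    # expected (1, 1, 256, 256) if spatial size is preserved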
- class flatiron.torch.models.unet.UNetConfig(**data)[source]
Bases: BaseModel
Configuration for the UNet model.
- input_width (int): Input width.
- input_height (int): Input height.
- input_channels (int): Input channels.
- classes (int, optional): Number of output classes. Default: 1.
- filters (int, optional): Number of filters for the initial conv 2d block. Default: 16.
- layers (int, optional): Total number of layers. Default: 9.
- activation (str, optional): Activation function to be used. Default: relu.
- batch_norm (bool, optional): Use batch normalization. Default: True.
- output_activation (str, optional): Output activation function. Default: sigmoid.
- kernel_initializer (str, optional): Kernel initializer. Default: he_normal.
- attention_gates (bool, optional): Use attention gates. Default: False.
- attention_activation_1 (str, optional): First attention activation. Default: 'relu'.
- attention_activation_2 (str, optional): Second attention activation. Default: 'sigmoid'.
- attention_kernel_size (int, optional): Attention kernel size. Default: 1.
- attention_strides (int, optional): Attention strides. Default: 1.
- attention_padding (str, optional): Attention padding. Default: 'same'.
- attention_kernel_initializer (str, optional): Attention kernel initializer. Default: 'he_normal'.
- activation: str
- attention_activation_1: str
- attention_activation_2: str
- attention_gates: bool
- attention_kernel_initializer: str
- attention_kernel_size: Annotated[int]
- attention_padding: Annotated[str]
- attention_strides: Annotated[int]
- batch_norm: bool
- classes: Annotated[int]
- data_format: str
- dtype: str
- filters: Annotated[int]
- input_channels: Annotated[int]
- input_height: Annotated[int]
- input_width: Annotated[int]
- kernel_initializer: str
- layers: Annotated[int]
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- output_activation: str
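A construction sketch for the config. The three input_* fields have no documented defaults and are treated as required here; the remaining fields fall back to the defaults listed above. The data_format and dtype fields are declared but undocumented, so they are assumed to have defaults.

from flatiron.torch.models.unet import UNetConfig

config = UNetConfig(
    input_width=256,       # required (assumed)
    input_height=256,      # required (assumed)
    input_channels=3,      # required (assumed)
    classes=1,             # optional; Default: 1
    filters=16,            # optional; Default: 16
    attention_gates=True,  # optional; Default: False
)
print(config.activation)   # 'relu' per the documented default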
- class flatiron.torch.models.unet.UNetPipeline(config)[source]
Bases: PipelineBase