MIR.models.HyperTransMorph
Hyper-TransMorph model.

Reference: Chen, J., Du, Y., He, Y., Segars, W. P., Li, Y., & Frey, E. C. (2021). TransMorph: Transformer for unsupervised medical image registration. arXiv preprint arXiv:2111.10480.

Swin-Transformer code retrieved from: https://github.com/SwinTransformer/Swin-Transformer-Semantic-Segmentation

Original Swin Transformer paper: Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., … & Guo, B. (2021). Swin Transformer: Hierarchical vision transformer using shifted windows. arXiv preprint arXiv:2103.14030.

Modified and tested by Junyu Chen (jchen245@jhmi.edu), Johns Hopkins University.
Functions
Classes
A basic Swin Transformer layer for one stage.
Hyper-conditioned convolution with weights predicted from parameters. |
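A hyper-conditioned convolution replaces a fixed kernel with one generated on the fly from a hyperparameter embedding, so a single network can serve a continuum of hyperparameter settings. The sketch below illustrates the idea under stated assumptions; `HyperConv2d` and its argument names are hypothetical, not this module's actual API:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperConv2d(nn.Module):
    """Hypothetical sketch: a conv layer whose kernel and bias are
    predicted from a hyperparameter embedding `h` rather than learned
    as fixed parameters."""

    def __init__(self, embed_dim, in_ch, out_ch, k=3):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, k
        # small "hypernetwork" heads that emit the conv weights
        self.weight_gen = nn.Linear(embed_dim, out_ch * in_ch * k * k)
        self.bias_gen = nn.Linear(embed_dim, out_ch)

    def forward(self, x, h):
        # h: (embed_dim,) embedding of the hyperparameter(s)
        w = self.weight_gen(h).view(self.out_ch, self.in_ch, self.k, self.k)
        b = self.bias_gen(h)
        return F.conv2d(x, w, b, padding=self.k // 2)
```

Because the weights are a function of `h`, gradients flow through the generator heads, and changing `h` at test time changes the effective convolution without retraining.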
MLP that embeds hyperparameters for hypernetwork conditioning. |
Linear layer with weights predicted from hyperparameters. |
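The two pieces above typically work in tandem: an embedding MLP lifts a raw hyperparameter (e.g. a regularization weight sampled during training) into a conditioning vector, and hyper-conditioned layers consume that vector to predict their weights. A minimal sketch, with hypothetical names `HyperEmbed` and `HyperLinear` (not this module's actual classes):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperEmbed(nn.Module):
    """Hypothetical sketch: MLP that embeds a scalar hyperparameter
    into a conditioning vector for the hypernetwork heads."""

    def __init__(self, embed_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, lam):
        # lam: scalar tensor, e.g. a sampled regularization weight
        return self.net(lam.view(1, 1)).squeeze(0)  # (embed_dim,)

class HyperLinear(nn.Module):
    """Hypothetical sketch: linear layer whose weight and bias are
    predicted from the hyperparameter embedding."""

    def __init__(self, embed_dim, in_f, out_f):
        super().__init__()
        self.in_f, self.out_f = in_f, out_f
        self.w_gen = nn.Linear(embed_dim, out_f * in_f)
        self.b_gen = nn.Linear(embed_dim, out_f)

    def forward(self, x, h):
        w = self.w_gen(h).view(self.out_f, self.in_f)
        return F.linear(x, w, self.b_gen(h))
```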
TransMorph TVF with spatially-varying regularization.
    :param config: Configuration object containing model parameters
    :param time_steps: Number of time steps for progressive registration
    :param SVF: Whether to use SVF (Stationary Velocity Field) integration
    :param SVF_steps: Number of steps for SVF integration
    :param composition: Type of composition for flow integration ('composition' or 'addition')
    :param swin_type: Type of Swin Transformer to use ('swin' or 'dswin')
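When the SVF option is enabled, the displacement field is obtained by integrating a stationary velocity field, commonly via scaling and squaring: the velocity is divided by 2^steps and the resulting small displacement is composed with itself `steps` times. A self-contained sketch under stated assumptions (the helper names `warp` and `integrate_svf` are hypothetical, and a 2-D field is used for brevity; the 'addition' mode would simply sum flows instead of composing them):

```python
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Warp `img` (B, C, H, W) by a displacement field `flow`
    (B, 2, H, W), channels ordered (x, y), via bilinear grid_sample."""
    B, _, H, W = flow.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32),
                            indexing="ij")
    new = torch.stack([xs, ys]).unsqueeze(0) + flow   # absolute positions
    gx = 2.0 * new[:, 0] / max(W - 1, 1) - 1.0        # normalize to [-1, 1]
    gy = 2.0 * new[:, 1] / max(H - 1, 1) - 1.0
    grid = torch.stack([gx, gy], dim=-1)              # (B, H, W, 2)
    return F.grid_sample(img, grid, align_corners=True,
                         padding_mode="border")

def integrate_svf(vel, steps=7):
    """Scaling and squaring: approximate the exponential of a
    stationary velocity field by repeated self-composition."""
    flow = vel / (2 ** steps)
    for _ in range(steps):
        flow = flow + warp(flow, flow)  # compose flow with itself
    return flow
```

Scaling and squaring keeps the deformation close to diffeomorphic because each composed step is a small displacement.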
Feed-forward MLP block used inside transformer blocks. |
Image to Patch Embedding.
    :param patch_size: Patch token size.
Patch Merging Layer. |
Rotary Position Embedding |
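Rotary position embedding encodes position by rotating each pair of feature dimensions by a position-dependent angle, so the dot product between rotated queries and keys depends only on their relative offset. A minimal NumPy sketch of the standard recipe (the function name `rope` is illustrative, not this module's API):

```python
import numpy as np

def rope(x, base=10000.0):
    """Rotate each feature pair (x[2i], x[2i+1]) at sequence position p
    by angle p / base**(2i/dim)."""
    seq, dim = x.shape
    freqs = 1.0 / base ** (np.arange(dim // 2) * 2.0 / dim)
    ang = np.outer(np.arange(seq), freqs)   # (seq, dim/2) rotation angles
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin      # 2-D rotation of each pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out
```

Because each pair undergoes a pure rotation, the embedding preserves vector norms, and position 0 (all angles zero) is left unchanged.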
Swin Transformer |
Swin Transformer Block. |
Window based multi-head self attention (W-MSA) module with relative position bias. |
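The relative position bias adds a learned scalar to each attention logit, looked up by the pairwise offset between the two tokens inside a window. The index computation below follows the standard Swin Transformer recipe (offsets shifted to be non-negative, then flattened row-major into a table of size (2·Wh−1)·(2·Ww−1)); it is a sketch, not a verbatim excerpt of this module:

```python
import numpy as np

def relative_position_index(wh, ww):
    """Map each pair of tokens in a (wh x ww) window to an index into a
    relative-position-bias table of size (2*wh - 1) * (2*ww - 1)."""
    coords = np.stack(np.meshgrid(np.arange(wh), np.arange(ww),
                                  indexing="ij"))
    coords = coords.reshape(2, -1)                 # (2, N), N = wh * ww
    rel = coords[:, :, None] - coords[:, None, :]  # pairwise (dy, dx) offsets
    rel = rel.transpose(1, 2, 0)                   # (N, N, 2)
    rel[..., 0] += wh - 1                          # shift offsets to >= 0
    rel[..., 1] += ww - 1
    rel[..., 0] *= 2 * ww - 1                      # row-major flattening
    return rel.sum(-1)                             # (N, N) table indices
```

During attention, `bias_table[relative_position_index(wh, ww)]` yields an (N, N) bias per head that is added to the query-key logits before the softmax.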