πŸ›°οΈ InSAR Phase Unwrapping - Pretrained Models

Pretrained weights for all four U-Net architectural variants from:

"When Less Is More: Simplicity Beats Complexity for Physics-Constrained InSAR Phase Unwrapping"
Prabhjot Singh, Manmeet Singh
Oral presentation at the ML4RS workshop @ ICLR 2026

📄 Paper (OpenReview) | 💻 Code (GitHub)


πŸ” Key Result

Vanilla U-Net outperforms the attention-based variants by 34% in R² with 2.5× faster inference: convolutional locality aligns better with the smooth, physics-constrained deformation fields than global attention does.


📦 Model Files

| File | Architecture | Params | R² | RMSE (cm) | Latency (ms) |
|---|---|---|---|---|---|
| vanilla_unet_model.pth | Vanilla U-Net ✅ | 7.76M | 0.834 | 1.009 | 2.92 |
| enhanced_unet_model.pth | Enhanced U-Net (SE blocks) | 8.29M | 0.786 | 1.149 | 6.35 |
| attention_unet_model.pth | Attention U-Net | 11.37M | 0.622 | 1.528 | 7.08 |
| hybrid_model.pth | Hybrid Multi-Scale (ASPP) | 17.21M | 0.588 | 1.595 | 7.13 |

✅ Vanilla U-Net is the recommended model - best performance, smallest size, fastest inference.


🚀 Usage

Step 1: Download model weights

# Install the huggingface_hub library (shell)
pip install huggingface_hub

# Download a specific model
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="Prabhjotschugh/InSAR-Phase-Unwrapping-Models", 
                filename="vanilla_unet_model.pth",
                local_dir="./models")

Step 2: Clone the code repo

git clone https://github.com/prabhjotschugh/When-Less-is-More-InSAR-Phase-Unwrapping.git
cd When-Less-is-More-InSAR-Phase-Unwrapping
pip install -r requirements.txt

Step 3: Load and run inference

import torch
import sys
sys.path.append('./train')  # path to training scripts
from train_vanilla_unet import VanillaInSAR_UNet

# Load checkpoint
checkpoint = torch.load('models/vanilla_unet_model.pth', map_location='cpu')

# Initialize model
model = VanillaInSAR_UNet(in_channels=6, out_channels=1, base_channels=32, dropout=0.0)
model.load_state_dict(checkpoint['model'])
model.eval()

# Load normalization stats
stats = checkpoint['stats']
print(f"Trained for {checkpoint['epoch']+1} epochs")
print(f"Best validation loss: {checkpoint['best_val_loss']:.5f}")
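The checkpoint's `stats` entry holds the normalization constants needed to map the network's output back to physical units. A minimal sketch of that denormalization step, assuming scalar mean/std stats (the actual key names and values in `stats` may differ - inspect the dict loaded above):

```python
import torch

def denormalize(pred: torch.Tensor, mean: float, std: float) -> torch.Tensor:
    """Undo target normalization to recover LOS displacement in meters."""
    return pred * std + mean

# Dummy normalized network output for one patch: (batch, channel, H, W)
pred = torch.full((1, 1, 128, 128), 0.5)
# mean/std here are hypothetical placeholder values, not the checkpoint's
disp_m = denormalize(pred, mean=0.0, std=0.02)
print(disp_m.shape)  # torch.Size([1, 1, 128, 128])
```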

🧠 Model Input Format

Each model takes a 6-channel 128×128 patch as input:

| Channel | Description |
|---|---|
| 0 | sin(wrapped phase) |
| 1 | cos(wrapped phase) |
| 2 | Interferometric coherence γ |
| 3 | East LOS unit vector eₑ |
| 4 | North LOS unit vector eₙ |
| 5 | Up LOS unit vector e_U |

Output: Single-channel LOS displacement map in meters (after denormalization).
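Assembling such a patch from raw products can be sketched as below. The arrays are synthetic placeholders (including the LOS component values); the repo's preprocessing scripts define the actual normalization.

```python
import numpy as np

h = w = 128
rng = np.random.default_rng(0)

wrapped = rng.uniform(-np.pi, np.pi, (h, w))  # wrapped phase, radians
coherence = rng.uniform(0.0, 1.0, (h, w))     # interferometric coherence γ
e_e = np.full((h, w), 0.62)                   # East LOS component (placeholder)
e_n = np.full((h, w), -0.11)                  # North LOS component (placeholder)
e_u = np.full((h, w), 0.78)                   # Up LOS component (placeholder)

# Stack in the channel order given in the table above
x = np.stack([np.sin(wrapped), np.cos(wrapped), coherence, e_e, e_n, e_u])
print(x.shape)  # (6, 128, 128)
```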

🌍 Training Details

  • Dataset: 350 LiCSAR interferograms (2020–2025), 20 frames, 6 continents
  • Patches: 39,724 patches of 128×128 pixels (651M pixels total)
  • Hardware: NVIDIA GH200 GPU (120GB VRAM)
  • Framework: PyTorch 2.0+
  • Loss: Huber + spatial gradient penalty (λ = 0.1)
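The loss above can be sketched as follows. This is an assumption-laden reconstruction - the exact Huber delta and gradient formulation live in the repo's training scripts:

```python
import torch
import torch.nn.functional as F

def unwrap_loss(pred: torch.Tensor, target: torch.Tensor,
                lam: float = 0.1) -> torch.Tensor:
    """Huber loss plus a spatial-gradient penalty (sketch, not the repo's exact code)."""
    huber = F.smooth_l1_loss(pred, target)
    # Penalize mismatched finite-difference gradients to favor smooth deformation
    dx = (pred[..., :, 1:] - pred[..., :, :-1]) - (target[..., :, 1:] - target[..., :, :-1])
    dy = (pred[..., 1:, :] - pred[..., :-1, :]) - (target[..., 1:, :] - target[..., :-1, :])
    return huber + lam * (dx.abs().mean() + dy.abs().mean())

pred = torch.zeros(1, 1, 128, 128)
target = torch.zeros(1, 1, 128, 128)
print(unwrap_loss(pred, target).item())  # 0.0 for identical inputs
```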

📜 Citation

@inproceedings{singh2026when,
  title={When Less Is More: Simplicity Beats Complexity for Physics-Constrained In{SAR} Phase Unwrapping},
  author={Prabhjot Singh and Manmeet Singh},
  booktitle={4th ICLR Workshop on Machine Learning for Remote Sensing (Main Track)},
  year={2026},
  url={https://openreview.net/forum?id=liJldeR5ZX}
}

📜 License

Models are released under CC BY 4.0.

"Domain physics, not architectural sophistication, should guide ML4RS design. Less is more." 🛰️
