Synthetic aperture radar images for cGAN training

Citation Author(s):
Jaime
Laviada
Universidad de Oviedo
Guillermo
Álvarez-Narciandi
Universidad de Oviedo
Fernando
Las-Heras
Universidad de Oviedo
Submitted by:
Jaime Laviada
Last updated:
Wed, 03/16/2022 - 13:19
DOI:
10.21227/5gzt-cm67

Abstract 

Some novel methods for imaging based on synthetic aperture radar (SAR) can result in images contaminated by artifacts as a consequence of pushing the limits of the algorithms. In order to mitigate the impact of these artifacts, image translation techniques can be exploited to turn the SAR image into a cleaner one. For this purpose, multiple techniques can be used, such as convolutional neural networks or generative adversarial networks. However, training these systems can require a large number of images, which can be computationally expensive to generate. This dataset provides training images for two different problems. The first is related to cylindrically multilayered problems (i.e., pipe-like geometries) and the second to sparse and irregular sampling, such as that of freehand approaches. In both cases, the ideal output as well as the real output are provided.

Instructions: 

This dataset includes several images generated by means of two advanced synthetic aperture radar techniques described in the following papers:

[1] J. Laviada, B. Wu, M. T. Ghasr, and R. Zoughi, “Nondestructive evaluation of microwave-penetrable pipes by synthetic aperture imaging enhanced by full-wave field propagation model,” IEEE Transactions on Instrumentation and Measurement, vol. 68, no. 4, pp. 1112–1119, 2019

[2] G. Álvarez-Narciandi, M. López-Portugués, F. Las-Heras, and J. Laviada, “Freehand, agile, and high-resolution imaging with compact mm-wave radar,” IEEE Access, vol. 7, pp. 95 516–95 526, 2019.

The following data is included in this dataset:

1) Training data for pipe-like structures.

2) Training data for scissor-like structures.

In both cases, two folders are included: folder 'A' contains the target images and folder 'B' contains the SAR images. For the pipe-like structures, the images are two-channel images containing the real part of the reflectivity (channel 'R') and the imaginary part of the reflectivity (channel 'G'); channel 'B' does not contain information. For the scissor-like structures, the images are single-channel images containing the modulus of the reflectivity.
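For instance, the complex reflectivity can be packed into and recovered from such a two-channel image as follows. This is only a sketch with an assumed symmetric linear scaling; the actual normalization used to produce the dataset images is not specified here:

```python
import numpy as np

def complex_to_rgb(reflectivity, vmax):
    """Pack complex reflectivity into an 8-bit RGB array.
    Assumes values in [-vmax, vmax] mapped linearly to [0, 255]
    (one possible convention, not necessarily the dataset's)."""
    scale = lambda x: np.clip((x / vmax + 1.0) / 2.0 * 255.0, 0, 255).astype(np.uint8)
    rgb = np.zeros(reflectivity.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = scale(reflectivity.real)  # channel 'R': real part
    rgb[..., 1] = scale(reflectivity.imag)  # channel 'G': imaginary part
    # channel 'B' carries no information and stays zero
    return rgb

def rgb_to_complex(rgb, vmax):
    """Inverse mapping from the two used channels back to complex reflectivity."""
    unscale = lambda c: (c.astype(np.float64) / 255.0 * 2.0 - 1.0) * vmax
    return unscale(rgb[..., 0]) + 1j * unscale(rgb[..., 1])
```

The round trip is lossy only up to the 8-bit quantization of each channel.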

On the one hand, the pipe-like structure images were obtained from simulations of point-like targets by means of the corresponding Green's function. A monostatic setup at a distance of 25 cm from the center was considered, with the number of equally spaced angles set to 420. In this setup, only XY slices are considered. The total number of frequencies used to build the image was 51, covering the X band (8.2 GHz-12.4 GHz).
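The acquisition geometry described above can be reproduced as follows (a sketch of the stated parameters only; variable names are ours):

```python
import numpy as np

# 51 equally spaced frequencies covering the X band (8.2-12.4 GHz)
freqs = np.linspace(8.2e9, 12.4e9, 51)

# 420 equally spaced angles of the monostatic probe on a 25 cm circle
angles = np.linspace(0.0, 2.0 * np.pi, 420, endpoint=False)
radius = 0.25  # m, distance from the center
positions = radius * np.column_stack((np.cos(angles), np.sin(angles)))  # XY slice
```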

On the other hand, the freehand setup was simulated considering the same topology as in [3], entailing 2 transmitters and 4 receivers sweeping from 57 GHz to 63 GHz. The scanning plane was at 5 cm and limited to a size of 20 cm x 20 cm. A random deviation of 1 mm from an ideal equally spaced grid with 2.5 mm x 2.5 mm cells was considered. In addition, rotations of up to 5 deg. were also considered. 50% of the samples were discarded. Finally, noise was added so that the signal-to-noise ratio was 20 dB.

[3]  G. Álvarez-Narciandi, J. Laviada, and F. Las-Heras, “Freehand mm-wave imaging with a compact MIMO radar,” IEEE Transactions on Antennas and Propagation, vol. 69, no. 2, pp. 1224–1229, 2021.
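A minimal sketch of this irregular sampling scheme is shown below. It is our own illustration, assuming the 1 mm deviation is uniform per axis, the rotations are uniform in [-5, 5] degrees, and the added noise is complex white Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ideal 2.5 mm x 2.5 mm grid over the 20 cm x 20 cm scanning plane (at z = 5 cm)
cell = 2.5e-3
coords = np.arange(0.0, 0.20 + cell / 2, cell)
gx, gy = np.meshgrid(coords, coords)
pts = np.column_stack((gx.ravel(), gy.ravel()))

# Random deviation of up to 1 mm per axis from the ideal grid (assumed uniform)
pts = pts + rng.uniform(-1e-3, 1e-3, size=pts.shape)

# Random in-plane rotation of up to 5 degrees per sample
rot = rng.uniform(-np.deg2rad(5), np.deg2rad(5), size=len(pts))

# Discard 50% of the samples to emulate the irregular freehand coverage
keep = rng.random(len(pts)) < 0.5
pts, rot = pts[keep], rot[keep]

# Add complex white Gaussian noise so that the SNR is 20 dB
def add_awgn(signal, snr_db=20.0, rng=rng):
    p_sig = np.mean(np.abs(signal) ** 2)
    p_noise = p_sig / 10 ** (snr_db / 10.0)
    noise = np.sqrt(p_noise / 2) * (rng.standard_normal(signal.shape)
                                    + 1j * rng.standard_normal(signal.shape))
    return signal + noise
```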

In order to train the network, the original pix2pix implementation or any of its ports can be used:

[4] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” 2018, [Online]. Available: arXiv:1611.07004.

[5]  J. Pinkney, “Matlab implementation of pix2pix,” 2021, [Online]. Available: https://github.com/matlab-deep-learning/pix2pix.

Funding Agency: 
Ministerio de Ciencia, Innovación y Universidades of Spain (Project RTI2018-095825-B-I00)