
Integrated Self-Contained Trajectory Estimation and Multistatic SAR Imaging in a Non-Static Uncoupled Bistatic Radar Network



Abstract:

Radar imaging performance can be significantly improved by creating synthetic apertures along a radar sensor's trajectory compared to standard MIMO imaging radars. Additionally, observing the scenery from both monostatic and bistatic perspectives with large bistatic angles can further increase the information content of radar images, as different parts of complex targets can exhibit different scattering mechanisms. Both technologies, synthetic aperture radar and coherent multistatic radar networks, come with demanding system requirements regarding the localization and synchronization of the involved radars, which are addressed by the proposed approach. The unique aspect of our novel bi-/multistatic radar approach is that no auxiliary sensor technology is needed to determine the trajectory. The same radar signals are jointly used at the same time for trajectory determination, clock synchronization, and bistatic SAR imaging. The integrated self-contained trajectory estimation is based on a particle filter algorithm that processes the line-of-sight radar signals of the bistatic radar pairs, which are exchanged in a double-sided two-way ranging manner. This approach opens up new applications of bi-/multistatic radar for autonomous air and ground vehicles. However, the requirement of a line-of-sight connection between the radar pairs imposes a constraint on possible bistatic constellations and trajectories. Therefore, it is shown that suitable compromises regarding the geometry, localization accuracy, and resolution of SAR imaging must also be taken into account. We demonstrate the capabilities of this approach by generating monostatic and bistatic SAR images with 77 GHz SIMO FMCW radar sensors from indoor and outdoor measurement scenarios with synthetically generated apertures estimated by the integrated self-contained localization algorithm.
Published in: IEEE Journal of Microwaves ( Volume: 5, Issue: 3, May 2025)
Page(s): 600 - 615
Date of Publication: 15 April 2025
Electronic ISSN: 2692-8388

SECTION I.

Introduction

The concept of bistatic synthetic aperture radar (BiSAR) is already prominent in the field of spaceborne SAR, as shown in [1] and [2], as well as in applications in airborne SAR [3], [4]. In addition, combinations of spaceborne and airborne SAR systems have been utilized [5]. BiSAR systems have the potential to increase geometric diversity [6], [7], so the investigated area can be observed from different bistatic angles, increasing the information about the image scene by utilizing the angle-dependent bistatic radar cross-section (RCS) [8]. Recent developments show an increasing interest in the deployment of SAR on drone-based platforms, known as UAV-borne SAR [9], [10], [11], [12], especially for multistatic radar constellations with multiple platforms [10], [13], [14]. Such systems are more flexible for the investigation of a specific scene of interest and less expensive than satellite and airborne-based platforms. Another upcoming development is the use of BiSAR for automotive applications [15]. However, the establishment of UAV-borne or automotive BiSAR raises new questions, which have not yet been fully solved and are subject to ongoing research.

In a bistatic constellation with two independently moving radar nodes, it is important to ensure spatial and temporal coherence between the transmitter (TX) and the receiver (RX) while sampling the image scene at the different positions along the SAR trajectory [16]. Spatial coherence means that the antenna positions must be known precisely with an accuracy of \lambda _{0}/4 for coherent radar data processing [17]. Depending on the system's carrier frequency, this implies an accuracy in the cm to mm range. A localization accuracy in the cm range can be achieved, for example, with global navigation satellite systems (GNSS) supported by real-time kinematic (RTK) in combination with inertial navigation systems (INS), which use the measurements of an inertial measurement unit (IMU) [18]. The requirement of additional sensors comes with high costs and increased integration and processing efforts. In addition, for typical automotive frequencies in the 77GHz region, a relative positioning accuracy of a few millimeters is required, which cannot be achieved by conventional GNSS- and INS-based solutions. In [17] and [19], a simultaneous localization and mapping (SLAM) based trajectory estimation is presented, which enables SAR processing for automotive radar sensors, but this approach is still limited to static radar networks mounted on a common platform.

Additionally, and in contrast to monostatic SAR systems or distributed systems that share a common reference, time and phase coherence between the radar nodes must be established to process the bistatic signals, because each radar has its own reference oscillator [20]. Spaceborne and airborne systems use ultra-stable oscillators [21]. These relax the requirements of the synchronization link, allowing a pulsed alternated scheme [22], but they are more expensive than conventional quartz oscillators. They are therefore not a proper solution for UAV or automotive radars, where the required quantities are much larger and unit price is a limiting factor. Wireless synchronization approaches for automotive radar networks are presented in [23] and [24]. These works target radars mounted on a common platform (i.e., a car) and cannot be applied to non-static scenarios in which the nodes move independently. In [25], a synchronization approach for a phase-modulated continuous wave (PMCW) signal is presented and suggested for use in multistatic SAR applications. A solution that does not require presynchronization is proposed in [26] through the use of a stationary repeater node. Both approaches lack the opportunity for radar-based self-localization.

Combined approaches for the synchronization and localization of uncoupled bi-/multistatic radar networks with independently moving nodes are presented in [15] and [28]. The method in [28] utilizes an additional degree of freedom per target in the unsynchronized radar data of a 2D localization scenario to estimate unknown antenna positions and timing offsets. The simulated performance was in the decimeter range for the localization and in the range of nanoseconds for the time synchronization, which is not suited for BiSAR processing. Tagliaferri et al. propose a solution in [15] that utilizes coarse GNSS estimates in combination with a coregistration of the monostatic images and the processing of common features in all images to obtain precise position estimates and phase synchronization. Hence, coherent processing of the mono- and bistatic images was achieved, but the method was showcased for SAR trajectories limited to 9 cm.

Our previous work in [27] presented a synchronization approach that establishes phase coherency in an uncoupled bistatic radar network by using the direct path between the radar pair as a reference link. This enabled coherent bistatic processing of frequency-modulated continuous wave (FMCW) chirp-sequence waveforms. We extend this approach with an integrated self-contained localization and tracking algorithm that uses the same line-of-sight radar signal that serves for phase synchronization.

In [29], an extended Kalman filter (EKF) algorithm was introduced to process the line-of-sight (LoS) signals along multiple frames with a localization accuracy of better than 1 cm. This filter was able to process only phase differences, which yield the angle information of the incoming waves at both radar nodes. Processing the absolute phases would give precise information about the relative displacement between the antenna positions of successive chirps in fractions of the wavelength [30]. However, such absolute phase measurements are highly ambiguous due to the 2\pi wrapping of the phases. This paper proposes a particle filter-based approach for the self-contained localization that is capable of dealing with the resulting multi-modal measurement functions [31]. With this precise tracking algorithm, it is possible to jointly synchronize the uncoupled bistatic radar network, estimate the antenna positions along the synthetic aperture, and generate multistatic SAR images from the same radar data, given a geometric constellation with a line-of-sight connection between the radars.

The paper is organized as follows: The system architecture and the applied system models are introduced in Section II. In Section III, the characteristics of near-field BiSAR systems are investigated. The proposed particle filter (PF) is explained in detail, and its performance is evaluated in Section IV. Finally, in Section V, the subsequent steps to synchronize the bistatic data are briefly explained, and we demonstrate the performance of the proposed algorithm by applying the self-contained trajectory estimation to indoor and outdoor bistatic SAR measurements with uncoupled non-static radar nodes and generating the multistatic SAR images.

SECTION II.

System Model

The concept of integrated self-contained trajectory estimation involves using the radar sensors jointly for radar self-localization, synchronization, and multistatic SAR imaging in a non-static, uncoupled bistatic radar network. Fig. 1 shows a block diagram of this bistatic network setup. For the concept to work, a line-of-sight link between the radar pairs is necessary. This requirement restricts possible geometric constellations to those where each radar's field of view includes the respective other radar and the target that should be recorded at all positions along the mobile radar's synthetic aperture. Although this constraint may result in geometries that are not optimal for BiSAR resolution, our approach offers significant benefits for multi-platform multistatic radar systems, such as a reduction of the required hardware components and integration effort. The contributions of this work are:

  • theoretical investigation of near-field BiSAR resolution

  • assessment of the impact of velocity errors in the trajectory estimation on BiSAR processing

  • simulation-based evaluation of the bistatic RCS of a complex target

  • novel integrated self-contained tracking algorithm for trajectory estimation in an uncoupled radar network

  • enabling simultaneous monostatic and bistatic SAR processing, referred to as multistatic processing in this paper

  • experimental demonstration of self-contained radar-based synchronization, trajectory estimation and multistatic SAR imaging

We will restrict the considerations to two-dimensional geometries with FMCW SIMO radar sensors, comprising one TX antenna and N_{\text{RX}} RX antennas. The geometric relationships are detailed in the next section, followed by a signal model that describes the initially incoherent bistatic radar systems, based on the derivations from [27].
Figure 1. Sketch of the bistatic SIMO radar setup, where radar node 2 spans a SAR trajectory, which is indicated by the dashed green line. The second radar's relative 2D position is given by the coordinates x_{\mathrm{R}} and y_{\mathrm{R}}, and its orientation is described by the tilt angle \gamma. The red box indicates the imaging scene.

A. Near-Field Bistatic SAR Geometry

Throughout this paper, one radar node is assumed to be static, while the other node is assumed to be moving with a constant velocity \vec{v}_{\mathrm{R}}. A block diagram of the proposed setup is shown in Fig. 1. The origin of the common coordinate frame is set at the position of the first RX antenna of the static radar node, so the position of its TX antenna \vec{p}_{\text{TX},1} and the n_{\mathrm{A}}-th RX antenna \vec{p}_{\text{RX},n_{\mathrm{A}},1} can be derived from the known array geometry. Given the tilt angle \gamma and the position of the second radar \vec{p}_{\mathrm{R},2}=(x_{\mathrm{R}},y_{\mathrm{R}})^{\mathrm{T}}, its antenna positions can be calculated by \begin{align*} \vec{p}_{\text{TX},2} &= \begin{pmatrix}\cos \left(\gamma \right) & -\sin \left(\gamma \right)\\ \sin \left(\gamma \right) & \cos \left(\gamma \right) \end{pmatrix} \vec{p}_{\text{TX},1} + \vec{p}_{\mathrm{R},2} \tag{1}\\ \vec{p}_{\text{RX},n_{\mathrm{A}},2} &= \begin{pmatrix}\cos \left(\gamma \right) & -\sin \left(\gamma \right)\\ \sin \left(\gamma \right) & \cos \left(\gamma \right) \end{pmatrix} \vec{p}_{\text{RX},n_{\mathrm{A}},1} + \vec{p}_{\mathrm{R},2}. \tag{2} \end{align*}

As radar 2 moves with constant velocity, the position vector \vec{p}_{\mathrm{R},2} changes linearly over time and can be expressed as \begin{align*} \vec{p}_{\mathrm{R},2}(t) = \begin{pmatrix}x_{\mathrm{R},0}\\ y_{\mathrm{R},0} \end{pmatrix} + \vec{v}_{\mathrm{R}} \cdot t, \tag{3} \end{align*}
where x_{\mathrm{R},0} and y_{\mathrm{R},0} are the coordinates of the initial position at the beginning of the SAR recording. The orientation \gamma is assumed to be constant during the SAR trajectory.
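As a concrete illustration of (1)–(3), the following minimal Python sketch maps an antenna position given in the local array frame to radar 2's global frame; the function names and numeric values are our own illustration, not part of the paper.

```python
import numpy as np

def rotation(gamma: float) -> np.ndarray:
    """2D rotation matrix for the tilt angle gamma (rad), cf. (1)-(2)."""
    c, s = np.cos(gamma), np.sin(gamma)
    return np.array([[c, -s], [s, c]])

def radar2_antenna_position(p_local, gamma, p_r0, v_r, t):
    """Global position of a radar-2 antenna at time t: rotate the local
    antenna coordinates by gamma and add the linearly moving platform
    position per (3)."""
    p_r2 = np.asarray(p_r0, float) + np.asarray(v_r, float) * t  # (3)
    return rotation(gamma) @ np.asarray(p_local, float) + p_r2

# Example (illustrative numbers): TX antenna at the local origin, radar 2
# starting at (2, 1) m, moving at 0.1 m/s along x, tilted by 30 degrees;
# after 5 s the platform has advanced 0.5 m:
p_tx2 = radar2_antenna_position([0.0, 0.0], np.deg2rad(30.0),
                                [2.0, 1.0], [0.1, 0.0], 5.0)
# -> array([2.5, 1. ])
```

Because the TX antenna sits at the local origin here, the rotation has no effect and only the translated platform position (3) remains; a non-zero local offset would be rotated by \gamma first.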

The model parameter descriptions are gathered in Table 1.

B. Signal Model

During the SAR integration time, multiple frames are transmitted and received by both radars simultaneously. Every frame consists of a chirp sequence with N_{\text{ch}} chirps. During every chirp sequence, an unknown constant frequency offset between both reference oscillators is assumed, which leads to a constant time drift between both local time bases. The local time for the n_{\mathrm{F}}-th frame is calculated as \begin{equation*} t_{i,n_{\mathrm{F}}}(t) = \left(1+\delta _{t,i,n_{\mathrm{F}}}\right)t + \Delta \tau _{0,i,n_{\mathrm{F}}}, \tag{4} \end{equation*}

where i\in \lbrace 1,2\rbrace is the unit index, \Delta \tau _{0,i,n_{\mathrm{F}}} is the initial time offset at frame start, and \delta _{t,i,n_{\mathrm{F}}}=\frac{f_{\text{ref}}^{i,n_{\mathrm{F}}}-f_{\text{ref}}}{f_{\text{ref}}} is the time drift due to the difference between the reference frequency of the local oscillator f_{\text{ref}}^{i,n_{\mathrm{F}}} and its nominal frequency f_{\text{ref}}. This model can be further simplified by the assumption of a symmetric systematic error in time and frequency, which has negligible influence on the later processing results [30]. This leads to the expressions \Delta \tau _{0,1,n_{\mathrm{F}}} = \frac{\Delta \tau _{0,n_{\mathrm{F}}}}{2} = - \Delta \tau _{0,2,n_{\mathrm{F}}} and \delta _{t,1,n_{\mathrm{F}}} = \frac{\delta _{t,n_{\mathrm{F}}}}{2} = -\delta _{t,2,n_{\mathrm{F}}} to describe both local time bases.

TABLE 1 Geometry Model Parameters

In addition to these systematic errors in the bistatic beat signals, each locally generated TX signal suffers from random phase errors. This behaviour can be modelled chirp-wise by a zero-mean time-dependent phase noise part \phi _{\text{pn},k,n_{\mathrm{F}},1/2}(t) and a constant unknown start phase of each chirp \Theta _{k,n_{\mathrm{F}},1/2} in each radar node, where k indicates the chirp index.

Applying the systematic and stochastic error sources to a FMCW signal model, as was done comprehensively in [30], yields the following expressions for the bistatic beat signal phases from radar 2 to 1 \begin{align*} \Phi _{\mathrm{B},k,n_{\mathrm{F}}}^{2\to 1}&(t,\tau) \\ =&2\pi \left[-f_{0}\delta _{t,n_{\mathrm{F}}} - \mu \tau + \mu \left(\Delta \tau _{0,n_{\mathrm{F}}} + kT_{\text{rep}}\delta _{t,n_{\mathrm{F}}}\right) \right] t \\ &-2\pi \mu \delta _{t,n_{\mathrm{F}}}t^{2} - 2\pi f_{0}\tau + \phi _{\text{pn},k,n_{\mathrm{F}},2}(t-\tau) \\ \ &- \phi _{\text{pn},k,n_{\mathrm{F}},1}(t) + \underbrace{\Theta _{k,n_{\mathrm{F}},2} - \Theta _{k,n_{\mathrm{F}},1}}_{=\Delta \Theta _{k,n_{\mathrm{F}}}} \tag{5} \end{align*}

and from radar 1 to 2 \begin{align*} \Phi _{\mathrm{B},k,n_{\mathrm{F}}}^{1\to 2}&(t,\tau) \\ =&2\pi \left[f_{0}\delta _{t,n_{\mathrm{F}}} - \mu \tau - \mu \left(\Delta \tau _{0,n_{\mathrm{F}}} + kT_{\text{rep}}\delta _{t,n_{\mathrm{F}}}\right) \right] t \\ &-2\pi \mu \delta _{t,n_{\mathrm{F}}}t^{2} -2\pi f_{0}\tau + \phi _{\text{pn},k,n_{\mathrm{F}},1}(t-\tau) \\ &- \phi _{\text{pn},k,n_{\mathrm{F}},2}(t) - \Delta \Theta _{k,n_{\mathrm{F}}}, \tag{6} \end{align*}
where f_{0} is the carrier frequency, \mu =\frac{B}{T_{\text{ch}}} is the chirp slope, T_{\text{rep}} is the chirp repetition time in one frame, and \tau is the time-of-flight (ToF) of a given channel. An overview of the bistatic radars' signal parameters is given in Table 2. We divide the channel into a LoS component with \begin{align*} \tau _{\text{LoS},n_{\mathrm{A}}}^{2\to 1} &= \frac{\left\Vert \vec{p}_{\text{RX},n_{\mathrm{A}},1}-\vec{p}_{\text{TX},2}\right\Vert }{\mathrm{c}_{0}} \tag{7}\\ \tau _{\text{LoS},n_{\mathrm{A}}}^{1\to 2} &= \frac{\left\Vert \vec{p}_{\text{RX},n_{\mathrm{A}},2}-\vec{p}_{\text{TX},1}\right\Vert }{\mathrm{c}_{0}} \tag{8} \end{align*}
and several non-line-of-sight (NLoS) components due to passive reflections from point targets in the environment \begin{align*} \tau _{\ell,n_{\mathrm{A}}}^{2\to 1} &= \frac{\left\Vert \vec{p}_{\mathrm{T},\ell }-\vec{p}_{\text{TX},2}\right\Vert + \left\Vert \vec{p}_{\text{RX},n_{\mathrm{A}},1} - \vec{p}_{\mathrm{T},\ell }\right\Vert }{\mathrm{c}_{0}} \tag{9}\\ \tau _{\ell,n_{\mathrm{A}}}^{1\to 2} &= \frac{\left\Vert \vec{p}_{\mathrm{T},\ell }-\vec{p}_{\text{TX},1}\right\Vert + \left\Vert \vec{p}_{\text{RX},n_{\mathrm{A}},2} - \vec{p}_{\mathrm{T},\ell }\right\Vert }{\mathrm{c}_{0}}, \tag{10} \end{align*}
where \ell is the index of a specific point target and \mathrm{c}_{0} the speed of light in vacuum. Putting this together, we can derive a model for the bistatic beat signals: \begin{align*} &s_{\mathrm{B},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{2\to 1} (t) = A_{\text{RX},LoS}^{2\to 1}\cdot \mathrm{e}^{\mathrm{j}\Phi _{\mathrm{B},k,n_{\mathrm{F}}}^{2\to 1}(t,\tau _{\text{LoS},n_{\mathrm{A}}}^{2\to 1})} \\ &\quad + \sum _{\ell } A_{\text{RX},\ell }^{2\to 1}\cdot \mathrm{e}^{\mathrm{j}\Phi _{\mathrm{B},k,n_{\mathrm{F}}}^{2\to 1}(t,\tau _{\ell,n_{\mathrm{A}}}^{2\to 1})} + n_{k,n_{\mathrm{A}},n_{\mathrm{F}},1}(t)\tag{11}\\ &s_{\mathrm{B},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{1\to 2} (t) = A_{\text{RX},LoS}^{1\to 2}\cdot \mathrm{e}^{\mathrm{j}\Phi _{\mathrm{B},k,n_{\mathrm{F}}}^{1\to 2}(t,\tau _{\text{LoS},n_{\mathrm{A}}}^{1\to 2})} \\ &\quad + \sum _{\ell } A_{\text{RX},\ell }^{1\to 2}\cdot \mathrm{e}^{\mathrm{j}\Phi _{\mathrm{B},k,n_{\mathrm{F}}}^{1\to 2}(t,\tau _{\ell,n_{\mathrm{A}}}^{1\to 2})} + n_{k,n_{\mathrm{A}},n_{\mathrm{F}},2}(t), \tag{12} \end{align*}
where n_{k,n_{\mathrm{A}},n_{\mathrm{F}},i}(t) is additive white Gaussian noise (AWGN), which is assumed to be uncorrelated between all RX channels.
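To make the channel model of (7)–(10) concrete, the short Python sketch below computes the LoS and reflected-path ToFs for arbitrary antenna and target positions; the helper names and coordinates are our own illustration, not from the paper.

```python
import numpy as np

C0 = 299_792_458.0  # speed of light in vacuum (m/s)

def tof_los(p_tx, p_rx):
    """Line-of-sight time of flight between a TX and an RX antenna, cf. (7)-(8)."""
    return np.linalg.norm(np.asarray(p_rx, float) - np.asarray(p_tx, float)) / C0

def tof_nlos(p_tx, p_rx, p_target):
    """Reflected-path ToF via a point target, TX -> target -> RX, cf. (9)-(10)."""
    p_tx, p_rx, p_t = (np.asarray(p, float) for p in (p_tx, p_rx, p_target))
    return (np.linalg.norm(p_t - p_tx) + np.linalg.norm(p_rx - p_t)) / C0

# Target on the perpendicular bisector of a 6 m baseline: two 5 m legs,
# i.e. a bistatic range of 10 m.
tau_nlos = tof_nlos([-3.0, 0.0], [3.0, 0.0], [0.0, 4.0])
```

Multiplying `tau_nlos` by `C0` recovers the 10 m bistatic range, the quantity whose iso-contours are the ellipses discussed in Section III.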

TABLE 2 Radar Signal Model Parameters

Furthermore, if the rate of change of the phase noise term \phi _{\text{pn},k,n_{\mathrm{F}},1/2} is considerably smaller than \frac{2\pi }{\tau }, the time shift due to the ToF can be neglected, as shown in [27]. The phase noise differences in both beat signal terms can then be simplified to \begin{align*} &\phi _{\text{pn},k,n_{\mathrm{F}},2}(t-\tau) - \phi _{\text{pn},k,n_{\mathrm{F}},1}(t) \\ &\approx \phi _{\text{pn},k,n_{\mathrm{F}},2}(t) - \phi _{\text{pn},k,n_{\mathrm{F}},1}(t) \\ &\approx -\left[\phi _{\text{pn},k,n_{\mathrm{F}},1}(t-\tau) - \phi _{\text{pn},k,n_{\mathrm{F}},2}(t)\right]. \tag{13} \end{align*}

In consequence, all error sources lead to phase disturbances with the same magnitude and opposite sign in both stations' beat signals, while the phase terms containing the desired information about the ToF contribute with equal signs. This behaviour is used later to eliminate synchronization errors and to obtain accurate self-localization results.

The beat frequencies and peak phases that correspond to a given channel path with ToF \tau are given by \begin{align*} f_{\mathrm{B},k,n_{\mathrm{F}}}^{2\to 1} =& -f_{0}\delta _{t,n_{\mathrm{F}}} - \mu \tau + \mu (\Delta \tau _{0,n_{\mathrm{F}}} + kT_{\text{rep}}\delta _{t,n_{\mathrm{F}}}) \\ &+ \epsilon _{f,\text{pn},k,n_{\mathrm{F}}} + \epsilon _{f,\text{awgn},k,n_{\mathrm{F}},1}\tag{14}\\ f_{\mathrm{B},k,n_{\mathrm{F}}}^{1\to 2} =& f_{0}\delta _{t,n_{\mathrm{F}}} - \mu \tau - \mu (\Delta \tau _{0,n_{\mathrm{F}}} + kT_{\text{rep}}\delta _{t,n_{\mathrm{F}}}) \\ &- \epsilon _{f,\text{pn},k,n_{\mathrm{F}}} + \epsilon _{f,\text{awgn},k,n_{\mathrm{F}},2}, \tag{15} \end{align*}

and \begin{align*} \varphi _{\mathrm{B},k,n_{\mathrm{F}}}^{2\to 1} &= -2\pi f_{0} \tau + \Delta \Theta _{k,n_{\mathrm{F}}} + \epsilon _{\varphi,\text{pn},k,n_{\mathrm{F}}} + \epsilon _{\varphi,\text{awgn},k,n_{\mathrm{F}},1}\tag{16}\\ \varphi _{\mathrm{B},k,n_{\mathrm{F}}}^{1\to 2} &= -2\pi f_{0} \tau - \Delta \Theta _{k,n_{\mathrm{F}}} - \epsilon _{\varphi,\text{pn},k,n_{\mathrm{F}}} + \epsilon _{\varphi,\text{awgn},k,n_{\mathrm{F}},2}, \tag{17} \end{align*}
respectively, where \epsilon _{f/\varphi,\text{pn}} is the correlated phase noise-induced frequency and phase error and \epsilon _{f/\varphi,\text{awgn},k,n_{\mathrm{F}},i} the uncorrelated error due to AWGN.
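The error-cancellation property behind the double-sided two-way ranging can be checked numerically: summing the beat frequencies of the two directions per (14) and (15) removes the drift, offset, and correlated phase-noise terms, leaving only the ToF. In the sketch below, all parameter values are illustrative placeholders, not the paper's measured values.

```python
# Illustrative system parameters (assumed, not taken from the paper)
f0 = 77e9                # carrier frequency (Hz)
mu = 1.5e9 / 100e-6      # chirp slope B / T_ch (Hz/s)
T_rep = 120e-6           # chirp repetition time (s)
tau = 20e-9              # true time of flight (s)
k = 5                    # chirp index

# Synchronization errors, unknown to both stations
delta_t = 1e-7           # relative clock drift
dtau0 = 3e-9             # initial time offset (s)
eps_pn = 2.0             # correlated phase-noise-induced frequency error (Hz)

# Beat frequencies per (14) and (15), AWGN omitted for clarity
f_21 = -f0 * delta_t - mu * tau + mu * (dtau0 + k * T_rep * delta_t) + eps_pn
f_12 = f0 * delta_t - mu * tau - mu * (dtau0 + k * T_rep * delta_t) - eps_pn

# The correlated error terms appear with opposite signs and cancel in the
# sum f_21 + f_12 = -2*mu*tau, so the ToF is recovered without knowing
# delta_t, dtau0, or eps_pn:
tau_est = -(f_21 + f_12) / (2.0 * mu)  # recovers the true tau of 20 ns
```

The same sign structure holds for the peak phases (16) and (17), where the sum eliminates \Delta \Theta and the correlated phase-noise error.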

SECTION III.

Bistatic SAR Theory

Relevant to this paper are near-field bistatic geometries in the 2D plane, as shown in Fig. 2. Hence, this section introduces a comprehensive discussion of the metrics of those systems, i.e., the resolution comparison of monostatic and bistatic SAR images and the angle-dependent bistatic RCS. For simplicity, the investigated scenarios in this section are limited to a moving transmitter (TX) with constant velocity \vec{v}_{\text{TX}} and a static receiver (RX).

Figure 2. Geometry of a 2D bistatic SAR scene, where the transmitter (TX) is moved along a linear SAR trajectory 1.5 m in length with constant velocity, whereas the receiver (RX) position is fixed. Iso-range and iso-Doppler contours for a target at position number 3 are drawn in the image scene with the assumption that the TX is located at the SAR trajectory's center position. The system's bandwidth is 1.5 GHz, and the carrier wavelength is 3.94 mm.

A. Bistatic SAR Resolution

The resolution of an FMCW radar SAR system can be divided into range and Doppler resolution. As discussed in [6], the point spread function (PSF) of a point target in the imaging scene is increasingly spatially variant as the radar nodes approach the image scene (see Fig. 2). Cardillo derived a gradient-based approach to approximate the spatial-dependent range and cross-range resolution of a bistatic SAR system in [32]. Therefore, this method is introduced and used to quantify the resolution. The bistatic range covers the distance from the TX antenna to the target and from the target to the RX antenna: \begin{equation*} R_{\mathrm{b}}(\vec{p}_{\mathrm{T}}) = \left\Vert \vec{p}_{\mathrm{T}}-\vec{p}_{\text{TX}}\right\Vert + \left\Vert \vec{p}_{\text{RX}}-\vec{p}_{\mathrm{T}}\right\Vert, \tag{18} \end{equation*}

where \vec{p}_{\mathrm{T}} is the target position and \vec{p}_{\text{TX}} and \vec{p}_{\text{RX}} are the transmitter and receiver positions, respectively. In a bistatic radar constellation, the curves of constant bistatic range are ellipses, with the TX and RX positions being the focal points. Similarly, the Doppler component can be derived from the relative motion of the TX antenna with respect to the target position, which is given by the scalar product of the velocity vector \vec{v}_{\text{TX}} and the normalized vector between TX and target position \vec{n}_{\mathrm{TX,T}}, yielding the Doppler frequency \begin{align*} f_{\mathrm{D}}(\vec{p}_{\mathrm{T}}) &= \frac{1}{\lambda _{0}}\left\langle \vec{v}_{\text{TX}},\vec{n}_{\mathrm{TX,T}}\right\rangle \\ \ &\ \text{with} \quad \vec{n}_{\mathrm{TX,T}} = \frac{\vec{p}_{\mathrm{T}}-\vec{p}_{\text{TX}}}{\left\Vert \vec{p}_{\mathrm{T}}-\vec{p}_{\text{TX}}\right\Vert },\tag{19} \end{align*}
where \left\langle \cdot,\cdot \right\rangle indicates the scalar product and \lambda _{0} is the wavelength corresponding to a continuous wave (CW) signal with frequency f_{0}. The curves in the image scene of Fig. 2 show contours of constant bistatic range and Doppler frequency for target 3. Their gradients are given by \begin{align*} \vec{\nabla } R_{\mathrm{b}}= \left(\frac{\partial }{\partial x}R_{\mathrm{b}}, \frac{\partial }{\partial y}R_{\mathrm{b}}\right)^{\mathrm{T}},\tag{20}\\ \vec{\nabla } f_{\mathrm{D}} = \left(\frac{\partial }{\partial x}f_{\mathrm{D}}, \frac{\partial }{\partial y}f_{\mathrm{D}}\right)^{\mathrm{T}}, \tag{21} \end{align*}
where (\cdot)^{\mathrm{T}} is the transpose operator. Their magnitudes |\vec{\nabla } R_{\mathrm{b}}| and |\vec{\nabla } f_{\mathrm{D}} | give the maximum rates of change for the bistatic range and the Doppler frequency, respectively, and their normalized versions \begin{equation*} \vec{n}_{\vec{\nabla }R_{\mathrm{b}}} = \frac{\vec{\nabla } R_{\mathrm{b}}}{\left|\vec{\nabla } R_{\mathrm{b}}\right|}\quad \text{and}\quad \vec{n}_{\vec{\nabla }f_{\mathrm{D}}} = \frac{\vec{\nabla } f_{\mathrm{D}}}{\left|\vec{\nabla } f_{\mathrm{D}} \right|} \tag{22} \end{equation*}
are the directions of those change rates, which are sketched in Fig. 2 for target 3. Thus, the bistatic range resolution is calculated as: \begin{equation*} \vec{\delta }_{r} = 0.89\cdot \frac{\mathrm{c}_{0}/B}{\left|\vec{\nabla } R_{\mathrm{b}}\right|}\cdot \vec{n}_{\vec{\nabla }R_{\mathrm{b}}}, \tag{23} \end{equation*}
where B is the bandwidth of the transmitted chirp. The factor of 0.89 is due to the 3 dB bandwidth of the assumed rectangular window [33]. Similarly, the Doppler resolution is given by \begin{equation*} \vec{\delta }_{\mathrm{D}} = 0.89\cdot \frac{1/T_{\text{int}}}{\left|\vec{\nabla } f_{\mathrm{D}} \right|}\cdot \vec{n}_{\vec{\nabla }f_{\mathrm{D}}}, \tag{24} \end{equation*}
where T_{\text{int}} is the integration time along the SAR trajectory. The derivation of (23) and (24) is explained in more detail in [32]. In contrast to the monostatic case, where range and Doppler gradients are orthogonal, the angle \Theta between both gradients in a bistatic scenario is smaller than \pi /2. Typically, the resolution of the PSF is defined in the range and cross-range directions [32], [34]. For the systems considered in this work, the Doppler resolution is much smaller than the range resolution, which is shown in the back-projection image of Fig. 2. Therefore, the segmentation into the Doppler resolution \delta _{\mathrm{D}} and the cross-Doppler resolution \delta _{\text{xD}} is preferred here. The cross-Doppler and the range resolution are coupled by the gradient angle \Theta due to the following equation: \begin{equation*} \delta _{\text{xD}} = \frac{\delta _{\mathrm{r}}}{\sin \Theta }. \tag{25} \end{equation*}
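The gradient-based resolution metrics of (18)–(25) can be evaluated numerically for an arbitrary constellation. The sketch below approximates the gradients with central differences; the geometry values are illustrative placeholders, not the exact scene of Fig. 2.

```python
import numpy as np

C0 = 299_792_458.0  # speed of light in vacuum (m/s)

def bistatic_range(p_t, p_tx, p_rx):
    """Bistatic range (18): TX -> target -> RX."""
    return np.linalg.norm(p_t - p_tx) + np.linalg.norm(p_rx - p_t)

def doppler(p_t, p_tx, v_tx, lam0):
    """Doppler frequency (19) seen from the moving TX."""
    n = (p_t - p_tx) / np.linalg.norm(p_t - p_tx)
    return np.dot(v_tx, n) / lam0

def grad2d(f, p, h=1e-6):
    """Central-difference approximation of the 2D gradient, cf. (20)-(21)."""
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2)
        e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2.0 * h)
    return g

# Illustrative constellation (assumed values, not those of Fig. 2)
p_tx = np.array([0.0, 0.0])
p_rx = np.array([2.0, 0.0])
v_tx = np.array([0.5, 0.0])
lam0, B = 3.94e-3, 1.5e9
p_t = np.array([1.0, 3.0])

g_r = grad2d(lambda p: bistatic_range(p, p_tx, p_rx), p_t)
g_d = grad2d(lambda p: doppler(p, p_tx, v_tx, lam0), p_t)

# Range resolution along the range gradient; the delay resolution 1/B is
# expressed in metres via c0 before dividing by |grad R_b|
delta_r = 0.89 * (C0 / B) / np.linalg.norm(g_r)

# Angle between the gradients and the resulting cross-Doppler resolution (25)
cos_theta = np.dot(g_r, g_d) / (np.linalg.norm(g_r) * np.linalg.norm(g_d))
theta = np.arccos(cos_theta)
delta_xd = delta_r / np.sin(theta)
```

For this symmetric target position, \vec{\nabla } R_{\mathrm{b}} is the sum of the two unit vectors from TX and RX to the target, and the non-orthogonality of the gradients (\Theta \ne \pi /2) inflates \delta _{\text{xD}} relative to \delta _{\mathrm{r}}, as stated in (25).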
When the target approaches the synthetic aperture, the change in aspect angle between the TX antenna and the target along the SAR trajectory increases. This situation is shown in Fig. 2 for target 3, where \Psi _{\mathrm{r}} indicates the range of aspect angles along the SAR trajectory. This perspective variation contributes to the range resolution of the system and must be considered in the near-field region. In [28], this contribution to the cross-Doppler resolution is given by \begin{equation*} \delta _{\mathrm{cD,nf}} = {0.89}\frac{7\lambda _{0}}{\Psi _{\mathrm{r}}^{2}}. \tag{26} \end{equation*}
To evaluate this near-field influence, a simulation with the bistatic SAR constellation from Fig. 2 and varying target positions was conducted. Two position sweeps of the target were simulated: one along the x axis and the other along the y axis. Fig. 3 shows that for targets in close proximity to the trajectory, (26) determines the cross-Doppler resolution, whereas for increasing distances, the gradient-based formula (25) is dominant. Furthermore, the bistatic cross-Doppler resolution is worse than the monostatic one for the bistatic SAR constellations to which we are restricted by the required LoS conditions, as these have a comparatively large bistatic angle. However, incorporating the bistatic measurements is still beneficial due to the changed RCS characteristics observed in multistatic measurements.

Figure 3. Comparison of the cross-Doppler resolution between the analytical formulas (25) and (26) and the numerical results from a back-projection evaluation for point targets swept along the x (a) and y (b) axes, referring to the scene of Fig. 2. The monostatic cross-Doppler resolution derived from the back-projection images is also plotted.

B. Impact of Self-Localization Errors

To assess the self-localization accuracy that the radar-based positioning algorithm must provide for BiSAR processing, the influence of erroneous trajectory estimates on the imaging performance needs to be investigated. For automotive SAR scenarios, the effects of constant position and velocity estimation errors on the SAR imaging performance are discussed in [19], [35]. While moderate positional offsets have a negligible influence on the processing results, errors in the estimated velocity lead to a displacement of the estimated target position in the direction of the Doppler gradient \vec{n}_{\vec{\nabla }f_{\mathrm{D}}} and a slight target power loss due to defocusing. Both works assume far-field conditions, which are not present in the scenarios discussed in this paper. Hence, the influence of velocity estimation errors on the imaging quality of near-field SAR depends heavily on the constellation and cannot easily be captured by a closed-form analytical solution. To quantify the effect of erroneous velocities for scenarios within the scope of this work, the constellation given in Fig. 2 with a target at position 1 and at position 5 is simulated for a range of velocity errors. These errors are divided into a component along the trajectory \varepsilon _{v,\text{traj}} and a component perpendicular to the linear trajectory \varepsilon _{v,\text{xtraj}}. Normalizing to the nominal velocity magnitude |\vec{v} | yields \delta _{v,\text{traj}} = \varepsilon _{v,\text{traj}}/|\vec{v} | and \delta _{v,\text{xtraj}} = \varepsilon _{v,\text{xtraj}}/|\vec{v} |, providing better comparability.

The coherent integration loss, defined as the ratio of the actual target's peak power to the peak power of a perfectly focused target, is used as a metric to assess SAR image quality. The result of this simulation is shown in Fig. 4. It indicates that a velocity error component along the trajectory causes significantly stronger defocusing than an error perpendicular to the trajectory direction. Additionally, the degree of defocusing strongly depends on the target's position. These findings provide an indication of the required precision of the proposed particle filter's tracking results in the measurement evaluation section. While a larger error perpendicular to the trajectory is permissible, the integration loss due to a velocity error in the trajectory direction grows significantly faster, especially for target 5 with its larger bistatic angle.
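This loss metric can be illustrated with a short numerical sketch. The geometry below is a simplified monostatic stand-in with illustrative parameters (carrier, aperture duration, target position, and the 1 % velocity error are assumptions, not the paper's bistatic simulation setup); it reproduces only the qualitative behavior shown in Fig. 4.

```python
import numpy as np

# Rough numerical sketch of the coherent integration loss caused by a wrong
# velocity estimate. Simplified monostatic geometry, illustrative parameters.
c0 = 3e8
f0 = 77e9
lam = c0 / f0
t = np.linspace(0.0, 0.5, 1001)       # slow time over the aperture in s
v_true = 0.5                          # true platform speed in m/s (along x)
target = np.array([1.0, 3.0])

def focused_peak(v_assumed):
    """Coherent sum of the residual two-way phase after focusing with an
    assumed velocity; equals the pulse count for a perfect estimate."""
    pos_true = np.stack([v_true * t, np.zeros_like(t)], axis=1)
    pos_hyp = np.stack([v_assumed * t, np.zeros_like(t)], axis=1)
    r_true = np.linalg.norm(target - pos_true, axis=1)
    r_hyp = np.linalg.norm(target - pos_hyp, axis=1)
    return np.abs(np.sum(np.exp(1j * 4 * np.pi / lam * (r_true - r_hyp))))

loss_dB = 20 * np.log10(focused_peak(1.01 * v_true) / focused_peak(v_true))
print(f"integration loss for a 1 % velocity error: {loss_dB:.1f} dB")
```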

Figure 4. - 
            Influence of the velocity estimation error on the scenario shown in Fig. 2. The metric of the coherent integration loss is calculated for targets 1 and 5.

C. Bistatic RCS

To complete the brief theoretical discussion on near-field bistatic SAR, emphasis will be placed on the differences in scattering characteristics of complex targets in monostatic and bistatic SAR measurements, which are described by their RCS values.

The bistatic RCS is a measure for the energy scattered by an illuminated target into the direction of a receiver that is spatially separated from the transmitter's position and can be calculated by \begin{equation*} \sigma = \lim _{r\to \infty } 4\pi r^{2} \frac{\left|E_{\mathrm{s}}\right|^{2}}{\left|E_{\mathrm{i}}\right|^{2}}, \tag{27} \end{equation*}

where r is the distance between the observer and the object, and |E_{\mathrm{i}}| and |E_{\mathrm{s}}| are the incident and the scattered electric field, respectively [36]. The bistatic RCS depends on the aspect angle under which the target is illuminated by the TX antenna, as well as on the bistatic angle between TX, target, and RX. Willis distinguishes between a pseudo-monostatic RCS region, a bistatic RCS region, and a forward-scatter RCS region [37]. The first region is characterized by small bistatic angles, to which the monostatic-bistatic equivalence theorem (MBET) can be applied, as given in [36]. The MBET implies that the bistatic RCS can be derived from the monostatic RCS of an object, where the aspect angle is defined by the bistatic bisector. Comparisons of measurements with MBET results in [38] show that the MBET predicts the bistatic RCS well for objects with simple geometry and dominant specular reflections up to a bistatic angle of 30°, but the applicable range decreases to 10° for complex objects with equal specular and non-specular reflection parts. Therefore, this region is not suited to describe the scenarios presented in this work. Where the MBET approach fails, the bistatic RCS region begins, and for bistatic angles approaching 180°, forward scattering becomes dominant. For those two regions, no analytical description of the bistatic RCS is available. Alternatively, a ray-tracing simulation framework based on the shooting and bouncing ray (SBR) technique, together with a physical optics (PO) approach and the physical theory of diffraction (PTD), can be utilized to determine the bistatic RCS of complex objects [39]. In this work, the asymptotic solver of CST Studio Suite 2024 was used to simulate the reflection characteristics of a bicycle illuminated by a field source under an aspect angle of 45° with respect to the bike's roll axis.
The scattered rays were observed for different RX directions on a semicircle around the target object. The simulated results are shown in Fig. 5. In this scenario, the bicycle frame contributes little to the monostatic reflection but contributes strongly in the bistatic region, which can be utilized in bistatic SAR imaging.
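The geometric quantities underlying this discussion, the bistatic angle at the target and the bistatic bisector used by the MBET as the equivalent monostatic aspect, can be computed directly from the node and target positions. A minimal sketch with illustrative coordinates:

```python
import numpy as np

# Sketch: bistatic angle at the target and the bistatic bisector (the
# equivalent monostatic aspect direction of the MBET). Coordinates are
# illustrative assumptions.
def bistatic_angle(tx, rx, target):
    """Angle at the target between the TX and RX lines of sight (degrees)."""
    a, b = tx - target, rx - target
    cosb = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cosb, -1.0, 1.0)))

def bisector(tx, rx, target):
    """Unit vector along the bistatic bisector (MBET aspect direction)."""
    a = (tx - target) / np.linalg.norm(tx - target)
    b = (rx - target) / np.linalg.norm(rx - target)
    return (a + b) / np.linalg.norm(a + b)

tx, rx, tgt = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([2.0, 3.0])
beta = bistatic_angle(tx, rx, tgt)
print(f"bistatic angle: {beta:.1f} deg")  # well above the 30° MBET limit
```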

Figure 5. - 
            Bistatic RCS simulation of a bicycle model using the CST Studio Suite 2024 ray-tracing tool. The target was excited from an azimuth angle of 45°, and the corresponding bistatic RCS was observed over a range from −90° to 90° (b). The observed rays for the monostatic case (45°) (a) and for a bistatic observation angle of −45° (c) are shown as examples.

In addition to this simulation, several other publications support the information gain of bistatic radar imaging compared to only monostatic observation. In [7], it is shown that geometries with weak monostatic scattering behavior often yield large bistatic returns. A practical use case is given in [40], where improvements in the detection of drones with multistatic radar networks are investigated.

SECTION IV.

Particle Filter-Based Localization

An important step for SAR processing is to gain precise estimates of the radars' positions over the whole SAR trajectory. The localization of static constellations using single FMCW chirp-sequence frames was exhaustively discussed in our previous work [27]. Now, the aim is to derive a PF-based algorithm capable of continuously estimating the radars' pose and velocity over multiple chirp-sequence frames in a non-static network with a relative precision that enables SAR processing over long apertures. We start with the theory of the PF algorithm, followed by the derivation of the state model and the measurement functions of the bistatic radar system. This section is concluded by a verification of the proposed PF algorithm using radar measurements in conjunction with an optical reference system supplying ground truth positions.

A. Theory

The basic equations behind the PF originate in the Bayes filter algorithm. This algorithm generally involves two quantities: the observations \vec{z}_{D_{i}}, containing the measurements along a sub-aperture D_{i}, and the system's current state \vec{x}_{D_{i}}, which in this case consists of the relative pose of the moving radar at the start of the sub-trajectory D_{i} and its vectorial velocity. Both quantities are assumed to be time-dependent random variables, each described by a stochastic process. The sub-aperture D_{i} indicates that for each filter iteration, N_{\text{sub}} chirp measurements recorded during the movement are processed coherently instead of using each radar measurement on its own, as was done in our previous work [29].

The conditional probability density function (PDF) of the state vector, in [41] referred to as belief \begin{equation*} \text{bel}\left(\vec{x}_{D_{i}}\right) = p\left(\vec{x}_{D_{i}}\vert \vec{z}_{D_{0}:D_{i}}\right), \tag{28} \end{equation*}

represents our knowledge of the current state given all previous measurements. The system's dynamics are described by another PDF p(\vec{x}_{D_{i}}\vert \vec{x}_{D_{i-1}}), which is the probability of the current state \vec{x}_{D_{i}} given the last state \vec{x}_{D_{i-1}}. Using the Chapman-Kolmogorov equation and assuming the stochastic process that underlies the state vector is Markovian, a prediction of the state evolution can be made [42]: \begin{equation*} \overline{\text{bel}}\left(\vec{x}_{D_{i}}\right) = \int p\left(\vec{x}_{D_{i}}\vert \vec{x}_{D_{i-1}}\right) \text{bel}\left(\vec{x}_{D_{i-1}}\right) \mathrm{d}\vec{x}_{D_{i-1}}. \tag{29} \end{equation*}
Bayes' rule now comes into play: we cannot directly infer the current belief \text{bel}(\vec{x}_{D_{i}}) from the measurement vector \vec{z}_{D_{i}}, but we know the probability of the received measurement under the assumption of the current state, p(\vec{z}_{D_{i}}\vert \vec{x}_{D_{i}}), which is the measurement model. Utilizing Bayes' rule leads to the desired \mathit {belief} \begin{equation*} \text{bel}\left(\vec{x}_{D_{i}}\right) = \eta \cdot p\left(\vec{z}_{D_{i}}\vert \vec{x}_{D_{i}}\right) \cdot \overline{\text{bel}}\left(\vec{x}_{D_{i}}\right), \tag{30} \end{equation*}
where \eta is a normalization constant for \text{bel}(\vec{x}_{D_{i}}) to meet the requirements of a PDF but does not influence its shape [41].

This basic algorithm is fundamental to a variety of stochastic filters that differ mainly in their way of representing the densities \overline{\text{bel}}(\vec{x}_{D_{i}}) and \text{bel}(\vec{x}_{D_{i}}). Probably the most prominent implementation of the Bayes filter is the Kalman filter and its derivatives, which assume Gaussian PDFs. Hence, only two parameters are required to represent the density functions, which makes them very efficient. If the models for the state dynamics or the observations differ too strongly from a Gaussian representation, alternatives to the Kalman filter are required. Especially for the use case of phase-based localization, the measurement functions tend to become multimodal due to the 2\pi ambiguity of phase measurements [31]. Here, the PF has proven to deal very well with multimodal PDFs, as it represents the belief by a set of weighted particles \begin{equation*} \mathcal {X}_{D_{i}} = \left\lbrace \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}, w_{D_{i}}^{n_{\mathrm{P}}}\right\rbrace _{n_{\mathrm{P}}=1}^{N_{\mathrm{P}}}, \tag{31} \end{equation*}

where N_{\mathrm{P}} is the number of particles used and w_{D_{i}}^{n_{\mathrm{P}}} is the weight of the n_{\mathrm{P}}-th particle \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}. Those weights can generally be calculated by \begin{equation*} w_{D_{i}}^{n_{\mathrm{P}}} \propto w_{D_{i-1}}^{n_{\mathrm{P}}}\cdot \frac{p\left(\vec{z}_{D_{i}}\vert \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right)p\left(\vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\vert \vec{x}_{D_{i-1}}^{\;n_{\mathrm{P}}}\right)}{q\left(\vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\vert \vec{x}_{D_{i-1}}^{\;n_{\mathrm{P}}};\vec{z_{D_{i}}}\right)}, \tag{32} \end{equation*}
where q(\cdot) is the importance density from which the state samples \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}} are drawn [43]. One possible choice for the importance density is the prior PDF p(\vec{x}_{D_{i}}\vert \vec{x}_{D_{i-1}}) described by the state transition model. This choice eases the calculation of the current weights to \begin{equation*} w_{D_{i}}^{n_{\mathrm{P}}} \propto w_{D_{i-1}}^{n_{\mathrm{P}}}\cdot p\left(\vec{z}_{D_{i}}\vert \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right). \tag{33} \end{equation*}
This bootstrap PF algorithm iteratively predicts a new set of particles by applying the system's dynamic model to the previous particle set and incorporates the measurements by calculating new weights for each particle. The belief at the i-th iteration is then approximated by \begin{equation*} \text{bel}(\vec{x}_{D_{i}}) \approx \sum _{n_{\mathrm{P}}=1}^{N_{\mathrm{P}}} w_{D_{i}}^{n_{\mathrm{P}}} \delta \left(\vec{x}_{D_{i}} - \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right), \tag{34} \end{equation*}
where \delta (\cdot) is the Dirac delta measure. An issue that can arise in this basic sequential importance sampling algorithm is the so-called degeneracy problem [43]. This describes the state in which only a few particles carry significant weight, while the majority have weights near zero. A good indicator for degeneracy detection is therefore the variance of the particle weights: when the variance is large, the particle set has degenerated. A metric related to the weight variance is the effective number of particles N_{\mathrm{P,eff},D_{i}}. In [44], an approximation of this effective particle count is given by: \begin{equation*} N_{\mathrm{P,eff,}D_{i}} \approx \frac{1}{\sum _{n_{\mathrm{P}}=1}^{N_{\mathrm{P}}}\left(w_{D_{i}}^{n_{\mathrm{P}}}\right)^{2}}. \tag{35} \end{equation*}
Thus, a way to circumvent the degeneration problem is to introduce a resampling step when N_{\mathrm{P,eff}} falls below a given threshold N_{\text{res}}.
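The resulting bootstrap PF loop of (29)-(35) can be summarized in a short sketch. This is a generic implementation with placeholder models and simple multinomial resampling, not the paper's filter:

```python
import numpy as np

# Generic bootstrap PF sketch following (29)-(35): predict with the state
# model, reweight with the measurement likelihood (33), and resample when the
# effective particle count (35) drops below a threshold.
rng = np.random.default_rng(0)

def n_eff(w):
    """Effective number of particles, cf. (35)."""
    return 1.0 / np.sum(w ** 2)

def pf_step(particles, w, z, predict, likelihood, n_res):
    particles = predict(particles)       # sample from p(x_i | x_{i-1})
    w = w * likelihood(z, particles)     # bootstrap weight update, cf. (33)
    w = w / np.sum(w)
    if n_eff(w) < n_res:                 # degeneracy detected: resample
        idx = rng.choice(len(w), size=len(w), p=w)
        particles, w = particles[idx], np.full(len(w), 1.0 / len(w))
    estimate = np.average(particles, axis=0, weights=w)
    return particles, w, estimate
```

With a static state model and a Gaussian likelihood, a single `pf_step` call already concentrates the estimate near the measurement.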

B. Implementation

First, the state vector is defined as \begin{equation*} \vec{x}_{D_{i}} = \begin{pmatrix}x_{\mathrm{R},D_{i}} & y_{\mathrm{R},D_{i}} & \dot{x}_{\mathrm{R},D_{i}} & \dot{y}_{\mathrm{R},D_{i}} & \gamma _{D_{i}} \end{pmatrix}^{\mathrm{T}}. \tag{36} \end{equation*}

The position, velocity, and tilt angle quantities were introduced in Section II and are illustrated in Fig. 6. At the initialization stage, little information about the initial state is available. A first estimate can be obtained by processing the first chirp sequence with the localization algorithm introduced in [27]. Hence, the initial particle set can be drawn from a uniform distribution centered around this estimate. Every weight of the initial particle set is assigned the value \frac{1}{N_{\mathrm{P}}}.

Figure 6. - 
            Bistatic radar constellation with two-channel SIMO radars, where three sub-apertures along the radar trajectory are sketched. In this example, four measurements per sub-aperture are taken and are processed during one filter iteration. The different LoS path distances are drawn.

To predict the particles of the i-th sub-aperture, a model with constant velocity and a constant tilt angle is applied. Therefore, a particle from the (i-1)-th iteration is transferred to the current iteration by applying the linear equation \begin{equation*} \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}} = \underbrace{\begin{pmatrix}1 & 0 & \Delta T & 0 & 0 \\ 0 & 1 & 0 & \Delta T & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}}_{\mathbf {F}\left[\Delta T\right]} \vec{x}_{D_{i-1}}^{\;n_{\mathrm{P}}} + \vec{w}_{D_{i}}, \tag{37} \end{equation*}

where \Delta T is the time difference between successive iterations and \vec{w}_{D_{i}}\sim {\mathcal {N}}(0,\mathbf {P_{D_{i}}}) is additive white Gaussian noise with zero mean and covariance matrix \mathbf {P_{D_{i}}}. In [45], the state variances are weighted by the quotient of the particle count N_{\mathrm{P}} and the effective number of particles N_{\mathrm{P,eff,}D_{i}} to prevent particle degeneration. This weighting is adopted here, and the covariance matrix is calculated as \begin{align*} &\mathbf {P_{\mathrm{D_{i}}}} = \vec{\alpha }_{\mathrm{P}}\cdot \frac{N_{\mathrm{P}}}{N_{\mathrm{P,eff,}D_{i}}} \\ & \cdot \text{diag}\begin{pmatrix}\sigma _{x_{\mathrm{R}},D_{i}}^{2} & \sigma _{y_{\mathrm{R}},D_{i}}^{2} & \sigma _{\dot{x}_{\mathrm{R}},D_{i}}^{2} & \sigma _{\dot{y}_{\mathrm{R}},D_{i}}^{2} & \sigma _{\gamma,D_{i}}^{2} \end{pmatrix}, \tag{38} \end{align*}
where \sigma _{\cdot,D_{i}}^{2} is the variance of the particles in every state dimension before the prediction step of the sub-trajectory D_{i} and \vec{\alpha }_{P} is a filter parameter that determines the particle spread of every state entry after each iteration. After the prediction, the current measurements are processed to calculate the particle weights. From the signal model of Section II, it is known that N_{\mathrm{F}} frames and N_{\text{ch}} chirps per frame are recorded along the SAR trajectory, yielding N_{\mathrm{F}}\cdot N_{\text{ch}} measurements in total. This trajectory is divided into sub-apertures, each consisting of N_{\text{sub}} chirp measurements. It follows that the total trajectory is divided into N_{\text{it}} = \lfloor \frac{N_{\mathrm{F}}\cdot N_{\text{ch}}}{N_{\text{sub}}} \rfloor sub-trajectories. Per measurement and radar unit, N_{\text{RX}} beat frequencies and phases of the direct path are received, which can be expressed as functions of the LoS path length according to (14), (15), (16), and (17). To eliminate the synchronization error terms, the beat frequencies and beat phases of the n_{\text{RX}}-th channel of both radars are added, which yields the round-trip frequency \begin{equation*} \tilde{f}_{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{\text{RT}} = -\mu \left(\tau _{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{2\to 1} + \tau _{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{1\to 2}\right) \tag{39} \end{equation*}
and the round-trip phase \begin{equation*} \tilde{\varphi }_{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{\text{RT}} = -2\pi f_{0} \left(\tau _{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{2\to 1} + \tau _{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{1\to 2}\right), \tag{40} \end{equation*}
where \tau _{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{i\to j} is the ToF of the direct path corresponding to the path length d_{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{i\to j}, which is sketched in Fig. 6. A tilde \tilde{\cdot } indicates values that are measured or derived from measurements. Additionally, the absolute phases of each RX channel per radar are processed, as they contain information about the angle-of-arrival (AoA) of the LoS signal, which is otherwise lost by the summation that leads to the round-trip phases.
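The cancellation of the synchronization error in (39) and (40) can be illustrated numerically. The chirp parameters below are assumptions, and the clock error is reduced to a pure time offset that enters the two one-way ToFs with opposite signs:

```python
# Numerical sketch of the offset cancellation in (39): summing the two
# one-way beat frequencies removes the unknown clock offset, so the
# round-trip distance can be recovered from the chirp slope mu alone.
c0 = 299792458.0
mu = 4e9 / 100e-6            # chirp slope B / T_c (assumed values)
d_los = 5.0                  # true LoS distance in m
eps_t = 3e-9                 # unknown clock offset between the two nodes
tau = d_los / c0

f_21 = -mu * (tau + eps_t)   # beat frequency of the 2 -> 1 LoS signal
f_12 = -mu * (tau - eps_t)   # beat frequency of the 1 -> 2 LoS signal
f_rt = f_21 + f_12           # (39): the offset terms cancel
d_rt = -c0 * f_rt / mu       # round-trip distance, cf. the relation in (41)
print(d_rt / 2)              # recovers the 5.0 m LoS distance
```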

The round-trip distances can be calculated from the measured frequencies using (39) by \begin{equation*} \tilde{d}_{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{\text{RT}} = -\mathrm{c}_{0}\frac{\tilde{f}_{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{\text{RT}}}{\mu }. \tag{41} \end{equation*}

Alternatively, the round-trip distances can be derived as functions of the current state via the geometric relations (1) and (2). Therefore, hypothetical direct-path and round-trip distances for all RX channels n_{\mathrm{A}} and sub-indices n_{\text{sub}} within a sub-trajectory can be calculated for every particle \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}} and correlated with the measured frequencies and phases using the measurement function p(\vec{z}_{D_{i}}\vert \vec{x}_{D_{i}}). This function is divided into individual measurement functions. The first measurement function accounts for the round-trip frequencies as \begin{align*} &p\left(\vec{f}_{D_{i}}^{\;\text{RT}}\vert \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right) = \sum _{n_{\text{sub}}=0}^{N_{\text{sub}}-1} \sum _{n_{\mathrm{A}}=0}^{N_{\text{RX}}-1} \\ &\quad \frac{1}{\sqrt{2\pi \sigma _{d_{\text{rt}}}^{2}}}\exp \left\lbrace -\frac{\left(d_{n_{\mathrm{A}},n_{\text{sub}}}^{\text{RT}}\left(\vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right)-\tilde{d}_{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{\text{RT}}\right)^{2}}{2\sigma _{d_{\text{rt}}}^{2}}\right\rbrace, \tag{42} \end{align*}
where the variance \sigma _{d_{\text{rt}}}^{2} is a filter parameter adjusting the trust in the frequency measurements. The second function includes the round-trip phase measurements, applying holographic processing: \begin{align*} &p\left(\vec{\varphi }_{D_{i}}^{\;\text{RT}}\vert \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right) =\bigg \vert \sum _{n_{\text{sub}}=0}^{N_{\text{sub}}-1} \sum _{n_{\mathrm{A}}=0}^{N_{\text{RX}}-1} \\ &\quad \exp \left\lbrace \mathrm{j}\left[k d_{n_{\mathrm{A}},n_{\text{sub}}}^{\text{RT}}\left(\vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right) + \tilde{\varphi }_{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{\text{RT}}\right]\right\rbrace \bigg \vert, \tag{43} \end{align*}
where k=\frac{2\pi }{\lambda _{0}} is the wave number. The contribution of the absolute phases is evaluated in a third function using a classical beamforming approach [46] for each receiving radar and each subaperture index: \begin{align*} &p\left(\vec{\varphi }_{D_{i}}^{\;\mathrm{2\to 1}}, \vec{\varphi }_{D_{i}}^{\;\mathrm{1\to 2}}\vert \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right) = \sum _{n_{\text{sub}}=0}^{N_{\text{sub}}-1} \\ &\ \left|\sum _{n_{\mathrm{A}}=0}^{N_{\text{RX}}-1} \exp \left\lbrace \mathrm{j}\left[k d_{n_{\mathrm{A}},n_{\text{sub}}}^{\mathrm{2\to 1}}\left(\vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right) + \tilde{\varphi }_{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{\mathrm{2\to 1}}\right]\right\rbrace \right| \\ &\ \cdot \left|\sum _{n_{\mathrm{A}}=0}^{N_{\text{RX}}-1} \exp \left\lbrace \mathrm{j}\left[k d_{n_{\mathrm{A}},n_{\text{sub}}}^{\mathrm{1\to 2}}\left(\vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right) + \tilde{\varphi }_{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{\mathrm{1\to 2}}\right]\right\rbrace \right|. \tag{44} \end{align*}
The magnitudes of both beamformers are multiplied and incoherently summed over the sub-trajectory. Coherent processing in this step is not possible due to unknown phase offsets in both radars caused by a lack of phase synchronization.

The combined measurement function is given by \begin{align*} p\left(\vec{z}_{D_{i}}\vert \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right) =& \eta \cdot p\left(\vec{f}_{D_{i}}^{\;\text{RT}}\vert \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right) \cdot p\left(\vec{\varphi }_{D_{i}}^{\;\text{RT}}\vert \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right)^{2}\\ & \cdot p\left(\vec{\varphi }_{D_{i}}^{\;\mathrm{2\to 1}}, \vec{\varphi }_{D_{i}}^{\;\mathrm{1\to 2}}\vert \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right)^{2}, \tag{45} \end{align*}

where \eta is the normalization factor required for a true PDF. The exponent of the phase contributions serves to weight the phase measurements, increasing their influence on the combined measurement function. Hence, it can be considered another filter parameter, which was experimentally fitted to a value of 2. The frequency measurement function mainly contains information about the unambiguous distance between both radars. The phase evaluation adds information about the orientation of radar 2 and its aspect angle from radar 1. In contrast to [29], in which only one phase measurement over the RX channels is processed per iteration, the evaluation of multiple round-trip phase measurements along one sub-trajectory D_{i} gives precise displacement information and improves the measurement accuracy of the relative velocity's radial component. Two-dimensional slices of the five-dimensional simulated measurement function at its peak for a specific scenario are shown in Fig. 7. The prime superscript indicates that the applied normalization does not sum up to unity, as required for a PDF.
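For one particle, the weighting of (42), (43), and (45) can be sketched as follows. The beamforming term (44) is omitted for brevity, and all "measured" values below are synthetic:

```python
import numpy as np

# Sketch of the particle weighting in (42), (43), and (45) for one particle:
# a Gaussian term on the round-trip distances and a holographic term on the
# round-trip phases. Synthetic, noise-free measurement values.
lam0 = 3e8 / 77e9
k = 2 * np.pi / lam0

def weight(d_hyp, d_meas, phi_rt_meas, sigma_d, phase_exp=2):
    """d_hyp: hypothetical round-trip distances of one particle;
    d_meas, phi_rt_meas: measured round-trip distances and phases."""
    p_f = np.sum(np.exp(-(d_hyp - d_meas) ** 2 / (2 * sigma_d ** 2))
                 / np.sqrt(2 * np.pi * sigma_d ** 2))                # (42)
    p_phi = np.abs(np.sum(np.exp(1j * (k * d_hyp + phi_rt_meas))))   # (43)
    return p_f * p_phi ** phase_exp                                  # cf. (45)

d_true = np.array([10.000, 10.002, 10.004, 10.006])  # round-trip distances
phi_meas = -k * d_true                               # noise-free phases
good = weight(d_true, d_true, phi_meas, sigma_d=0.05)
drifted = d_true + np.array([0.0, 0.0005, 0.0010, 0.0015])
bad = weight(drifted, d_true, phi_meas, sigma_d=0.05)
print(good > bad)  # the matching hypothesis receives the larger weight
```

Sub-millimeter displacement errors barely change the Gaussian distance term but already decohere the holographic phase sum, which is why the phase evaluation provides the precise displacement information.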

Figure 7. - 
            Maximum cuts of the unnormalized five-dimensional measurement function given in equation (45), showing slices of the position xy (a), velocity \dot{x}\dot{y} (b), and tilt angle dimension \gamma (c+d). In the simulated scenario, the actual state was \vec{x}= (-0.68\,\text{m} \quad 4.63\,\text{m} \quad 0.15\,\text{m/s} \quad -0.26\,\text{m/s} \quad 168.1^{\circ })^{\mathrm{T}}, which corresponds to the start position of the scenario in Fig. 2 in the static radar's coordinate system.

The general bootstrap PF algorithm can be implemented with the set of equations derived above. As previously mentioned, the first frame is processed by the localization scheme from [27]. This gives a coarse estimate of the state, from which the initial particle set is drawn. Subsequently, the state model of (37) is applied to all particles to predict the next state. This step is followed by a recalculation of the weights using (33) and (45).

To counteract particle degeneracy, the effective number of particles is compared to a threshold in every filter iteration, and a resampling step is conducted if it falls below this threshold. A systematic resampling approach is used, as it offers high computational efficiency and a low sampling variance [41]. The threshold is set to N_{\mathrm{P,eff},D_{i}} \leq \frac{N_{\mathrm{P}}}{2}.
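A systematic resampler can be sketched in a few lines (a generic implementation, not taken from the paper's code):

```python
import numpy as np

# Systematic resampling sketch: one uniform draw places N evenly spaced
# pointers into the cumulative weight distribution, giving O(N) cost and a
# low sampling variance.
def systematic_resample(particles, weights, rng):
    N = len(weights)
    positions = (rng.random() + np.arange(N)) / N     # one draw, N pointers
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(N, 1.0 / N)

# a fully degenerate set: every survivor is a copy of the dominant particle
res, w = systematic_resample(np.arange(5.0),
                             np.array([0.0, 0.0, 1.0, 0.0, 0.0]),
                             np.random.default_rng(1))
print(res)  # five copies of particle 2.0
```

The degenerate example also illustrates the impoverishment risk of resampling, which the adaptive process noise of (38) counteracts.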

The state estimate at the i-th iteration is obtained by calculating the expected value of the weighted particle set \begin{equation*} \hat{\vec{x}}_{D_{i}} = {\mathbb {E}}\left[\mathcal {X}_{D_{i}}\right] = \frac{\sum _{n_{\mathrm{P}}} \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\cdot w_{D_{i}}^{n_{\mathrm{P}}}}{\sum _{n_{\mathrm{p}}} w_{D_{i}}^{n_{\mathrm{P}}}}. \tag{46} \end{equation*}


C. Localization Results

The described PF algorithm with the filter parameters shown in Table 3 is validated using measurements taken in the laboratory based on the simulation scenario in Fig. 2. The coordinate system used for localization corresponds to the one spanned by radar node 1, as described in the geometry part of Section II. The ground truth information of the filter state is provided by an optical reference system with a 3D root mean square positioning error of 1.0 mm. To this end, each radar node is equipped with optical markers to obtain its position and orientation. Radar 2 is mounted on a linear actuator to generate SAR trajectories with constant velocities. In this setup, five different measurements were taken with a metal rod at the different target positions indicated in Fig. 8. The applied radar parameters are given in Table 4. The filter was evaluated over 16 runs per dataset to show that the PF converges to a global best estimate. The filter estimates of dataset number 1 are plotted in Fig. 9, in which the spread between single filter runs is shown by the 95% confidence interval.

TABLE 3 Filter Parameters
TABLE 4 Radar Parameters
Figure 8. - 
            Indoor scenario for bistatic SAR measurements with PF-based trajectory estimation. The lab is equipped with an optical tracking system to provide ground truth data. A brass rod serves as a target and was deployed at different positions (1-5), which are marked by green dots.

Figure 9. - 
            Evaluation of 16 filter runs. The transparent area around the respective mean values shows the 95% confidence interval. The dashed lines show the ground truth data received by the optical reference system.

To assess the filter's localization performance, the root mean square errors (RMSE) for position, orientation, and relative velocity offset in the trajectory and cross-trajectory directions are calculated over all time samples and data sets. The recorded radar raw data is also processed using the chirp sequence-based approach from our previous work in [27] to demonstrate the achieved improvement. Additionally, the particle filter position estimates over time are fitted to a linear trajectory model, which further enhances self-localization given a linear movement with constant velocity. The results are shown in Table 5. A significant improvement in positioning accuracy compared to chirp sequence-based processing is achieved by the proposed algorithm. By evaluating the velocity estimates using the results given in Fig. 4 of Section III-B, an acceptable integration loss of less than 3 dB in the SAR images can be expected, especially for the linearly fitted estimates.

TABLE 5 Evaluation of the Self-Localization Results and Comparison to [27]
SECTION V.

SAR Imaging Using Bistatic Self-Localization

After successfully verifying the performance of the self-localization algorithm, the SAR capabilities of the system will be demonstrated. For this purpose, a linearly fitted model based on our particle filter state estimates, denoted as \vec{x}_{\text{lin}}(t), serves as the starting point of the SAR processing. The TX and RX antenna positions of the mobile radar along the synthetic aperture at time t follow directly from the linearly fitted state at this time, according to (1), (2), and (3). This leads to estimates of the mobile radar's antenna positions for every frame n_{\mathrm{F}} and every chirp k, denoted as \hat{\vec{p}}_{\text{TX},2,k,n_{\mathrm{F}}} and \hat{\vec{p}}_{\text{RX},2,k,n_{\mathrm{A}},n_{\mathrm{F}}}.

A. Bistatic Synchronization

As there are still synchronization errors in the bistatic radar data, further processing steps must be carried out frame-wise before creating the bistatic images. To eliminate the time drift and offset, as well as the phase errors described in the signal model in Section II, the steps from [27] are conducted for every chirp sequence, using the localization results gained by the PF processing.

The relative frequency offset \delta _{t,n_{\mathrm{F}}}, also referred to as time drift, can be estimated from the linear change of the LoS beat frequency peak along the slow-time dimension. With the knowledge of the relative radar positions from the PF processing, the frame-wise initial time offset \Delta \tau _{0,n_{\mathrm{F}}}, which can be considered a constant distance offset in the range dimension, can be calculated by comparing the LoS beat distance with the geometry estimates. To achieve phase coherence between the bistatic nodes, the LO phase offsets between both radars \Delta \Theta _{k,n_{\mathrm{F}}} need to be estimated for every chirp in each frame. This is done by comparing the LoS peak phases of both bistatic signals. Subsequently, a \pi-unwrapping along the slow-time dimension must be executed for the complete bistatic SAR data due to the reduced ambiguity. A detailed description is given in [27], which also provides a method for the compensation of the fast-time phase noise \phi _{\text{pn},k,n_{\mathrm{F}},1/2} that we apply to the bistatic dataset as well.

This yields the synchronized bistatic signals from radar 2 to radar 1 \tilde{s}_{\mathrm{B},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{2\to 1}(t) and from radar 1 to radar 2 \tilde{s}_{\mathrm{B},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{1\to 2}(t). Additionally, the monostatic signal of the mobile radar s_{\mathrm{B},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{2\to 2}(t) can be used to create a monostatic SAR image.

B. Back-Projection

The SAR images are then calculated using the back-projection algorithm. To this end, the fast-time Fourier transform of the beat signals is performed, giving \begin{align*} S_{\mathrm{B},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{1\to 2/2\to 1}(f) &= \mathcal {F}\left\lbrace \tilde{s}_{\mathrm{B},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{1\to 2/2\to 1}(t)\right\rbrace \tag{47}\\ S_{\mathrm{B},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{2\to 2}(f) &= \mathcal {F}\left\lbrace s_{\mathrm{B},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{2\to 2}(t)\right\rbrace. \tag{48} \end{align*}

Using the antenna position estimates, the hypothetical bistatic and monostatic distances along the SAR trajectory can be calculated for every pixel \vec{p}(x,y) in the image scene: \begin{align*} d_{\text{hyp},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{i\to j}(x,y) =& \left\Vert \vec{p}(x,y) - \vec{p}_{\text{TX},i,k,n_{\mathrm{F}}}\right\Vert \\ &+ \left\Vert \vec{p}_{\text{RX},j,k,n_{\mathrm{A}},n_{\mathrm{F}}} - \vec{p}(x,y)\right\Vert, \tag{49} \end{align*}
with \lbrace i,j\rbrace \in [\lbrace 1,2\rbrace,\lbrace 2,1\rbrace,\lbrace 2,2\rbrace ]. The fast-time spectrum is then interpolated linearly in phase and amplitude at the pixels' hypothetical beat frequency \begin{equation*} f_{\mathrm{B},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{i\to j}(x,y) = -\mu \frac{d_{\text{hyp},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{i\to j}(x,y)}{\mathrm{c}_{0}}. \tag{50} \end{equation*}
Finally, the SISO SAR image of the n_{\mathrm{A}}-th RX channel of the receiving radar node j is then calculated by \begin{align*} I_{\text{SAR},n_{\mathrm{A}}}^{i\to j} (x,y) =& \sum _{n_{\mathrm{F}}=0}^{N_{\mathrm{F}}-1}\sum _{k=0}^{N_{\text{ch}}-1} S_{\mathrm{B},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{i\to j}(f_{\mathrm{B},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{i\to j}(x,y))\\ & \cdot \mathrm{e}^{2\pi \mathrm{j}f_{\mathrm{c}}\frac{d_{\text{hyp},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{i\to j}(x,y)}{\mathrm{c}_{0}}}, \tag{51} \end{align*}
where f_{\mathrm{c}} = f_{0} + B/2 is the center frequency of the FMCW chirp.
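A compact sketch of this back-projection chain, collapsing the chirp, channel, and frame indices k, n_{\mathrm{A}}, n_{\mathrm{F}} into a single pulse axis, is given below. All argument names are illustrative, and interpolating the real and imaginary parts of the spectrum separately is a simplification of the linear phase/amplitude interpolation described above:

```python
import numpy as np

C0 = 299_792_458.0  # speed of light in m/s

def backproject_siso(s_beat, p_tx, p_rx, pixels, mu, fs, fc):
    """Back-projection of one SISO channel following (47)-(51).
    s_beat:      (N_pulses, N_samples) synchronized beat signals
    p_tx / p_rx: (N_pulses, 2) TX/RX antenna positions per pulse
    pixels:      (N_pix, 2) image pixel coordinates p(x, y)
    mu: chirp rate, fs: sampling rate, fc: chirp center frequency.
    Names are illustrative; the paper's indices k, n_A, n_F are
    collapsed into one pulse axis here."""
    n_pulses, n_samp = s_beat.shape
    spec = np.fft.fft(s_beat, axis=1)            # fast-time spectra, (47)/(48)
    freqs = np.fft.fftfreq(n_samp, d=1.0 / fs)   # beat-frequency axis
    order = np.argsort(freqs)                    # np.interp needs ascending xp
    f_sorted = freqs[order]
    image = np.zeros(len(pixels), dtype=complex)
    for n in range(n_pulses):
        # hypothetical distance TX -> pixel -> RX, eq. (49)
        d_hyp = (np.linalg.norm(pixels - p_tx[n], axis=1)
                 + np.linalg.norm(pixels - p_rx[n], axis=1))
        f_b = -mu * d_hyp / C0                   # hypothetical beat freq., (50)
        sp = spec[n][order]
        # linear interpolation of the complex spectrum at f_b
        val = (np.interp(f_b, f_sorted, sp.real)
               + 1j * np.interp(f_b, f_sorted, sp.imag))
        # carrier-phase compensation and coherent summation, eq. (51)
        image += val * np.exp(2j * np.pi * fc * d_hyp / C0)
    return image
```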

C. Bistatic PSF

First, an investigation of the bistatic near-field PSFs is conducted to validate the performance of the trajectory estimation and synchronization by comparing the images to the ideal simulations. Thus, the scenario shown in Fig. 2 is adopted for this measurement. Fig. 8 shows a picture of the laboratory setup. To obtain a point-like target, a small metal rod with a diameter of 2.0 cm is used. The results of these measurements are illustrated in Fig. 10. The extent of the PSFs is comparable to the ideal simulation results. The peak positions are slightly shifted from the actual target positions by a few centimeters due to residual localization and synchronization errors, which is acceptable for most applications. A significant difference between the simulation and the measurement can be seen in the image of the bottom right target (number 5), where considerable signal energy is spread over the bottom right corner. This effect is caused by direct path interference (DPI), but it can be mitigated by advanced DPI suppression techniques [47].

Figure 10. Bistatic SISO SAR images I_{\text{SAR},0}^{2\to 1} (x,y) of a metal rod target at five different positions. The triangles indicate the ground truth positions of the targets, measured by the optical reference system. The dashed cyan line shows the TX antenna positions and the magenta circle the stationary RX antenna position.

D. Complex Target Scene

After validating that the direct path processing provides sufficient coherence in time and space, the differences in imaging performance for complex targets between monostatic and bistatic SAR recordings are investigated. Based on the results of Section IV, constellations where radar 2 moves towards radar 1 were chosen, as this improves the localization performance. To increase the signal-to-noise ratio (SNR), all RX channels of the receiving radar are summed coherently. The resulting SIMO SAR images are calculated as \begin{equation*} I_{\text{SAR}}^{i\to j} (x,y) = \sum _{n_{\mathrm{A}}=0}^{N_{\text{RX}}-1} I_{\text{SAR},n_{\mathrm{A}}}^{i\to j} (x,y). \tag{52} \end{equation*}

A bicycle and two scooters were placed in the imaging scene (see Fig. 11(a)). Both can be considered relevant targets for automotive radar applications. A comparison of the monostatic SAR image in Fig. 11(c) and the bistatic image in Fig. 11(b) confirms the results of the bistatic RCS simulations in Section III. The frame of the bicycle contributes significantly more to the radar reflections in the bistatic case than in the monostatic case, where the handlebar and the seat post are essentially the main reflection sources. To obtain a quantitative measure of the monostatic and bistatic reflections, the unnormalized signal energy of the part of the image surrounded by the white rectangle in Fig. 11(b) and (c) is calculated. The energy in the bistatic image is only about 1 dB less than in the monostatic image, underlining the benefit of multiperspective SAR imaging. Additionally, a multiperspective SAR image of a person is recorded (see Fig. 12(a)). While most of the reflected energy originates from the body, the arms are more emphasized in the bistatic images (Fig. 12(b)) than in the monostatic ones (Fig. 12(c)). With a person as radar target, the monostatic signal energy in the relevant area is 3 dB higher than in the bistatic SAR image. Both experiments demonstrate the additional information that is gathered from the bistatic path.
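The quantitative energy comparison above can be reproduced with a minimal sketch, assuming two co-registered complex SAR images and a rectangular region of interest; the function and argument names are illustrative:

```python
import numpy as np

def energy_ratio_db(img_a, img_b, roi):
    """Unnormalized signal energy of two complex SAR images within the
    same rectangular region of interest, compared in dB. `roi` is a
    (row_slice, col_slice) pair; all names are illustrative."""
    e_a = np.sum(np.abs(img_a[roi]) ** 2)  # energy inside the rectangle
    e_b = np.sum(np.abs(img_b[roi]) ** 2)
    return 10.0 * np.log10(e_a / e_b)
```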

Figure 11. Bistatic outdoor SAR imaging scene with a bicycle and two scooters (a). The resulting bistatic (b) and monostatic (c) images are plotted together with the radar positions.

Figure 12. SAR measurement scene of a person in front of the bistatic constellation (a) with the bistatic (b) and monostatic (c) images.

SECTION VI. Conclusion

This paper presents a precise bistatic self-localization scheme that utilizes the direct path signals between the two involved radar nodes, together with the processing chain required to facilitate bistatic SAR imaging with uncoupled radars based solely on self-localization. In our theoretical framework, we note the differences between monostatic and bistatic SAR imaging as well as effects resulting from the near-field character of the investigated setups. The localization algorithm, based on a particle filter, is presented in detail. Its implementation enables the use of absolute phase measurements, as the PF is suited for multimodal measurement functions and is therefore capable of handling phase ambiguities. Measurement evaluations of the localization algorithm demonstrate a total precision of better than 2 cm for the positioning estimates, which is comparable to the EKF approach. In particular, the radial components of position and velocity can be estimated very precisely due to the displacement information within the absolute phases. Therefore, the estimation performance improves significantly for SAR constellations in which the radar nodes move towards each other. Subsequently, based on the linearly extrapolated positioning results after the last filter iteration and with the use of the synchronization scheme introduced in our previous work, monostatic and bistatic SAR imaging over 1.5 m long trajectories was demonstrated. With linear trajectories, the spatial and temporal coherence of the bistatic dataset was sufficient for adequate focusing of the images without applying any autofocusing techniques. The measurement results of this SAR processing further indicate the benefit of multiperspective imaging, as the bistatic images yielded additional information about complex targets in the scene. Based on these first bistatic SAR experiments with non-static uncoupled radar nodes, the proposed processing scheme can be tested with more complex scenarios in future work, e.g. with curved apertures or movement of both nodes. The proposed setup and algorithm operate in a two-dimensional space, which is sufficient for adoption in automotive scenarios. However, transferring the approach to a UAV-based multistatic radar remote sensing system will require extending the hardware and algorithms to support three-dimensional pose estimation, which will also be the subject of future work.
