Introduction
The concept of bistatic synthetic aperture radar (BiSAR) is already prominent in the field of spaceborne SAR, as shown in [1] and [2], as well as in applications in airborne SAR [3], [4]. In addition, combinations of spaceborne and airborne SAR systems have been utilized [5]. BiSAR systems have the potential to increase geometric diversity [6], [7], so the investigated area can be observed from different bistatic angles, increasing the information about the image scene by utilizing the angle-dependent bistatic radar cross-section (RCS) [8]. Recent developments show an increasing interest in the deployment of SAR on drone-based platforms, known as UAV-borne SAR [9], [10], [11], [12], especially for multistatic radar constellations with multiple platforms [10], [13], [14]. Such systems are more flexible for the investigation of a specific scene of interest and less expensive than satellite and airborne-based platforms. Another upcoming development is the use of BiSAR for automotive applications [15]. However, the establishment of UAV-borne or automotive BiSAR raises new questions, which have not yet been fully solved and are subject to ongoing research.
In a bistatic constellation with two independently moving radar nodes, it is important to ensure spatial and temporal coherence between the transmitter (TX) and the receiver (RX) while sampling the image scene at the different positions along the SAR trajectory [16]. Spatial coherence means that the antenna positions must be known precisely with an accuracy of
Additionally, and in contrast to monostatic SAR systems or distributed systems that share a common reference, time and phase coherence between the radar nodes must be established to process the bistatic signals because each radar has its own reference oscillator [20]. Spaceborne and airborne systems use ultra-stable oscillators [21]. These relax the requirements on the synchronization link, allowing a pulsed alternated scheme [22], but they are more expensive than conventional quartz oscillators. They are therefore not a suitable solution for UAV or automotive radars, where the required quantities are much larger and unit price is a limiting factor. Wireless synchronization approaches for automotive radar networks are presented in [23] and [24]. These works target radars mounted on a common platform (i.e., a car) and cannot be applied to non-static scenarios in which the nodes move independently. In [25], a synchronization approach for a phase-modulated continuous wave (PMCW) signal is presented and suggested for use in multistatic SAR applications. A solution that does not require presynchronization is proposed in [26] through the use of a stationary repeater node. Neither approach offers the possibility of radar-based self-localization.
Combined approaches for the synchronization and localization of uncoupled bi-/multistatic radar networks with independently moving nodes are presented in [15] and [28]. The method in [28] utilizes an additional degree of freedom per target in the unsynchronized radar data of a 2D localization scenario to estimate unknown antenna positions and timing offsets. The simulated performance was in the decimeter range for localization and in the range of nanoseconds for time synchronization, which is not sufficient for BiSAR processing. Tagliaferri et al. propose a solution in [15] that combines coarse GNSS estimates with a coregistration of the monostatic images and the processing of common features in all images to obtain precise position estimates and phase synchronization. Hence, coherent processing of the mono- and bistatic images was achieved, but the method was showcased for SAR trajectories limited to 9 cm.
Our previous work in [27] presented a synchronization approach to establish phase coherency in an uncoupled bistatic radar network, using the direct path between the radar pair as a reference link. This enabled coherent bistatic processing of frequency-modulated continuous wave (FMCW) chirp-sequence waveforms. We extend this approach by an integrated self-contained localization and tracking algorithm that uses the same line-of-sight radar signal employed for phase synchronization.
In [29], an extended Kalman filter (EKF) algorithm was introduced to process the line-of-sight (LoS) signals along multiple frames with a localization accuracy of less than 1 cm. This filter was able to process only phase differences, which yield the angle information of the incoming waves at both radar nodes. Processing the absolute phases would instead give precise information about the relative displacement between the antenna positions of successive chirps in fractions of the wavelength [30]. However, such absolute phase measurements are highly ambiguous due to the
The paper is organized as follows: The system architecture and applied system models are introduced in Section II. In Section III, the characteristics of near-field BiSAR systems are investigated. The proposed particle filter (PF) is explained in detail, and its performance is evaluated in Section IV. Finally, a short explanation of the subsequent steps to synchronize the bistatic data is given in Section V, where we demonstrate the performance of the proposed algorithm by applying the self-contained trajectory estimation to indoor and outdoor bistatic SAR measurements with uncoupled non-static radar nodes and generate the multistatic SAR images.
System Model
The concept of integrated self-contained trajectory estimation involves using radar sensors jointly for radar self-localization, synchronization, and multistatic SAR imaging in a non-static, uncoupled bistatic radar network. Fig. 1 shows a block diagram of this bistatic network setup. For the concept to work, a line-of-sight link between the radar pairs is necessary. This requirement restricts possible geometric constellations to those where each radar's field of view includes the respective other radar and the target that should be recorded at all positions along the mobile radar's synthetic aperture. Although this constraint may result in geometries that are not optimal for BiSAR resolution, our approach offers significant benefits for multi-platform multistatic radar systems, such as a reduction of required hardware components and integration effort. The contributions of this work are:
theoretical investigation of near-field BiSAR resolution
assessment of the impact of trajectory estimation velocity errors on BiSAR processing
simulation-based evaluation of the bistatic RCS of a complex target
novel integrated self-contained tracking algorithm for trajectory estimation in an uncoupled radar network
enabling simultaneous monostatic and bistatic SAR processing, referred to as multistatic processing in this paper
experimental demonstration of self-contained radar-based synchronization, trajectory estimation, and multistatic SAR imaging
Sketch of the bistatic SIMO radar setup, where radar node 2 spans a SAR trajectory, indicated by the dashed green line. The second radar's relative 2D position is given by the coordinates
A. Near-Field Bistatic SAR Geometry
Throughout this paper, one radar node is assumed to be static, while the other node is assumed to be moving with a constant velocity
\begin{align*}
\vec{p}_{\text{TX},2} &= \begin{pmatrix}\cos \left(\gamma \right) & -\sin \left(\gamma \right)\\
\sin \left(\gamma \right) & \cos \left(\gamma \right) \end{pmatrix} \vec{p}_{\text{TX},1} + \vec{p}_{\mathrm{R},2} \tag{1}\\
\vec{p}_{\text{RX},n_{\mathrm{A}},2} &= \begin{pmatrix}\cos \left(\gamma \right) & -\sin \left(\gamma \right)\\
\sin \left(\gamma \right) & \cos \left(\gamma \right) \end{pmatrix} \vec{p}_{\text{RX},n_{\mathrm{A}},1} + \vec{p}_{\mathrm{R},2}. \tag{2}
\end{align*}
\begin{align*}
\vec{p}_{\mathrm{R},2}(t) = \begin{pmatrix}x_{\mathrm{R},0}\\
y_{\mathrm{R},0} \end{pmatrix} + \vec{v}_{\mathrm{R}} \cdot t, \tag{3}
\end{align*}
The model parameter descriptions are gathered in Table 1.
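The mapping of (1)-(3), a rotation of the local antenna offsets by the orientation angle followed by a translation to the platform position, can be sketched numerically as follows (function and variable names are illustrative, not from the actual implementation):

```python
import numpy as np

def to_global(p_local, gamma, p_platform):
    """Rotate a local 2D antenna position by the orientation angle gamma
    and translate it by the platform position, cf. (1)-(2).
    Names are illustrative, not from the actual implementation."""
    R = np.array([[np.cos(gamma), -np.sin(gamma)],
                  [np.sin(gamma),  np.cos(gamma)]])
    return R @ p_local + p_platform

# Example: TX antenna 5 cm off-center, platform at (2, 1) m, rotated by 90 deg
p_tx_global = to_global(np.array([0.05, 0.0]), np.pi / 2, np.array([2.0, 1.0]))
```

The same function applies to each RX antenna position of (2); only the local offset changes.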
B. Signal Model
During the SAR integration time, multiple frames are transmitted and received by both radars simultaneously. Every frame consists of a chirp sequence with
\begin{equation*}
t_{i,n_{\mathrm{F}}}(t) = \left(1+\delta _{t,i,n_{\mathrm{F}}}\right)t + \Delta \tau _{0,i,n_{\mathrm{F}}}, \tag{4}
\end{equation*}
In addition to these systematic errors in the bistatic beat signals, each locally generated TX signal suffers from random phase errors. This behaviour can be modelled chirp-wise by a zero-mean time-dependent phase noise part
Applying the systematic and stochastic error sources to an FMCW signal model, as was done comprehensively in [30], yields the following expressions for the bistatic beat signal phases from radar 2 to radar 1
\begin{align*}
\Phi _{\mathrm{B},k,n_{\mathrm{F}}}^{2\to 1}&(t,\tau) \\
=&2\pi \left[-f_{0}\delta _{t,n_{\mathrm{F}}} - \mu \tau + \mu \left(\Delta \tau _{0,n_{\mathrm{F}}} + kT_{\text{rep}}\delta _{t,n_{\mathrm{F}}}\right) \right] t \\
&-2\pi \mu \delta _{t,n_{\mathrm{F}}}t^{2} - 2\pi f_{0}\tau + \phi _{\text{pn},k,n_{\mathrm{F}},2}(t-\tau) \\
\ &- \phi _{\text{pn},k,n_{\mathrm{F}},1}(t) + \underbrace{\Theta _{k,n_{\mathrm{F}},2} - \Theta _{k,n_{\mathrm{F}},1}}_{=\Delta \Theta _{k,n_{\mathrm{F}}}} \tag{5}
\end{align*}
\begin{align*}
\Phi _{\mathrm{B},k,n_{\mathrm{F}}}^{1\to 2}&(t,\tau) \\
=&2\pi \left[f_{0}\delta _{t,n_{\mathrm{F}}} - \mu \tau - \mu \left(\Delta \tau _{0,n_{\mathrm{F}}} + kT_{\text{rep}}\delta _{t,n_{\mathrm{F}}}\right) \right] t \\
&-2\pi \mu \delta _{t,n_{\mathrm{F}}}t^{2} -2\pi f_{0}\tau + \phi _{\text{pn},k,n_{\mathrm{F}},1}(t-\tau) \\
&- \phi _{\text{pn},k,n_{\mathrm{F}},2}(t) - \Delta \Theta _{k,n_{\mathrm{F}}}, \tag{6}
\end{align*}
\begin{align*}
\tau _{\text{LoS},n_{\mathrm{A}}}^{2\to 1} &= \frac{\left\Vert \vec{p}_{\text{RX},n_{\mathrm{A}},1}-\vec{p}_{\text{TX},2}\right\Vert }{\mathrm{c}_{0}} \tag{7}\\
\tau _{\text{LoS},n_{\mathrm{A}}}^{1\to 2} &= \frac{\left\Vert \vec{p}_{\text{RX},n_{\mathrm{A}},2}-\vec{p}_{\text{TX},1}\right\Vert }{\mathrm{c}_{0}} \tag{8}
\end{align*}
\begin{align*}
\tau _{\ell,n_{\mathrm{A}}}^{2\to 1} &= \frac{\left\Vert \vec{p}_{\mathrm{T},\ell }-\vec{p}_{\text{TX},2}\right\Vert + \left\Vert \vec{p}_{\text{RX},n_{\mathrm{A}},1} - \vec{p}_{\mathrm{T},\ell }\right\Vert }{\mathrm{c}_{0}} \tag{9}\\
\tau _{\ell,n_{\mathrm{A}}}^{1\to 2} &= \frac{\left\Vert \vec{p}_{\mathrm{T},\ell }-\vec{p}_{\text{TX},1}\right\Vert + \left\Vert \vec{p}_{\text{RX},n_{\mathrm{A}},2} - \vec{p}_{\mathrm{T},\ell }\right\Vert }{\mathrm{c}_{0}}, \tag{10}
\end{align*}
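The LoS and target-path delays of (7)-(10) are straightforward to evaluate; a minimal sketch with hypothetical antenna and target positions (a 3-4-5 triangle for easy checking):

```python
import numpy as np

C0 = 299_792_458.0  # speed of light in m/s

def tof_los(p_rx, p_tx):
    """Line-of-sight time of flight, cf. (7)-(8)."""
    return np.linalg.norm(p_rx - p_tx) / C0

def tof_bistatic(p_target, p_tx, p_rx):
    """Bistatic time of flight via a point target, cf. (9)-(10)."""
    return (np.linalg.norm(p_target - p_tx)
            + np.linalg.norm(p_rx - p_target)) / C0

# Illustrative geometry: TX at the origin, RX 3 m away, target 4 m above TX
p_tx = np.array([0.0, 0.0])
p_rx = np.array([3.0, 0.0])
p_t  = np.array([0.0, 4.0])   # target path is 4 m + 5 m = 9 m
```
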
\begin{align*}
&s_{\mathrm{B},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{2\to 1} (t) = A_{\text{RX},LoS}^{2\to 1}\cdot \mathrm{e}^{\mathrm{j}\Phi _{\mathrm{B},k,n_{\mathrm{F}}}^{2\to 1}(t,\tau _{\text{LoS},n_{\mathrm{A}}}^{2\to 1})} \\
&\quad + \sum _{\ell } A_{\text{RX},\ell }^{2\to 1}\cdot \mathrm{e}^{\mathrm{j}\Phi _{\mathrm{B},k,n_{\mathrm{F}}}^{2\to 1}(t,\tau _{\ell,n_{\mathrm{A}}}^{2\to 1})} + n_{k,n_{\mathrm{A}},n_{\mathrm{F}},1}(t)\tag{11}\\
&s_{\mathrm{B},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{1\to 2} (t) = A_{\text{RX},LoS}^{1\to 2}\cdot \mathrm{e}^{\mathrm{j}\Phi _{\mathrm{B},k,n_{\mathrm{F}}}^{1\to 2}(t,\tau _{\text{LoS},n_{\mathrm{A}}}^{1\to 2})} \\
&\quad + \sum _{\ell } A_{\text{RX},\ell }^{1\to 2}\cdot \mathrm{e}^{\mathrm{j}\Phi _{\mathrm{B},k,n_{\mathrm{F}}}^{1\to 2}(t,\tau _{\ell,n_{\mathrm{A}}}^{1\to 2})} + n_{k,n_{\mathrm{A}},n_{\mathrm{F}},2}(t), \tag{12}
\end{align*}
Furthermore, if the rate of change of the phase noise term
\begin{align*}
&\phi _{\text{pn},k,n_{\mathrm{F}},2}(t-\tau) - \phi _{\text{pn},k,n_{\mathrm{F}},1}(t) \\
&\approx \phi _{\text{pn},k,n_{\mathrm{F}},2}(t) - \phi _{\text{pn},k,n_{\mathrm{F}},1}(t) \\
&\approx -\left[\phi _{\text{pn},k,n_{\mathrm{F}},1}(t-\tau) - \phi _{\text{pn},k,n_{\mathrm{F}},2}(t)\right]. \tag{13}
\end{align*}
The beat frequencies and peak phases that correspond to a given channel path with ToF
\begin{align*}
f_{\mathrm{B},k,n_{\mathrm{F}}}^{2\to 1} =& -f_{0}\delta _{t,n_{\mathrm{F}}} - \mu \tau + \mu (\Delta \tau _{0,n_{\mathrm{F}}} + kT_{\text{rep}}\delta _{t,n_{\mathrm{F}}}) \\
&+ \epsilon _{f,\text{pn},k,n_{\mathrm{F}}} + \epsilon _{f,\text{awgn},k,n_{\mathrm{F}},1}\tag{14}\\
f_{\mathrm{B},k,n_{\mathrm{F}}}^{1\to 2} =& f_{0}\delta _{t,n_{\mathrm{F}}} - \mu \tau - \mu (\Delta \tau _{0,n_{\mathrm{F}}} + kT_{\text{rep}}\delta _{t,n_{\mathrm{F}}}) \\
&- \epsilon _{f,\text{pn},k,n_{\mathrm{F}}} + \epsilon _{f,\text{awgn},k,n_{\mathrm{F}},2}, \tag{15}
\end{align*}
\begin{align*}
\varphi _{\mathrm{B},k,n_{\mathrm{F}}}^{2\to 1} &= -2\pi f_{0} \tau + \Delta \Theta _{k,n_{\mathrm{F}}} + \epsilon _{\varphi,\text{pn},k,n_{\mathrm{F}}} + \epsilon _{\varphi,\text{awgn},k,n_{\mathrm{F}},1}\tag{16}\\
\varphi _{\mathrm{B},k,n_{\mathrm{F}}}^{1\to 2} &= -2\pi f_{0} \tau - \Delta \Theta _{k,n_{\mathrm{F}}} - \epsilon _{\varphi,\text{pn},k,n_{\mathrm{F}}} + \epsilon _{\varphi,\text{awgn},k,n_{\mathrm{F}},2}, \tag{17}
\end{align*}
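Summing the two directions of (14)-(17) is what makes the round-trip observables useful: the clock-dependent terms and the phase-noise errors appear with opposite signs and cancel, leaving only the true delay (the independent AWGN errors remain and are omitted here). A numeric sketch with illustrative parameter values, not the paper's system parameters:

```python
import numpy as np

# Illustrative parameters (assumptions, not from the paper's tables)
f0, mu = 76e9, 1.5e9 / 100e-6      # carrier in Hz, chirp slope in Hz/s
tau = 20e-9                        # true LoS time of flight
delta_t, dtau0 = 1e-6, 3e-9        # relative clock drift, timing offset
k, Trep = 5, 120e-6                # chirp index and repetition interval
eps_f, eps_phi, dTheta = 12.0, 0.3, 1.1  # phase-noise-induced and start-phase errors

# Beat frequencies in the two directions, cf. (14)-(15), AWGN omitted
f_21 = -f0*delta_t - mu*tau + mu*(dtau0 + k*Trep*delta_t) + eps_f
f_12 =  f0*delta_t - mu*tau - mu*(dtau0 + k*Trep*delta_t) - eps_f

# Peak phases, cf. (16)-(17), AWGN omitted
phi_21 = -2*np.pi*f0*tau + dTheta + eps_phi
phi_12 = -2*np.pi*f0*tau - dTheta - eps_phi

# Summing both directions cancels all clock-dependent terms
f_rt   = f_21 + f_12      # = -2*mu*tau
phi_rt = phi_21 + phi_12  # = -4*pi*f0*tau
```
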
Bistatic SAR Theory
Relevant to this paper are near-field bistatic geometries in the 2D plane, as shown in Fig. 2. Hence, this section introduces a comprehensive discussion of the metrics of those systems, i.e., the resolution comparison of monostatic and bistatic SAR images and the angle-dependent bistatic RCS. For simplicity, the investigated scenarios in this section are limited to a moving transmitter (TX) with constant velocity
Geometry of a 2D bistatic SAR scene, where the transmitter (TX) is moved along a linear SAR trajectory 1.5 m in length with constant velocity, whereas the receiver (RX) position is fixed. Iso-range and iso-Doppler contours for a target at position number 3 are drawn in the image scene with the assumption that the TX is located at the SAR trajectory's center position. The system's bandwidth is 1.5 GHz, and the carrier wavelength is 3.94 mm.
A. Bistatic SAR Resolution
The resolution of an FMCW radar SAR system can be divided into range and Doppler resolution. As discussed in [6], the point spread function (PSF) of a point target in the imaging scene is increasingly spatially variant as the radar nodes approach the image scene (see Fig. 2). Cardillo derived a gradient-based approach to approximate the spatial-dependent range and cross-range resolution of a bistatic SAR system in [32]. Therefore, this method is introduced and used to quantify the resolution.
The bistatic range covers the distance from the TX antenna to the target and from the target to the RX antenna:
\begin{equation*}
R_{\mathrm{b}}(\vec{p}_{\mathrm{T}}) = \left\Vert \vec{p}_{\mathrm{T}}-\vec{p}_{\text{TX}}\right\Vert + \left\Vert \vec{p}_{\text{RX}}-\vec{p}_{\mathrm{T}}\right\Vert, \tag{18}
\end{equation*}
\begin{align*}
f_{\mathrm{D}}(\vec{p}_{\mathrm{T}}) &= \frac{1}{\lambda _{0}}\left\langle \vec{v}_{\text{TX}},\vec{n}_{\mathrm{TX,T}}\right\rangle \\
\ &\ \text{with} \quad \vec{n}_{\mathrm{TX,T}} = \frac{\vec{p}_{\mathrm{T}}-\vec{p}_{\text{TX}}}{\left\Vert \vec{p}_{\mathrm{T}}-\vec{p}_{\text{TX}}\right\Vert },\tag{19}
\end{align*}
\begin{align*}
\vec{\nabla } R_{\mathrm{b}}= \left(\frac{\partial }{\partial x}R_{\mathrm{b}}, \frac{\partial }{\partial y}R_{\mathrm{b}}\right)^{\mathrm{T}},\tag{20}\\
\vec{\nabla } f_{\mathrm{D}} = \left(\frac{\partial }{\partial x}f_{\mathrm{D}}, \frac{\partial }{\partial y}f_{\mathrm{D}}\right)^{\mathrm{T}}, \tag{21}
\end{align*}
\begin{equation*}
\vec{n}_{\vec{\nabla }R_{\mathrm{b}}} = \frac{\vec{\nabla } R_{\mathrm{b}}}{\left|\vec{\nabla } R_{\mathrm{b}}\right|}\quad \text{and}\quad \vec{n}_{\vec{\nabla }f_{\mathrm{D}}} = \frac{\vec{\nabla } f_{\mathrm{D}}}{\left|\vec{\nabla } f_{\mathrm{D}} \right|} \tag{22}
\end{equation*}
\begin{equation*}
\vec{\delta }_{r} = 0.89\cdot \frac{1/B}{\left|\vec{\nabla } R_{\mathrm{b}}\right|}\cdot \vec{n}_{\vec{\nabla }R_{\mathrm{b}}}, \tag{23}
\end{equation*}
\begin{equation*}
\vec{\delta }_{\mathrm{D}} = 0.89\cdot \frac{1/T_{\text{int}}}{\left|\vec{\nabla } f_{\mathrm{D}} \right|}\cdot \vec{n}_{\vec{\nabla }f_{\mathrm{D}}}, \tag{24}
\end{equation*}
\begin{equation*}
\delta _{\text{xD}} = \frac{\delta _{\mathrm{r}}}{\sin \Theta }. \tag{25}
\end{equation*}
\begin{equation*}
\delta _{\mathrm{cD,nf}} = {0.89}\frac{7\lambda _{0}}{\Psi _{\mathrm{r}}^{2}}. \tag{26}
\end{equation*}
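The gradient-based resolution of (18)-(24) can be evaluated numerically. The sketch below approximates the range gradient of (20) by central differences and assumes the delay-gradient reading of (23), i.e., the spatial gradient of R_b is converted to a delay gradient before dividing 1/B by its magnitude; all names are illustrative:

```python
import numpy as np

C0 = 299_792_458.0

def bistatic_range(p, p_tx, p_rx):
    """Bistatic range R_b of (18)."""
    return np.linalg.norm(p - p_tx) + np.linalg.norm(p_rx - p)

def range_resolution_vector(p_t, p_tx, p_rx, B, h=1e-4):
    """Range-resolution vector of (23) via a central-difference
    approximation of the gradient (20) at the target position."""
    grad = np.zeros(2)
    for i in range(2):
        dp = np.zeros(2); dp[i] = h
        grad[i] = (bistatic_range(p_t + dp, p_tx, p_rx)
                   - bistatic_range(p_t - dp, p_tx, p_rx)) / (2 * h)
    grad_tau = grad / C0                       # delay gradient in s/m
    return 0.89 * (1.0 / B) / np.linalg.norm(grad_tau)**2 * grad_tau
```

For a quasi-monostatic check (TX and RX at the same point), the magnitude reduces to the familiar 0.89 c0/(2B).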
Comparison of the cross-Doppler resolution between the analytical formulas of equations (25) and (26) and the numerical results from a back-projection evaluation for point targets swept along the x (a) and y (b) axes, referring to the scene of Fig. 2. The monostatic cross-Doppler resolution derived from the back-projection images is also plotted.
B. Impact of Self-Localization Errors
To assess the required self-localization accuracy of the radar-based positioning algorithm for the BiSAR processing, the influence of erroneous trajectory estimation on the imaging performance needs to be investigated. For automotive SAR scenarios, the effects of constant position and velocity estimation errors on the SAR imaging performance are discussed in [19], [35]. While moderate positional offsets have a negligible influence on the processing results, errors in the estimated velocity lead to a displacement of the estimated target position in the direction of the Doppler gradient
The coherent integration loss, i.e., the ratio between the actual target's peak power and the peak power of a perfectly focused target, is used as a metric to assess SAR image quality. The result of this simulation is shown in Fig. 4. It indicates that a velocity error component along the trajectory causes significantly stronger defocusing than an error perpendicular to the trajectory direction. Additionally, the degree of defocusing strongly depends on the target's position. These findings provide an indication of the required precision of the proposed particle filter's tracking results in the measurement evaluation section. While a larger error perpendicular to the trajectory is permissible, the integration loss due to a velocity error in the trajectory direction worsens significantly faster, especially for target 5 with its larger bistatic angle.
C. Bistatic RCS
To complete the brief theoretical discussion on near-field bistatic SAR, emphasis will be placed on the differences in scattering characteristics of complex targets in monostatic and bistatic SAR measurements, which are described by their RCS values.
The bistatic RCS is a measure of the energy an illuminated target scatters in the direction of a receiver that is spatially separated from the transmitter's position, and can be calculated by
\begin{equation*}
\sigma = \lim _{r\to \infty } 4\pi r^{2} \frac{\left|E_{\mathrm{s}}\right|^{2}}{\left|E_{\mathrm{i}}\right|^{2}}, \tag{27}
\end{equation*}
Bistatic RCS simulation of a bicycle model using the CST Studio Suite 2024 ray-tracing tool. The target was excited from an azimuth angle of 45°, and the corresponding bistatic RCS was observed over a range from −90° to 90° (b). The observed rays for the monostatic case (45°) (a) and for a bistatic observation angle of −45° (c) are shown as examples.
In addition to this simulation, several other publications support the information gain of bistatic radar imaging compared to only monostatic observation. In [7], it is shown that geometries with weak monostatic scattering behavior often yield large bistatic returns. A practical use case is given in [40], where improvements in the detection of drones with multistatic radar networks are investigated.
Particle Filter-Based Localization
An important step for SAR processing is to obtain precise estimates of the radars' positions over the whole SAR trajectory. The localization of static constellations using single FMCW chirp-sequence frames was exhaustively discussed in our previous work [27]. Now, the aim is to derive a PF-based algorithm capable of continuously estimating the radars' pose and velocity over multiple chirp-sequence frames in a non-static network with a relative precision that enables SAR processing over long apertures. We start with the theory of the PF algorithm, followed by the derivation of the state model and the measurement functions of the bistatic radar system. This section is concluded by a verification of the proposed PF algorithm using radar measurements in conjunction with an optical reference system supplying ground-truth positions.
A. Theory
The basic equations behind the PF originate in the Bayes filter algorithm. This algorithm generally has two quantities: the observations
The conditional probability density function (PDF) of the state vector, in [41] referred to as belief
\begin{equation*}
\text{bel}\left(\vec{x}_{D_{i}}\right) = p\left(\vec{x}_{D_{i}}\vert \vec{z}_{D_{0}:D_{i}}\right), \tag{28}
\end{equation*}
\begin{equation*}
\overline{\text{bel}}\left(\vec{x}_{D_{i}}\right) = \int p\left(\vec{x}_{D_{i}}\vert \vec{x}_{D_{i-1}}\right) \text{bel}\left(\vec{x}_{D_{i-1}}\right) \mathrm{d}\vec{x}_{D_{i-1}}. \tag{29}
\end{equation*}
\begin{equation*}
\text{bel}\left(\vec{x}_{D_{i}}\right) = \eta \cdot p\left(\vec{z}_{D_{i}}\vert \vec{x}_{D_{i}}\right) \cdot \overline{\text{bel}}\left(\vec{x}_{D_{i}}\right), \tag{30}
\end{equation*}
This basic algorithm is fundamental to a variety of stochastic filters that differ mainly in their way of representing the densities
\begin{equation*}
\mathcal {X}_{D_{i}} = \left\lbrace \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}, w_{D_{i}}^{n_{\mathrm{P}}}\right\rbrace _{n_{\mathrm{P}}=1}^{N_{\mathrm{P}}}, \tag{31}
\end{equation*}
\begin{equation*}
w_{D_{i}}^{n_{\mathrm{P}}} \propto w_{D_{i-1}}^{n_{\mathrm{P}}}\cdot \frac{p\left(\vec{z}_{D_{i}}\vert \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right)p\left(\vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\vert \vec{x}_{D_{i-1}}^{\;n_{\mathrm{P}}}\right)}{q\left(\vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\vert \vec{x}_{D_{i-1}}^{\;n_{\mathrm{P}}};\vec{z_{D_{i}}}\right)}, \tag{32}
\end{equation*}
\begin{equation*}
w_{D_{i}}^{n_{\mathrm{P}}} \propto w_{D_{i-1}}^{n_{\mathrm{P}}}\cdot p\left(\vec{z}_{D_{i}}\vert \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right). \tag{33}
\end{equation*}
\begin{equation*}
\text{bel}(\vec{x}_{D_{i}}) \approx \sum _{n_{\mathrm{P}}=1}^{N_{\mathrm{P}}} w_{D_{i}}^{n_{\mathrm{P}}} \delta \left(\vec{x}_{D_{i}} - \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right), \tag{34}
\end{equation*}
\begin{equation*}
N_{\mathrm{P,eff,}D_{i}} \approx \frac{1}{\sum _{n_{\mathrm{P}}=1}^{N_{\mathrm{P}}}\left(w_{D_{i}}^{n_{\mathrm{P}}}\right)^{2}}. \tag{35}
\end{equation*}
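The bootstrap weight update (33) and the effective sample size (35) reduce to a few lines; a minimal sketch with illustrative names:

```python
import numpy as np

def update_weights(weights, likelihoods):
    """Bootstrap weight update of (33): with the transition density as
    proposal, each weight is simply scaled by the particle's likelihood."""
    w = weights * likelihoods
    return w / np.sum(w)          # renormalize

def effective_sample_size(weights):
    """Effective number of particles, cf. (35)."""
    return 1.0 / np.sum(weights**2)

# Example: two of four particles receive zero likelihood
w = update_weights(np.full(4, 0.25), np.array([1.0, 1.0, 0.0, 0.0]))
# the surviving particles share all the weight, so N_eff drops from 4 to 2
```
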
B. Implementation
First, the state vector is defined as
\begin{equation*}
\vec{x}_{D_{i}} = \begin{pmatrix}x_{\mathrm{R},D_{i}} & y_{\mathrm{R},D_{i}} & \dot{x}_{\mathrm{R},D_{i}} & \dot{y}_{\mathrm{R},D_{i}} & \gamma _{D_{i}} \end{pmatrix}^{\mathrm{T}}. \tag{36}
\end{equation*}
Bistatic radar constellation with two-channel SIMO radars, where three sub-apertures along the radar trajectory are sketched. In this example, four measurements per sub-aperture are taken and are processed during one filter iteration. The different LoS path distances are drawn.
To predict the particles of the
\begin{equation*}
\vec{x}_{D_{i}}^{\;n_{\mathrm{P}}} = \underbrace{\begin{pmatrix}1 & 0 & \Delta T & 0 & 0 \\
0 & 1 & 0 & \Delta T & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 \end{pmatrix}}_{\mathbf {F}\left[\Delta T\right]} \vec{x}_{D_{i-1}}^{\;n_{\mathrm{P}}} + \vec{w}_{D_{i}}, \tag{37}
\end{equation*}
\begin{align*}
&\mathbf {P}_{D_{i}} = \vec{\alpha }_{\mathrm{P}}\cdot \frac{N_{\mathrm{P}}}{N_{\mathrm{P,eff,}D_{i}}} \\
& \cdot \text{diag}\begin{pmatrix}\sigma _{x_{\mathrm{R}},D_{i}}^{2} & \sigma _{y_{\mathrm{R}},D_{i}}^{2} & \sigma _{\dot{x}_{\mathrm{R}},D_{i}}^{2} & \sigma _{\dot{y}_{\mathrm{R}},D_{i}}^{2} & \sigma _{\gamma,D_{i}}^{2} \end{pmatrix}, \tag{38}
\end{align*}
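The prediction step (37) with additive process noise can be sketched as follows. The adaptive scaling of (38) by the effective-sample-size ratio is assumed to be folded into the per-state standard deviations passed in, and all names are illustrative:

```python
import numpy as np

def predict_particles(particles, dT, sigmas, rng):
    """Constant-velocity prediction of (37) plus additive process noise.
    particles: (N_P, 5) array of states [x_R, y_R, vx_R, vy_R, gamma];
    sigmas: per-state noise standard deviations (the adaptive factor
    alpha_P * N_P / N_P_eff of (38) is assumed already applied)."""
    F = np.array([[1.0, 0.0, dT, 0.0, 0.0],
                  [0.0, 1.0, 0.0, dT, 0.0],
                  [0.0, 0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 0.0, 1.0]])
    return particles @ F.T + rng.normal(scale=sigmas, size=particles.shape)

rng = np.random.default_rng(0)
parts = np.tile([0.0, 0.0, 0.1, 0.0, 0.0], (100, 1))
pred = predict_particles(parts, 0.05, np.array([1e-3, 1e-3, 1e-4, 1e-4, 1e-5]), rng)
# the x coordinate advances by vx * dT = 5 mm on average
```
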
\begin{equation*}
\tilde{f}_{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{\text{RT}} = -\mu \left(\tau _{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{2\to 1} + \tau _{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{1\to 2}\right) \tag{39}
\end{equation*}
\begin{equation*}
\tilde{\varphi }_{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{\text{RT}} = -2\pi f_{0} \left(\tau _{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{2\to 1} + \tau _{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{1\to 2}\right), \tag{40}
\end{equation*}
The round-trip distances can be calculated from the measured frequencies using (39) by
\begin{equation*}
\tilde{d}_{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{\text{RT}} = -\mathrm{c}_{0}\frac{\tilde{f}_{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{\text{RT}}}{\mu }. \tag{41}
\end{equation*}
\begin{align*}
&p\left(\vec{f}_{D_{i}}^{\;\text{RT}}\vert \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right) = \sum _{n_{\text{sub}}=0}^{N_{\text{sub}}-1} \sum _{n_{\mathrm{A}}=0}^{N_{\text{RX}}-1} \\
&\quad \frac{1}{\sqrt{2\pi \sigma _{d_{\text{rt}}}^{2}}}\exp \left\lbrace -\frac{\left(d_{n_{\mathrm{A}},n_{\text{sub}}}^{\text{RT}}\left(\vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right)-\tilde{d}_{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{\text{RT}}\right)^{2}}{2\sigma _{d_{\text{rt}}}^{2}}\right\rbrace, \tag{42}
\end{align*}
\begin{align*}
&p\left(\vec{\varphi }_{D_{i}}^{\;\text{RT}}\vert \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right) =\bigg \vert \sum _{n_{\text{sub}}=0}^{N_{\text{sub}}-1} \sum _{n_{\mathrm{A}}=0}^{N_{\text{RX}}-1} \\
&\quad \exp \left\lbrace \mathrm{j}\left[k d_{n_{\mathrm{A}},n_{\text{sub}}}^{\text{RT}}\left(\vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right) + \tilde{\varphi }_{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{\text{RT}}\right]\right\rbrace \bigg \vert, \tag{43}
\end{align*}
\begin{align*}
&p\left(\vec{\varphi }_{D_{i}}^{\;\mathrm{2\to 1}}, \vec{\varphi }_{D_{i}}^{\;\mathrm{1\to 2}}\vert \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right) = \sum _{n_{\text{sub}}=0}^{N_{\text{sub}}-1} \\
&\ \left|\sum _{n_{\mathrm{A}}=0}^{N_{\text{RX}}-1} \exp \left\lbrace \mathrm{j}\left[k d_{n_{\mathrm{A}},n_{\text{sub}}}^{\mathrm{2\to 1}}\left(\vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right) + \tilde{\varphi }_{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{\mathrm{2\to 1}}\right]\right\rbrace \right| \\
&\ \cdot \left|\sum _{n_{\mathrm{A}}=0}^{N_{\text{RX}}-1} \exp \left\lbrace \mathrm{j}\left[k d_{n_{\mathrm{A}},n_{\text{sub}}}^{\mathrm{1\to 2}}\left(\vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right) + \tilde{\varphi }_{n_{\mathrm{A}},D_{i},n_{\text{sub}}}^{\mathrm{1\to 2}}\right]\right\rbrace \right|. \tag{44}
\end{align*}
The combined measurement function is given by
\begin{align*}
p\left(\vec{z}_{D_{i}}\vert \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right) =& \eta \cdot p\left(\vec{f}_{D_{i}}^{\;\text{RT}}\vert \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right) \cdot p\left(\vec{\varphi }_{D_{i}}^{\;\text{RT}}\vert \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right)^{2}\\
& \cdot p\left(\vec{\varphi }_{D_{i}}^{\;\mathrm{2\to 1}}, \vec{\varphi }_{D_{i}}^{\;\mathrm{1\to 2}}\vert \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\right)^{2}, \tag{45}
\end{align*}
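The likelihood evaluation of (42)-(45) can be sketched for a single particle. For brevity, the per-direction phase term (44) is omitted and the round-trip quantities are flattened into 1D arrays over all sub-apertures and RX channels; k denotes the carrier wavenumber, and the factor η is dropped because the weights are renormalized afterwards anyway:

```python
import numpy as np

def likelihood_rt_distance(d_pred, d_meas, sigma):
    """Gaussian round-trip-distance likelihood, cf. (42)."""
    return np.sum(np.exp(-(d_pred - d_meas)**2 / (2 * sigma**2))
                  / np.sqrt(2 * np.pi * sigma**2))

def likelihood_rt_phase(d_pred, phi_meas, k):
    """Coherent round-trip-phase likelihood, cf. (43): predicted and
    measured phases that align across channels add up constructively."""
    return np.abs(np.sum(np.exp(1j * (k * d_pred + phi_meas))))

def combined_likelihood(d_pred, d_meas, phi_meas, k, sigma):
    """Combined measurement function, cf. (45); the per-direction
    term (44) is omitted in this sketch."""
    return (likelihood_rt_distance(d_pred, d_meas, sigma)
            * likelihood_rt_phase(d_pred, phi_meas, k)**2)
```

A particle whose predicted round-trip distances match both the measured distances and the measured phases receives the maximum weight.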
Maximum cuts of the unnormalized five-dimensional measurement function given in equation (45) showing slices of position
The general bootstrap PF algorithm can be implemented with the above derived set of equations. As previously mentioned, the first frame is processed by the localization scheme from [27]. This gives a coarse estimate of the state, from which the initial particle set is drawn. Subsequently, the state model of (37) is applied to all particles to predict the next state. This step is followed by a recalculation of the weights using (33) and (45).
To further decrease sample impoverishment, the effective number of particles is compared to a threshold in every filter iteration, and a resampling step is conducted if it falls below this threshold. A systematic resampling approach is used, as it offers high computational efficiency and a low sampling variance [41]. The threshold is set to
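A minimal systematic resampling step, as described above, draws a single uniform offset and places a regular comb over the cumulative weights:

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: one random offset, then a regular comb
    over the cumulative weights. Low sampling variance, O(N_P) cost."""
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0               # guard against rounding errors
    return np.searchsorted(cumulative, positions)

rng = np.random.default_rng(1)
idx = systematic_resample(np.array([0.5, 0.5, 0.0, 0.0]), rng)
# only the first two particles carry weight and can be selected
```

The returned indices select the surviving particles, whose weights are then reset to 1/N_P.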
The state estimate at the
\begin{equation*}
\hat{\vec{x}}_{D_{i}} = {\mathbb {E}}\left[\mathcal {X}_{D_{i}}\right] = \frac{\sum _{n_{\mathrm{P}}} \vec{x}_{D_{i}}^{\;n_{\mathrm{P}}}\cdot w_{D_{i}}^{n_{\mathrm{P}}}}{\sum _{n_{\mathrm{p}}} w_{D_{i}}^{n_{\mathrm{P}}}}. \tag{46}
\end{equation*}
C. Localization Results
The described PF algorithm with the filter parameters shown in Table 3 is validated using measurements taken in the laboratory based on the simulation scenario in Fig. 2. The coordinate system used for localization corresponds to the one spanned by radar node 1 as described in the geometry part of Section II. The ground truth information of the filter state is provided by an optical reference system with a 3D root mean square positioning error of 1.0 mm. Therefore, each radar node is equipped with optical markers to obtain its position and orientation. Radar 2 is mounted on a linear actuator to generate SAR trajectories with constant velocities. In this setup, five different measurements were taken with a metal rod at different target positions indicated in Fig. 8. The applied radar parameters are given in Table 4. The filter was evaluated by 16 runs per dataset to show that the PF converges to a global best estimate. The filter estimates of dataset number 1 are plotted in Fig. 9, in which the spread between single filter runs is shown by the 95% confidence interval.
Indoor scenario for bistatic SAR measurements with PF-based trajectory estimation. The lab is equipped with an optical tracking system to provide ground truth data. A brass rod serves as a target and was deployed at different positions (1-5), which are marked by green dots.
Evaluation of 16 filter runs. The transparent area around the respective mean values shows the 95% confidence interval. The dashed lines show the ground truth data received by the optical reference system.
To assess the filter's localization performance, the root mean square errors (RMSE) for position, orientation, and relative velocity offset in the trajectory and cross-trajectory directions are calculated over all time samples and data sets. The recorded radar raw data is also processed using the chirp sequence-based approach from our previous work in [27] to demonstrate the achieved improvement. Additionally, the particle filter position estimates over time are fitted to a linear trajectory model, which further enhances self-localization given a linear movement with constant velocity. The results are shown in Table 5. A significant improvement in positioning accuracy compared to chirp sequence-based processing is achieved by the proposed algorithm. By evaluating the velocity estimates using the results given in Fig. 4 of Section III-B, an acceptable integration loss of less than 3 dB in the SAR images can be expected, especially for the linearly fitted estimates.
SAR Imaging Using Bistatic Self-Localization
After successfully verifying the performance of the self-localization algorithm, the SAR capabilities of the system will be demonstrated. For this purpose, a linearly fitted model based on our particle filter state estimates, denoted as
A. Bistatic Synchronization
As there are still synchronization errors in the bistatic radar data, further processing steps must be carried out frame-wise before creating the bistatic images. To eliminate the time drift and offset, as well as the phase errors described in the signal model in Section II, the steps from [27] are conducted for every chirp sequence, using the localization results gained by the PF processing.
The relative frequency offset
This yields the synchronized bistatic signals from radar 2 to radar 1
B. Back-Projection
The SAR images are then calculated using the back-projection algorithm. To this end, the fast-time Fourier transform of the beat signals is performed, giving
\begin{align*}
S_{\mathrm{B},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{1\to 2/2\to 1}(f) &= \mathcal {F}\left\lbrace \tilde{s}_{\mathrm{B},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{1\to 2/2\to 1}(t)\right\rbrace \tag{47}\\
S_{\mathrm{B},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{2\to 2}(f) &= \mathcal {F}\left\lbrace s_{\mathrm{B},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{2\to 2}(t)\right\rbrace. \tag{48}
\end{align*}
For each pixel position $\vec{p}(x,y)$, the hypothetical bistatic distance from transmitter $i$ via the pixel to receiver $j$ is
\begin{align*}
d_{\text{hyp},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{i\to j}(x,y) =& \left\Vert \vec{p}(x,y) - \vec{p}_{\text{TX},i,k,n_{\mathrm{F}}}\right\Vert \\
&+ \left\Vert \vec{p}_{\text{RX},j,k,n_{\mathrm{A}},n_{\mathrm{F}}} - \vec{p}(x,y)\right\Vert, \tag{49}
\end{align*}
which corresponds to the hypothetical beat frequency
\begin{equation*}
f_{\mathrm{B},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{i\to j}(x,y) = -\mu \frac{d_{\text{hyp},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{i\to j}(x,y)}{\mathrm{c}_{0}}. \tag{50}
\end{equation*}
The SAR image for each receiver channel $n_{\mathrm{A}}$ is then obtained by the coherent summation
\begin{align*}
I_{\text{SAR},n_{\mathrm{A}}}^{i\to j} (x,y) =& \sum _{n_{\mathrm{F}}=0}^{N_{\mathrm{F}}-1}\sum _{k=0}^{N_{\text{ch}}-1} S_{\mathrm{B},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{i\to j}(f_{\mathrm{B},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{i\to j}(x,y))\\
& \cdot \mathrm{e}^{2\pi \mathrm{j}f_{\mathrm{c}}\frac{d_{\text{hyp},k,n_{\mathrm{A}},n_{\mathrm{F}}}^{i\to j}(x,y)}{\mathrm{c}_{0}}}, \tag{51}
\end{align*}
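The processing chain of (47)–(51) can be condensed into a short back-projection routine. The following is a minimal sketch under simplifying assumptions: a 2-D geometry, a nearest-bin lookup in a zero-padded fast-time spectrum instead of exact spectral interpolation, and illustrative variable names (`mu` is the chirp rate, `f_c` the carrier frequency).

```python
import numpy as np

C0 = 299_792_458.0  # speed of light [m/s]

def backproject(s_beat, fs, mu, f_c, p_tx, p_rx, grid_x, grid_y, pad=8):
    """Near-field bistatic back-projection following (47)-(51) (sketch).
    s_beat : complex beat signals, shape [N_chirps, N_samples]
    p_tx, p_rx : per-chirp TX/RX positions, shape [N_chirps, 2]
    Returns the complex SAR image on the (x, y) grid."""
    n_chirps, n_samp = s_beat.shape
    n_fft = pad * n_samp                      # zero-padding refines the bins
    S = np.fft.fft(s_beat, n=n_fft, axis=1)   # fast-time spectra, (47)/(48)
    df = fs / n_fft                           # spectral bin spacing
    X, Y = np.meshgrid(grid_x, grid_y)
    img = np.zeros(X.shape, dtype=complex)
    for k in range(n_chirps):
        # (49): hypothetical distance TX -> pixel -> RX for every pixel
        d_hyp = (np.hypot(X - p_tx[k, 0], Y - p_tx[k, 1])
                 + np.hypot(X - p_rx[k, 0], Y - p_rx[k, 1]))
        # (50): corresponding beat frequency (sign convention as in (50))
        f_b = -mu * d_hyp / C0
        idx = np.rint(f_b / df).astype(int) % n_fft  # nearest spectral bin
        # (51): coherent sum with carrier-phase compensation
        img += S[k, idx] * np.exp(2j * np.pi * f_c * d_hyp / C0)
    return img
```

With a simulated point target, the magnitude of the resulting image peaks at the target position, which is the behavior the near-field PSF measurements below are meant to validate.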
C. Bistatic PSF
First, an investigation of the bistatic near-field PSFs is conducted to validate the performance of the trajectory estimation and synchronization by comparing the images to the ideal simulations. Thus, the scenario shown in Fig. 2 is adopted for this measurement. Fig. 8 shows a picture of the laboratory setup. To obtain a point-like target, a small metal rod with a diameter of 2.0 cm is used. The results of these measurements are illustrated in Fig. 10. The extent of the PSFs is comparable to the ideal simulation results. The peak positions are slightly shifted from the actual target positions by a few centimeters due to residual localization and synchronization errors, which is acceptable for most applications. A significant difference between the simulation and the measurement can be seen in the image of the bottom right target (number 5), where a considerable amount of signal energy is spread over the bottom right corner. This effect is caused by direct path interference (DPI), but it can be suppressed by advanced DPI suppression techniques [47].
Bistatic SISO SAR images
D. Complex Target Scene
After validating that the direct path processing provides sufficient coherence in time and space, the differences in imaging performance for complex targets between monostatic and bistatic SAR recordings are investigated. Based on the results of Section IV, constellations in which radar 2 moves towards radar 1 were chosen, as this increases the localization performance. To increase the signal-to-noise ratio (SNR), all RX channels of the receiving radar are summed coherently. The resulting SIMO SAR images are calculated as
\begin{equation*}
I_{\text{SAR}}^{i\to j} (x,y) = \sum _{n_{\mathrm{A}}=0}^{N_{\text{RX}}-1} I_{\text{SAR},n_{\mathrm{A}}}^{i\to j} (x,y). \tag{52}
\end{equation*}
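The channel summation in (52) amounts to a single complex sum over the RX dimension; since the target response adds coherently while uncorrelated receiver noise adds incoherently, the SNR grows with the number of channels. A minimal sketch (the function name and array layout are illustrative):

```python
import numpy as np

def simo_image(channel_images):
    """Coherent sum of the per-RX-channel SAR images, cf. (52).
    channel_images : complex array of shape [N_RX, N_y, N_x]."""
    return np.sum(channel_images, axis=0)
```

For N_RX channels, the coherent signal amplitude scales with N_RX while the noise amplitude scales only with its square root, i.e., an SNR gain of N_RX in power.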
Bistatic outdoor SAR imaging scene with a bicycle and two scooters (a). The resulting bistatic (b) and monostatic (c) images are plotted together with the radar positions.
SAR measurement scene of a person in front of the bistatic constellation (a) with the bistatic (b) and monostatic (c) images.
Conclusion
This paper presents a precise bistatic self-localization scheme that utilizes the direct path signals between the two involved radar nodes, together with the processing chain required to facilitate bistatic SAR imaging with uncoupled radars based solely on self-localization. In our theoretical framework, we note the differences between monostatic and bistatic SAR imaging as well as the effects resulting from the near-field character of the investigated setups. The localization algorithm, based on a particle filter, is presented in detail. Its implementation enables the use of absolute phase measurements, as the PF is suited for multimodal measurement functions and is therefore capable of handling phase ambiguities. Measurement evaluations of the localization algorithm demonstrate a total positioning precision of better than 2 cm, which is comparable to the EKF approach. In particular, the radial components of position and velocity can be estimated very precisely due to the displacement information contained in the absolute phases. Consequently, the estimation performance improves significantly for SAR constellations in which the radar nodes move towards each other. Subsequently, based on the linearly extrapolated positioning results after the last filter iteration and using the synchronization scheme introduced in our previous work, monostatic and bistatic SAR imaging over 1.5 m-long trajectories was demonstrated. With linear trajectories, the spatial and temporal coherence of the bistatic dataset was sufficient for adequate focusing of the images without applying any autofocusing techniques. The measurement results of this SAR processing further indicate the benefit of multiperspective imaging, as the bistatic images yielded additional information about complex targets in the scene. Based on these first bistatic SAR experiments with non-static uncoupled radar nodes, the proposed processing scheme can be tested with more complex scenarios, e.g., with curved apertures or movement of both nodes, in future work. The proposed setup and algorithm operate in a two-dimensional space, which is sufficient for adoption in automotive scenarios. However, transferring the approach to a UAV-based multistatic radar remote sensing system will require an extension of the hardware and algorithms to support three-dimensional pose estimation, which will also be the subject of future work.