Tightly coupled VLP/INS integrated navigation by inclination estimation and blockage handling
Satellite Navigation volume 6, Article number: 7 (2025)
Abstract
Visible Light Positioning (VLP) has emerged as a promising technology capable of delivering indoor localization with high accuracy. In VLP systems that use PhotoDiodes (PDs) as light receivers, the Received Signal Strength (RSS) is affected by the incidence angle of light, making the inclination of PDs a critical parameter in the positioning model. Currently, most studies assume the inclination to be constant, limiting the applications and positioning accuracy. Additionally, light blockages may severely interfere with the RSS measurements, but the literature has not explored blockage detection in real-world experiments. To address these problems, we propose a tightly coupled VLP/INS (Inertial Navigation System) integrated navigation system that uses graph optimization to account for varying PhotoDiode (PD) inclinations and VLP blockages. We also propose a method to simultaneously estimate the robot’s pose and the locations of some unknown Light-Emitting Diodes (LEDs). Simulations and two groups of real-world experiments demonstrate the effectiveness of our approach. Despite inclination changes and blockages, one group achieved an average positioning accuracy of 10 cm during movement, inclination accuracy within 1 degree, and a 100% blockage detection success rate, while the other group achieved an average accuracy of 11.5 cm, showing the robustness of our method for VLP-based localization applications.
Introduction
Due to the increasing need for Location-Based Services (LBS), Visible Light Positioning (VLP) has attracted great academic and commercial interest. VLP utilizes Light-Emitting Diodes (LEDs) to emit light signals and shows great potential for Indoor Positioning Systems (IPSs) since LEDs have many important features, such as high bandwidth, high energy efficiency, long lifetime, and low infrastructure cost (Zhuang et al., 2018). Compared with widely used IPS methods such as WiFi, Bluetooth, Ultra-WideBand (UWB), odometry, vision, and LiDAR, VLP offers multiple advantages simultaneously (Zhuang et al., 2018; El-Sheimy & Li, 2021): high accuracy, low infrastructure and computational cost, and easy deployment, as listed in Table 1.
In the VLP literature to date, PhotoDiodes (PDs) and cameras are the main receivers. A PD converts the received light signal into current, and the luminance is obtained by measuring the output voltage. In PD-based VLP systems, Received Signal Strength (RSS) (Sun et al., 2022) is the most direct measurement and can be easily obtained with a single PD. With a combination of multiple PDs, Angle of Arrival (AOA) measurements can also be obtained (Zhu et al., 2019; Aparicio-Esteve et al., 2021). With additional devices, a PD can also measure Time of Arrival (TOA) (Keskin & Gezici, 2016) and Time Difference of Arrival (TDOA) (Du et al., 2018) for positioning. Image-based VLP systems use imaging geometry to solve the camera’s position by taking photos of LEDs. Since cameras are commonly equipped sensors, image-based VLP is also used in both consumer products and industry (Guan et al., 2022). A majority of VLP systems (Zhuang et al., 2018) use a PD as the receiver since the cost of a single PD is significantly lower. This paper focuses mainly on PD-based systems.
In general, PD-based positioning systems use the Lambertian radiation model to estimate the receiver’s position. Since Lambertian radiation is determined by the light transmission distance and the incidence and irradiance angles, positioning becomes difficult when the inclination of the PD varies. To simplify the positioning problem, most experimental (Sun et al., 2022; Hua et al., 2021; Li et al., 2014, 2017) and part of the simulated (Zhang et al., 2020; Jung et al., 2014) PD-based research assumed that the receiver is levelly placed. The studies (Yang et al., 2014; Zhu et al., 2019; Kim et al., 2019; Cai et al., 2017) considered the tilt angle in the positioning model, but the tilt angles were known values. However, 3D positioning and inclination estimation are necessary for some users, e.g., pedestrians on stairs, legged robots, and indoor UAVs.
To cope with unknown receiver inclinations, many works (Zou et al., 2017; Yasir et al., 2014; Wang et al., 2018; Sheikholeslami et al., 2021) used Inertial Measurement Unit (IMU) sensors (including an accelerometer and a gyroscope) to estimate the inclination. Zou et al. (2017) used an Inertial Navigation System (INS) that fused gyroscope and accelerometer data to solve the tilt angle. However, their system wrongly assumed that the tilt axis is perpendicular to the Line of Sight (LOS) and reduced the geometric problem to a two-dimensional one. The works (Yasir et al., 2014; Wang et al., 2018) used an accelerometer while Sheikholeslami et al. (2021) used the integration of gyroscope data to estimate the inclination of the receiver; additionally, Wang et al. (2018) used a magnetometer to estimate the heading angle. However, the inclinations estimated with these methods may suffer from IMU bias errors. These positioning methods are also not robust in Non-Line-of-Sight (NLOS) situations (e.g., light blockages). Additionally, Zhou et al. (2018, 2019) used convex optimization to simultaneously estimate the receiver’s location and orientation, but their simulations show the demand for a sufficient number (at least 20) of LEDs in the Field of View (FOV), which is impractical in real applications.
Visible light cannot penetrate opaque objects; thus, LOS blockages can lead to erroneous RSS measurements and severely damage the positioning system (Sun et al., 2022; Vuong et al., 2022; Yang et al., 2020; Zhang et al., 2020; Singh et al., 2024; Gong et al., 2022). The literature has discussed both the theoretical modeling (Tang et al., 2021; Hosseinianfar & Brandt-Pearce, 2020; Singh et al., 2024) and the mitigation (Sheikholeslami et al., 2021; Vuong et al., 2022; Yang et al., 2020; Zhang et al., 2020; Sun et al., 2022; Gong et al., 2022) of blockages. Tang et al. (2021) derived the shadow area caused by a cylindrical blockage. Hosseinianfar and Brandt-Pearce (2020) proposed an optimization method to locate a pedestrian who blocks the LOS signal by modeling the person as a cylinder. Sheikholeslami et al. (2021) used a Convolutional Neural Network (CNN), along with inertial sensors, to predict locations during blockages in simulation. Vuong et al. (2022) combined trilateration-based RSS with an NLOS fingerprinting model to deal with potentially blocked situations. Yang et al. (2020) used Pedestrian Dead Reckoning (PDR) to estimate the velocity and predict locations during blockages. Isam Younus et al. (2021) discussed the impact of blocking on the received power distribution and positioning but did not give a solution. Zhang et al. (2020) proposed a partial-RSS-assisted inertial navigation system with a Recurrent Neural Network (RNN), but both their VLP and inertial models were simplified to a 2-Degree-of-Freedom (2DoF) situation. Our previous work (Sun et al., 2022) used a robust graph optimization method to smooth the RSS data and resist the errors caused by short-time blockages but did not detect them. To our knowledge, there are no studies with real test data on blockage handling using RSS.
To improve the robustness of a PD-based VLP system, aiding with INS is a reasonable choice. Tightly coupled, as opposed to loosely coupled, is a concept that originated in the field of GNSS/INS integrated navigation (Groves, 2008; Li et al., 2021; Zhuang et al., 2023). Figure 1 shows the differences between the tightly (Zhang et al., 2020; Kim et al., 2022) and loosely (Li et al., 2017; Zou et al., 2017; Li et al., 2023) coupled VLP/INS schemes used in previous studies. Both tightly and loosely coupled systems take advantage of the IMU, but tightly coupled models can better resist IMU errors and can output continuous navigation results even when blockages happen. The system of Zhang et al. (2020) is tightly coupled to cope with light blockages, but it is two-dimensional because the receiver is assumed to be levelly placed. The work (Zou et al., 2017) fuses the PD’s positions with INS. The works (Li et al., 2017; Kim et al., 2022; Li et al., 2023) fuse two-dimensional PDR rather than three-dimensional INS with VLP, in which Kim et al. (2022) used an IMU to determine the inclinations. Additionally, the work (Zhuang et al., 2024) simply fuses INS with a smartphone’s ambient light sensor. Although many camera-based systems (e.g., Liang et al. (2019)) discuss VLP/INS fusion, to the best of our knowledge, there is no 6DoF tightly coupled VLP/INS system that uses PDs to solve the problems of both inclination estimation and blockage handling.
In this work, we propose a tightly coupled VLP/INS integrated navigation system that solves the problems of attitude (inclination and heading) estimation and signal blockage. Compared with the works (Zou et al., 2017; Yasir et al., 2014; Wang et al., 2018; Sheikholeslami et al., 2021) that use IMU sensors alone to estimate inclinations, in our work both light and IMU measurements contribute to attitude estimation to improve robustness. The contributions are as follows:
- We propose a 6DoF tightly coupled VLP/INS system to solve the pose (position and attitude) of PDs. The observability of the state using RSS measurements is also studied.
- To eliminate the LOS blockage, we propose a blockage detection method to pick out the interfered RSS measurements and evaluate it with real testbed data.
- We propose a method to simultaneously estimate the PD’s pose and the locations of some unknown LEDs.
We organize the rest of this article as follows. The “Tightly Coupled VLP/INS Integrated Navigation System and Methods” section introduces the basic structure of our system, defines the coordinate systems, and presents the methodology, including the quaternion-based Lambertian model, weight determination, blockage detection, IMU pre-integration, the graph optimization model, and a method to estimate unknown LED parameters. The “Results and Analysis” section presents the simulations and experiments, including the experimental setup for VLP/INS integrated navigation, the experimental results, analysis, and discussion. The conclusions are presented in the “Conclusion” section.
Tightly coupled VLP/INS integrated navigation system and methods
System overview
The structure of the proposed tightly coupled VLP/INS integrated navigation system is shown in Fig. 2. Different from the traditional tightly coupled VLP/INS fusion scheme shown in Fig. 1, we add a blockage detection module to improve accuracy and robustness. Figure 3 shows the simplified application scene of our system, where LEDs are installed on the ceiling of a room and the receiver moves on the floor. For this scene, we assume that the height of the mobile robot is constant when it operates on a fixed floor.
Compared to loosely coupled schemes, as illustrated in Fig. 1, tightly coupled VLP/INS integrated navigation systems demonstrate greater resilience when addressing issues of inclination and signal blockage. In a tightly coupled system, inclinations are estimated using measurements from both the IMU and VLP, whereas in a loosely coupled system, they rely solely on IMU measurements. Under conditions of severe blockage, a loosely coupled VLP/INS system may diverge due to the VLP module’s inability to function without sufficient measurements. In contrast, a tightly coupled system can maintain stability, as even a limited number of measurements help constrain divergence.
To start with, we present the notations used in this article. We define \((\cdot )^u\) as the indoor frame, whose origin is at the corner of a room, and axes are based on the orientation of the walls (drawn in red in Fig. 3). \((\cdot )^b\) is the body frame, which we define based on the position and orientation of the IMU (drawn in blue in Fig. 3). \(b_k\) means the body frame at time \(t_k\). \((\cdot )^v\) is the VLP frame, which we define based on the surface of the PD and the vehicle’s travel direction (forward-right-down, drawn in green in Fig. 3). \(v_k\) means the VLP frame at time \(t_k\).
Quaternion-based Lambertian model
In this paper, we consider a system containing N LEDs and a single-PD-based VLP receiver. A PD can collect optical signals from several LEDs simultaneously, forming a time series of amplitudes. To obtain RSS values, we distinguish the optical signal of each LED based on its known modulation frequency. To this end, we adopt the Discrete Fourier Transform (DFT) to separate the LED signals in the frequency domain and obtain the l-th LED’s RSS value \(P_{l}\) by measuring the amplitude peak. Within the FOV of the LEDs and the PD, the received optical power can be modeled with the Lambertian law (Kahn & Barry, 1997):
where \(D_l\) is the distance between the l-th LED and the PD, \(A_{\rm{R}}\) is the effective area of the PD, \(P_{\rm{T}l}\) is the optical power of the l-th LED, \(\theta\) is the angle of irradiance from the l-th LED, and \(\psi\) is the angle of incidence at the receiver, as shown in Fig. 4. \(T_s(\psi )\) is the gain of an optical filter, and \(g(\psi )\) is the gain of an optical concentrator placed in front of the detector; in usual applications, \(T_s(\psi )=g(\psi )=1\). \(m_l\) is the Lambertian order of the l-th LED chip.
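For concreteness, the following sketch (a minimal illustration with assumed window length and sampling rate, not the authors’ implementation) shows how a per-LED RSS value can be extracted from a window of PD samples by taking the single-sided DFT amplitude at each LED’s known modulation frequency, as described at the beginning of this subsection.

```python
import numpy as np

def extract_rss(pd_samples, fs, led_freqs):
    """Estimate per-LED RSS from one window of PD voltage samples.

    pd_samples: 1-D array of ADC/voltage readings (one DFT window)
    fs:         sampling rate in Hz (must exceed twice the highest LED frequency)
    led_freqs:  list of known LED modulation frequencies in Hz
    Returns a dict {frequency: amplitude}; the amplitude peak is taken as RSS P_l.
    """
    n = len(pd_samples)
    spectrum = np.fft.rfft(pd_samples - np.mean(pd_samples))  # remove DC (ambient light)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amplitudes = 2.0 * np.abs(spectrum) / n                   # single-sided amplitude

    rss = {}
    for f in led_freqs:
        k = np.argmin(np.abs(freqs - f))   # nearest DFT bin to the LED's frequency
        rss[f] = amplitudes[k]
    return rss

# Example: five LEDs modulated at distinct frequencies, a 1-s window (sampling rate assumed)
# rss = extract_rss(samples, fs=14000, led_freqs=[1800, 2500, 3200, 3750, 5000])
```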
To calculate the cosine functions, we use the dot product of a unit vector and a LOS vector,
where \(\varvec{n}^u\) is the normal of the receiver plane, and \(\varvec{n}_{l}^u\) is the unit vector in the direction opposite to the radiating direction of the l-th LED. Both \(\varvec{n}^u\) and \(\varvec{n}_{l}^u\) are taken in the upward direction. For indoor positioning, LEDs are usually installed on the ceiling, so \(\varvec{n}_{l}^u=[0,0,1]\). \(\varvec{D}_l^u\) is the three-dimensional vector from the PD to the l-th LED. Thus, equation (1) can be rewritten as
In this paper, we use rotation matrices \(\varvec{C}\) and Hamilton quaternions \(\varvec{q}\) to represent the attitude. \(\varvec{n}^u\) is determined by the receiver’s attitude,
where the rotation matrix \(\varvec{C}^u_{v}\) is the Direction Cosine Matrix (DCM) from the v-frame to the u-frame, and \(\varvec{n}^{v}=[0,0,1]\). To express \(\varvec{n}\) with a quaternion, for \(\varvec{q}^u_{v}=q_0+q_1\varvec{i}+q_2\varvec{j}+q_3\varvec{k}\), we can convert \(\varvec{q}^u_{v}\) to \(\varvec{C}^u_{v}\) based on quaternion theory and obtain the following expression:
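The conversion above and the resulting Lambertian prediction of equations (1)-(5) can be sketched as follows (a minimal illustration assuming ceiling-mounted, downward-facing LEDs with \(\varvec{n}_{l}^u=[0,0,1]\) and \(T_s(\psi )=g(\psi )=1\); not the authors’ code):

```python
import numpy as np

def quat_to_dcm(q):
    """Hamilton quaternion q = [q0, q1, q2, q3] (scalar first) to the DCM C^u_v."""
    q = np.asarray(q, dtype=float)
    q0, q1, q2, q3 = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(q2**2 + q3**2), 2*(q1*q2 - q0*q3),     2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),     1 - 2*(q1**2 + q3**2), 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),     2*(q2*q3 + q0*q1),     1 - 2*(q1**2 + q2**2)],
    ])

def lambertian_rss(p_pd_u, p_led_u, q_uv, P_T, m, A_R):
    """Predicted RSS for one LED given the PD position, LED position, and PD attitude."""
    n_u = quat_to_dcm(q_uv) @ np.array([0.0, 0.0, 1.0])   # PD normal n^u = C^u_v [0,0,1]^T
    D_vec = p_led_u - p_pd_u                              # LOS vector D_l^u (PD -> LED)
    D = np.linalg.norm(D_vec)
    cos_psi = np.dot(n_u, D_vec) / D                      # incidence angle at the PD
    cos_theta = D_vec[2] / D                              # irradiance angle, since n_l^u = [0,0,1]
    if cos_psi <= 0 or cos_theta <= 0:
        return 0.0                                        # outside the illuminated half-space
    return (m + 1) * A_R * P_T * (cos_theta ** m) * cos_psi / (2 * np.pi * D ** 2)
```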
Disturbance model and observability
RSS measurements are related to both position and attitude, but not all state components are observable and worth estimating. To clarify this, we take the full differential of equation (5) with respect to the position disturbance \(\rm{d}\varvec{r}\) and the angle disturbance \(\rm{d}\varvec{\phi }\),
where the derivation is given in the Appendix.
Equation (8) illustrates how the position and attitude are influenced by RSS errors, which can be caused by light blockages, multipath, ambient light, and noise (Zhuang et al., 2018; Yang et al., 2024). Indoor mobile robots usually move on a planar floor; therefore, we simplify (8) to the 2D situation. To make the optimizer concentrate more on the planar positions, we fix the height of the PD during the optimization with RSS measurements, so the 3-dimensional \(\rm{d}\varvec{r}\) is reduced to the 2-dimensional \(\rm{d}\varvec{s}\). Since \(\varvec{n}_{l}^u=[0,0,1]\), equation (8) can be reduced to
where \(\left( \varvec{n}^u\right) _{xy}\) denotes the x and y components of \(\varvec{n}^u\), and \(\varvec{s}_l^u\) and \(\varvec{s}^u\) are the x and y components of the positions of the l-th LED and the PD, respectively. In equation (9), the height of the PD is unconstrained since \(\rm{d}\varvec{s}\) is a planar position. To prevent the height estimate from diverging, we design height constraints, which are described in a subsequent subsection.
Apart from the height, the heading angle also needs special treatment. According to the Appendix, \(\rm{d}\varvec{\phi }\) is the attitude disturbance expressed in the u-frame. Since \((\varvec{D}_l^u\times \varvec{n}^u) \cdot \varvec{n}^u=0\) always holds, the heading angle is unobservable from a single VLP RSS measurement. There is an intuitive explanation for this phenomenon: when we spin the PD within its own plane (i.e., about its normal), the RSS remains unchanged. Therefore, VLP-alone positioning can hardly solve the heading; aiding from other sensors (e.g., inertial sensors) is necessary. Additionally, for an IMU that does not provide magnetic measurements, the initial heading needs to be determined by external equipment.
Blockage detection
Since visible light cannot penetrate opaque objects, light blockages can severely damage the RSS measurements. To prevent this from happening, blockage detection is necessary to exclude bad observations.
In a reflection-free and interference-free environment, the RSS would be zero when a blockage happens. In real applications, however, the RSS is calculated with the Discrete Fourier Transform (DFT) over a window of a certain length, as explained in Fig. 5. Taking one second of voltage data as an example, if a blockage occupies \(0.5\ \rm{s}\) of the DFT window, the RSS measurement may drop to about half of the normal value, but not to zero. Thus, we cannot simply detect blockages by finding zero measurements. In this paper, we propose a Descending-Rising Detection (DRD) method to deal with the raw PD observations.
First, we introduce the basic principle: when a blockage happens, the RSS always suffers a severe descent; when the blockage ends, the RSS rapidly rises back to a normal level. The process is shown in Fig. 6. In this algorithm, we use a threshold on the RSS changing rate ratio to separate blocked signals from unblocked ones, and a blockage detection tag to indicate whether a blockage is ongoing. To approximate the RSS changing rate ratio with discrete RSS data, it is more accurate to difference the high-rate (\(>100\ \rm{Hz}\)) RSS samples,
The critical part lies in the threshold to judge the RSS changing rate ratio. For 3D positioning applications, based on equation (8), we have
where \(\omega _{{\max }}\) and \(v_{{\max }}\) are the maximum possible angular rate and velocity determined by the vehicle state at that moment. The right side of inequality (11) can be used as the threshold. In a 2D situation with no attitude variation, inequality (11) can be reduced to
where s and h are the horizontal and vertical distances between an LED and the PD. To accurately obtain the time derivative of the RSS, \(P^{'}(t)\), we need to differentiate RSS measurements sampled at a high rate, e.g., 120 Hz in our experiments.
Blocked RSS measurements can be either excluded or significantly down-weighted when constructing the optimization cost function. The down-weighting strategy is demonstrated in the subsequent subsection.
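A minimal sketch of the DRD logic is given below; the threshold values shown are placeholders, whereas a real system would derive them from inequalities (11) or (12) and the vehicle’s maximum velocity and angular rate. This is an illustration of the idea, not the authors’ implementation.

```python
import numpy as np

def drd_detect(rss, fs, drop_thresh, rise_thresh):
    """Descending-Rising Detection on one LED's high-rate RSS stream.

    rss:         1-D array of RSS samples for one LED (e.g., 120 Hz)
    fs:          sampling rate in Hz
    drop_thresh: negative changing-rate ratio marking the start of a blockage
    rise_thresh: positive changing-rate ratio marking the end of a blockage
    Returns a boolean array, True where the sample is judged blocked.
    """
    rate_ratio = np.diff(rss) * fs / np.maximum(rss[:-1], 1e-9)  # relative change per second
    blocked = np.zeros(len(rss), dtype=bool)
    tag = False                                  # blockage ongoing?
    for i, r in enumerate(rate_ratio, start=1):
        if not tag and r < drop_thresh:          # severe descent -> blockage starts
            tag = True
        elif tag and r > rise_thresh:            # rapid rise -> blockage ends
            tag = False
        blocked[i] = tag
    return blocked

# Example with assumed thresholds; blocked epochs are then excluded or heavily
# down-weighted (large covariance) when building the cost function.
# mask = drd_detect(rss_120hz, fs=120, drop_thresh=-5.0, rise_thresh=5.0)
```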
IMU Pre-integration and residuals
To connect neighboring states, we integrate the IMU measurements over a certain time interval using the pre-integration method. Since the sampling rate of the VLP measurements is always much lower than that of the IMU, IMU pre-integration is a useful tool to align the IMU residuals and VLP residuals in time. The commonly used IMU pre-integration model (Qin et al., 2018) integrates accelerometer and gyroscope measurements within the time interval \(\left[ t_k,t_{k+1}\right]\) in the body frame. In our VLP/INS integration scheme, we compute all states in the VLP frame. To realize this, we use the DCM \(\varvec{C}_b^v\) to transform the i-th acceleration \(\hat{\varvec{a}}_i^b\) and angular rate \(\hat{\varvec{\omega }}_i^b\) measurements and their biases from the b-frame to the v-frame, and then calculate the pre-integration vector \(\hat{\varvec{z}}_{v_{k+1}}^{v_{k}}=\left[ \hat{\varvec{\alpha }}_{v_{k+1}}^{v_{k}},\hat{\varvec{\beta }}_{v_{k+1}}^{v_{k}},\hat{\varvec{\gamma }}_{v_{k+1}}^{v_{k}}\right]\), where \(\hat{\varvec{\alpha }}_{v_{k+1}}^{v_{k}}\), \(\hat{\varvec{\beta }}_{v_{k+1}}^{v_{k}}\), and \(\hat{\varvec{\gamma }}_{v_{k+1}}^{v_{k}}\) are the pre-integrations of position, velocity, and attitude, respectively. The DCM \(\varvec{C}_b^v\) can be determined from the installation angles of the IMU and the PD.
The relationship among the time stamps is shown in Fig. 7. In each time interval \([t_i,t_{i+1}]\), there is one accelerometer measurement and one gyroscope measurement. Approximately, we take the reference frame at each time \(t_i\) to be the VLP frame \(v_k\). Equation (13) presents the step-by-step propagation formulas (Qin et al., 2018).
where \(\updelta t\) is the time difference between \(t_i\) and \(t_{i+1}\), \(\varvec{b}_{a_i}\) and \(\varvec{b}_{g_i}\) are IMU biases, \(\varvec{C}(\cdot )\) is calculated based on quaternions.
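The sketch below illustrates one propagation step of the position, velocity, and attitude pre-integration terms in the v-frame, in the spirit of equation (13). It is a simplified Euler-type illustration without bias Jacobians or noise propagation, not the exact formulas of Qin et al. (2018); it reuses quat_to_dcm from the Lambertian-model sketch above.

```python
import numpy as np

def quat_multiply(p, q):
    """Hamilton product p (x) q, scalar-first convention."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return np.array([
        p0*q0 - p1*q1 - p2*q2 - p3*q3,
        p0*q1 + p1*q0 + p2*q3 - p3*q2,
        p0*q2 - p1*q3 + p2*q0 + p3*q1,
        p0*q3 + p1*q2 - p2*q1 + p3*q0,
    ])

def propagate_preintegration(alpha, beta, gamma_q, a_v, w_v, b_a, b_g, dt):
    """One step of IMU pre-integration in the v-frame.

    alpha, beta: accumulated position/velocity pre-integration terms (3-vectors)
    gamma_q:     accumulated attitude pre-integration as a quaternion [q0, q1, q2, q3]
    a_v, w_v:    accelerometer/gyroscope measurements already rotated into the v-frame
    b_a, b_g:    accelerometer and gyroscope biases (assumed constant over the window)
    dt:          IMU sampling interval (t_{i+1} - t_i)
    """
    C = quat_to_dcm(gamma_q)                 # rotation from the current frame to v_k
    a = C @ (a_v - b_a)                      # bias-corrected specific force in v_k
    alpha = alpha + beta * dt + 0.5 * a * dt ** 2
    beta = beta + a * dt
    # attitude update: gamma <- gamma (x) dq, with dq from the small rotation (w - b_g) dt
    dtheta = (w_v - b_g) * dt
    dq = np.concatenate(([1.0], 0.5 * dtheta))
    gamma_q = quat_multiply(gamma_q, dq)
    return alpha, beta, gamma_q / np.linalg.norm(gamma_q)
```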
In our tightly coupled RSS-based VLP/INS integrated navigation system, we choose a 16-dimensional vector to form the state,
Using the states at times \(t_k\) and \(t_{k+1}\), the residual for the pre-integrated IMU measurement can be defined based on the pre-integration \(\hat{\varvec{z}}_{v_{k+1}}^{v_{k}}\) and the states \(\varvec{{\mathcal {X}}}=\left[ \varvec{x}_0,\varvec{x}_1,\ldots ,\varvec{x}_{n} \right]\) within the sliding window (Qin et al., 2018):
where \(\varvec{C}^{v_k}_u\) is the DCM matrix to transform vectors from u-frame to v-frame, \(\varvec{g}^u\) is the gravity vector in u-frame, \(\Delta t_k\) is the time difference of the time interval \(\left[ t_k,t_{k+1}\right]\), \([\cdot ]_{xyz}\) means the vector constructed by the imaginary part of a quaternion. The covariance of \(\varvec{r}_{{\mathcal {B}}}\left( \cdot \right)\) is propagated by:
until the time \(t_{i+1}\) reaches \(t_{k+1}\). The calculation of the matrices \(\varvec{\varPhi }_{i+1}\) and \(\varvec{Q}_{i+1}\) can be found in Tang et al. (2022).
Graph optimization for VLP/INS integrated navigation
As introduced in the “Introduction” section, our tightly coupled VLP/INS navigation method fuses IMU measurements with VLP RSS observations rather than with VLP positioning results. To find a globally optimal navigation solution, we use a sliding window to optimize the IMU and RSS measurements within a time interval of a suitable length, as illustrated in Fig. 8. In real-time applications, the sliding window slides to include the latest measurements and outputs the latest navigation results. The overall algorithm is summarized in Algorithm 1. Similar to the pre-integration factor, we calculate the VLP residual based on the measurements \({\hat{\varvec{P}}}_l^k\) and the states \(\varvec{{\mathcal {X}}}\):
where \(\varvec{D}_l^u\), \(\varvec{n}^u\), and \(\varvec{D}_l\) are determined by the states \(\varvec{{\mathcal {X}}}\), which is stated in the previous subsection. When computing \(\varvec{D}_l^u\), the state \(\varvec{p}^u\) needs to be corrected by the lever-arm since the PD and the IMU are not at the same place:
where \(\varvec{\ell }^{b}\) is the lever-arm from the IMU center to the PD center in the b-frame. When the relative pose between the PD and the IMU is fixed, \(\varvec{C}^{v}_b\) is constant. However, the quaternion corresponding to \(\varvec{C}^{u}_v\) is part of the state; therefore, the lever-arm contribution to its disturbance needs to be taken into account. We have (Chen et al., 2021):
The definition of the antisymmetric (skew-symmetric) matrix follows Solà (2017).
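For reference, the lever-arm correction described above can be sketched as follows (a minimal illustration assuming the installation matrix \(\varvec{C}^{v}_b\) and lever-arm \(\varvec{\ell }^{b}\) are known from calibration; not the authors’ implementation):

```python
import numpy as np

def pd_position(p_imu_u, q_uv, C_vb, lever_b):
    """Correct the IMU-centred state position to the PD centre in the u-frame.

    p_imu_u: IMU position in the u-frame (part of the state)
    q_uv:    attitude quaternion of the v-frame w.r.t. the u-frame (part of the state)
    C_vb:    fixed DCM from the b-frame to the v-frame (installation angles)
    lever_b: lever-arm from the IMU centre to the PD centre, expressed in the b-frame
    """
    C_uv = quat_to_dcm(q_uv)          # from the Lambertian-model sketch above
    return p_imu_u + C_uv @ (C_vb @ lever_b)
```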
The disturbance model (9) provides the partial derivatives of the RSS with respect to position and attitude, where the attitude is expressed in the form of Euler angles. To suit the state expressed with quaternions, we have the differentiation (Tang et al., 2022; Solà, 2017):
In the state optimization process, it is common practice to optimize a quaternion’s minimal space, i.e., a 3D angle vector (Tang et al., 2022; Qin et al., 2018), and then update the quaternion with the manifold-form multiplication (20). Based on equation (9), the Jacobian component of an RSS measurement with respect to a quaternion can be expressed as:
Based on (8), (19) and (21), the Jacobian matrix for one VLP residual is calculated as follows:
For 2D situations, the Jacobian matrix can be calculated based on (9) and (21). In real applications, \(\varvec{H}\) is an \(N\times 16\) matrix, where N is the number of LEDs at each time stamp.
We minimize the sum of the Mahalanobis norm of all measurement residuals to obtain a maximum posterior estimation:
where \(\varvec{r}_{{\mathcal {B}}}\left( \cdot \right)\) and \(\varvec{r}_{{\mathcal {V}}}\left( \cdot \right)\) are calculated by equations (15) and (17), and \(\varSigma _l\) is the covariance of the l-th LED’s RSS; for blocked measurements and signals out of the FOV, this covariance can be set to a large value (empirically, 99 in our tests). \(\varvec{r}_{{\mathcal {C}}}\left( \cdot \right)\) denotes constraints, which can be Non-Holonomic Constraints (NHC) (Zhuang & El-Sheimy, 2016) or height constraints. In the 2D positioning model, the height is fixed and not observable in the VLP system (mentioned in the “Disturbance Model and Observability” subsection); thus, it needs to be constrained as:
where \(h_{\rm{PD}}\) is the height of the PD that should be measured in advance. NHC is effective for a robot without lateral velocity while height constraint is effective in 2D positioning situations. In our simulation (3D), NHC is used, while in physical experiments (2D), both NHC and height constraints are used.
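A brief sketch of the two constraint residuals is given below; the weights are illustrative placeholders, and the NHC form simply penalizes the lateral and vertical velocity components in the v-frame following the idea of Zhuang and El-Sheimy (2016). It is not the authors’ implementation.

```python
import numpy as np

def height_residual(p_u, h_pd, sigma_h=0.01):
    """2D positioning: constrain the PD height to its pre-measured value h_pd."""
    return (p_u[2] - h_pd) / sigma_h

def nhc_residual(v_u, q_uv, sigma_v=0.05):
    """Non-Holonomic Constraint: a ground robot has (nearly) zero lateral and
    vertical velocity in its own v-frame (forward-right-down)."""
    C_vu = quat_to_dcm(q_uv).T          # u-frame -> v-frame (quat_to_dcm from the earlier sketch)
    v_v = C_vu @ v_u                    # velocity expressed in the v-frame
    return np.array([v_v[1] / sigma_v,  # right (lateral) component ~ 0
                     v_v[2] / sigma_v]) # down (vertical) component ~ 0
```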
To obtain the optimal state vector \(\varvec{{\mathcal {X}}}\), we use the Levenberg-Marquardt algorithm (Yu et al., 2018). In real-time applications, we should output the last state \(\varvec{x}_{n}\) within the sliding window.
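As an end-to-end toy illustration of the sliding-window idea (using SciPy’s Levenberg-Marquardt solver instead of Ceres, and omitting the IMU, attitude, and constraint terms for brevity; the residual reuses the simplified lambertian_rss sketched earlier and is not the authors’ implementation):

```python
import numpy as np
from scipy.optimize import least_squares

def window_residuals(x_flat, rss_meas, led_positions, led_params, h_pd):
    """Stack VLP residuals r_V for all epochs in the window (toy 2D, level-PD case).

    x_flat packs the planar [x, y] position of each epoch; the PD height is fixed to h_pd.
    rss_meas is a list of per-epoch dicts {led_id: measured RSS}.
    """
    xy = x_flat.reshape(-1, 2)
    res = []
    for k, epoch in enumerate(rss_meas):
        p_pd = np.array([xy[k, 0], xy[k, 1], h_pd])
        for led_id, P_hat in epoch.items():
            P_T, m, A_R = led_params[led_id]
            P_pred = lambertian_rss(p_pd, led_positions[led_id],
                                    np.array([1.0, 0.0, 0.0, 0.0]),  # level PD
                                    P_T, m, A_R)
            res.append(P_hat - P_pred)        # VLP residual r_V
    return np.array(res)

# sol = least_squares(window_residuals, x0, method="lm",
#                     args=(rss_meas, led_positions, led_params, h_pd))
```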
Unknown LED parameters
Apart from pose graph optimization, it is feasible to simultaneously estimate the poses and the parameters of some unknown LEDs in our graph optimization system. Since these parameters remain constant throughout the measurement period, treating them as unknowns does not add much burden to the optimization problem. When some LEDs’ locations are unknown, the state \(\varvec{{\mathcal {X}}}\) needs to be extended to \(\left[ \varvec{{\mathcal {X}}}, \varvec{{\mathcal {P}}}\right]\), where \(\varvec{{\mathcal {P}}}\) is the set of LED locations to be estimated; thereby the VLP residual function becomes \(\varvec{r}_{{\mathcal {V}}}\left( {\hat{P}}_l^k, \varvec{{\mathcal {X}}}, \varvec{{\mathcal {P}}}\right)\). To optimize the LED locations, we calculate the Jacobian matrix of \(\varvec{r}_{{\mathcal {V}}}\) with respect to \(\varvec{{\mathcal {P}}}\) based on equation (17):
The precision of the LED locations depends partly on the motion of the robot. When the location of one LED is unknown, the discrete positions of the moving PD can be regarded as signal transmitters and the unknown LED can be regarded as a receiver. If the PD-LED direction changes substantially, the LED position can be accurately estimated; otherwise, it is difficult to solve. This situation can be quantified by the Dilution of Precision (DOP) (Santerre et al., 2017), a concept that originated from the Loran-C navigation system and GNSS. In our graph optimization scheme, the position sequence within a sliding window can be used to locate the unknown LEDs. In this model, the precision of an LED position is determined by the trajectory’s precision and the DOP value.
The idea of geometric DOP is to state how errors in the measurements affect the final state estimation. In a trilateration problem, when the known points are far apart, the geometry is strong and the DOP value is low (shown in Fig. 9). The mathematical expression of the DOP can be found in Santerre et al. (2017). In the “Estimating Unknown LEDs” subsection, we present test results where the locations of some LEDs are unknown.
When a user trajectory is used to locate an unknown LED, the localization precision is determined by the trajectory’s precision and the DOP value. Scattered trajectory points give strong geometry and a low DOP value, while aggregated points give a large DOP value. If two trajectories have the same precision, the one with a low DOP value will lead to a higher localization precision of the unknown LED.
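The geometry argument above can be quantified with the standard range-only DOP computed from unit LOS vectors, as in GNSS practice (a generic sketch, not a formula taken from this paper):

```python
import numpy as np

def led_dop(trajectory_pts, led_pos):
    """Geometric DOP of an unknown LED located from a set of trajectory points.

    trajectory_pts: (n, 3) array of PD positions acting as 'transmitters' (n >= 3)
    led_pos:        (3,) current estimate of the LED position
    Returns sqrt(trace((H^T H)^-1)); a low value indicates strong geometry.
    """
    diff = trajectory_pts - led_pos
    H = diff / np.linalg.norm(diff, axis=1, keepdims=True)  # unit LOS vectors
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

# Scattered trajectory points around the LED give a low DOP (well-constrained LED estimate);
# points clustered to one side give a high DOP (poorly constrained estimate).
```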
Results and analysis
Experiment and simulation
We conducted both simulations and real-world experiments to test our tightly coupled VLP/INS graph optimization system: the simulation excludes indoor error sources and evaluates the 3D positioning performance, while the real experiments estimate 2D positions and inclinations. The real-world experiments were carried out at two sites, both empty rooms, one of \(3.8\ \rm{m} \times 6.3\ \rm{m} \times 2.8\ \rm{m}\) and the other of \(6\ \rm{m} \times 8.4\ \rm{m} \times 2.8\ \rm{m}\), denoted as experiments A and B, respectively.
Experiment A is an ablation study that evaluates the 2D positioning and inclination estimation accuracy by strictly controlling the inclinations and blockages during each test. We implemented several field vehicular tests with five LEDs (CREE XLamp XM-L) mounted on the ceiling as light beacons, with coordinates of (0.35, 1.34), (3.56, 1.15), (1.71, 3.31), (3.50, 6.25), and (0.35, 5.97), respectively (unit: m, shown in Fig. 10(a)). Each lamp was modulated at a different frequency and controlled by an STM32 MicroController Unit (MCU). The LEDs were modulated at 1.8 kHz, 2.5 kHz, 3.2 kHz, 3.75 kHz, and 5 kHz, respectively, and the total transmit power was 10 W. To prevent multipath effects, which can severely damage the RSS measurements, we used several pieces of black cloth to cover the white walls.
The vehicle was a DJI RoboMaster EP mobile robot. The light sensor, a single OPT101 PD, together with an IMU, was mounted on the rotatable gimbal of the robot, as shown in Fig. 10(b) and (c). The IMU is a Honeywell HGUIDE i300, whose performance parameters are listed in Table 2. The bandwidth (i.e., the cutoff frequency for sampling) of the PD is 14 kHz and its responsivity is \(0.45\ \rm{A}/\rm{W}\). An STM32 MCU equipped with a 12-bit ADC is programmed to drive the PD (shown in Fig. 11); thus, the illuminance is sampled as an integer in the range of 0 to 4095. The MCU and the IMU were controlled by a Raspberry Pi 4B, a credit-card-sized Linux computer. Our IMU and VLP systems are synchronized based on the timestamps given by the Raspberry Pi. To vary the inclination of the PD, the gimbal can be rotated to change its pitch angle (shown in Fig. 10(b)). To test the performance on a fixed track, we called the RoboMaster Software Development Kit (SDK) Application Programming Interface (API) from a PC to control the motors with a fixed program and to subscribe to the robot position, which served as a reference for the accuracy evaluation.
To verify the suitability of using the subscribed positions as the ground truth, we performed three rectangular and two circular loop-closure tests to evaluate the accuracy of the robot positions. The lengths of these trajectories are close to those in experiment A. In each test, we measured the displacement between the start and end points of the trajectory and compared it with the displacement given by the subscribed positions. The rectangular tests show errors of \(3.2\ \rm{cm}\), \(3.9\ \rm{cm}\), and \(2.3\ \rm{cm}\), and the circular tests show errors of \(1.2\ \rm{cm}\) and \(1.3\ \rm{cm}\). Since the RoboMaster EP itself is equipped with an odometer and a camera, which are both relative positioning sources (Zhuang et al., 2023), the position errors accumulate during motion. Thus, the average positioning error should be lower than the average loop-closure error, which is \(2.4\ \rm{cm}\) in our tests.
Experiment B evaluates the performance in a large space and tests the feasibility of solving the LED and PD positions simultaneously. To our knowledge, this is the largest indoor test site for range-based positioning algorithms in the VLP literature. The receiver and transmitters of the VLP system were the same as in experiment A, while the mobile robot and the IMU were different. To effectively cover the whole site, we installed 6 LED lamps of the same type as in experiment A. The LED lamps are modulated at 1.8 kHz, 2.5 kHz, 3.2 kHz, 3.75 kHz, 4.35 kHz, and 4.75 kHz, and the modulation method is the same as in experiment A.
The mobile robot was an AgileX Scout Mini equipped with a LiDAR (Velodyne VLP-16) and an IMU (Xsens MTi-3), as shown in Fig. 12: the LiDAR provides the ground-truth positions by integration with the IMU using the FAST-LIO2 (Xu et al., 2022) program. UWB and BLE devices are also installed to form an indoor positioning platform. To test the performance of solving inclinations, we installed the PD and the IMU on a rotatable bracket (shown in Fig. 12). Figure 11 shows the hardware connections, where a Jetson Xavier Developer Kit controls the sensors and collects the VLP, IMU, and LiDAR data. All sensors are synchronized through the Internet. The IMU characteristics are presented in Table 2.
In our study, the Ceres solver (Agarwal et al., 2022) is used to solve the graph optimization problem. The initial positions are roughly calculated by VLP alone, and the initial velocities and inclinations are set to zero. In particular, the initial heading angles are measured with a smartphone’s built-in compass, since the “Disturbance Model and Observability” subsection has demonstrated that the heading is unobservable from a single VLP RSS measurement.
We implemented a simulation test, experiment A (“Normal Tests without Inclination and Blockage”, “Tests with Inclination”, and “Tests with Blockages” subsections), and experiment B (“Large Scale Tests” and “Estimating Unknown LEDs” subsections). In experiment A, we ran two fixed trajectories (rounded rectangular and S-shaped), each three times: (1) normal tests without inclination and blockage; (2) tests with inclination; and (3) tests with blockages. In experiment B, we test the inclination estimation and blockage detection in a large space and estimate unknown LEDs. The datasets of experiment A are open-sourced on GitHub.
Simulation
To test the correctness of the 3D pose estimation and blockage detection algorithms, we built a simulation environment with sloped floors to create height changes. The simulated robot moved uphill for a while and the PD suffered several blockages. To make the data as realistic as possible, we added white and flicker noise to the IMU data at a level 5 times higher than that in Table 2, and white noise to the VLP RSS data (1-\(\sigma\) strength of 0.1 lux). The size of the whole scenario is \(5\ \rm{m} \times 5\ \rm{m} \times 5\ \rm{m}\) and the start point is at (5, 0, 0) (m).
Fig. 13 3D simulation results processed by our graph optimization system. a Tightly coupled integration results compared to the reference trajectory, with the LED locations also plotted; b pitch angle solved by our system and its ground truth; c RSS measurements suffering from blockages; and d blockages detected by our DRD method
The results in Fig. 13 show that our system behaves well when estimating the inclination angle and detecting the blocked RSS measurements. The gray boxes in Fig. 13(c) and (d) mark the periods when blockages happened. Figure 13(d) shows the blockages judged by our DRD method; in each curve, an odd value on the y-axis indicates a blockage while an even value indicates an LOS signal; these values are down-sampled to 1 Hz during the positioning process since the RSS measurements are at 1 Hz. The average 3D positioning error is \(6.2\ \rm{cm}\) and the average inclination error is \(0.08^{\circ }\).
Normal Tests without Inclination and Blockage
The normal tests are control groups for the subsequent inclination tests and blockage tests; in these tests, we compare our Tightly Coupled (TC) system with Loosely Coupled (LC) VLP/INS and with VLP alone. The LC VLP/INS system is selected as the State-Of-The-Art (SOTA) approach and adopts a model similar to Zou et al. (2017) in our tests. As Fig. 1 shows, the LC system uses the integration of IMU data to determine the attitude and solves the positions using RSS data. Different from our TC system, where the raw RSS measurements are fused with IMU data, the LC system fuses the positions with IMU data. As another control group, the VLP-alone system does not use IMU data: in the tests without inclinations, we assumed the inclination to be zero and simplified the Lambertian model (5) as:
where \(h_l\) is the constant height difference between the l-th LED and the PD. The distance \({D_l}\) can be directly derived from the RSS measurement \(P_l\).
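For this level-PD case, \(\cos \theta =\cos \psi =h_l/D_l\), so the Lambertian model collapses to \(P_l \propto h_l^{m_l+1}/D_l^{m_l+3}\) and the distance can be inverted in closed form. A minimal sketch, assuming the LED power, Lambertian order, and PD effective area are known:

```python
import numpy as np

def distance_from_rss(P_l, h_l, P_T, m, A_R):
    """Invert the level-PD Lambertian model for the LED-PD distance.

    With zero inclination, cos(theta) = cos(psi) = h_l / D_l, so
    P_l = (m + 1) * A_R * P_T * h_l**(m + 1) / (2 * pi * D_l**(m + 3)).
    """
    return ((m + 1) * A_R * P_T * h_l ** (m + 1) / (2 * np.pi * P_l)) ** (1.0 / (m + 3))
```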
We used the positions given by the DJI RoboMaster’s SDK API as the reference (red dotted curves in Fig. 14), whose sampling rate is 1 Hz. The start point, end point, and initial moving direction are plotted in the figure. We also calculated the Cumulative Distribution Functions (CDFs, Fig. 14(c) and (d)) based on that reference. To evaluate the positioning errors, we calculated the distances between the solved positions and the reference positions. The trajectories and CDFs show that the accuracies of the TC, LC, and VLP-alone results are similar.
Tests with Inclination
Our system optimizes positions and inclinations simultaneously within a graph optimization framework. To test our system’s ability to deal with inclination changes, we ran the same trajectories as in the “Normal Tests without Inclination and Blockage” subsection but changed the gimbal’s inclination. In these inclination tests, we called an API function of the robot to rotate its gimbal to a pitch angle of \(10^{\circ }\) during the movement periods. During the motion of the robot, rapid turns caused the inclination to change; thus, the pitch and roll angles were not fixed. However, the roll angles were small and could be assumed to be zero. Due to the introduction of the inclination, the VLP-alone system must additionally solve the pitch and heading angles, for which we used the propagation model (5) and assumed:
where \(\theta _z\) is the heading, \(\theta _y\) is the pitch angle of the gimbal we set, and \(x_l\) and \(y_l\) compose the 2D position difference. \(\theta _z\), \(\theta _y\), x, and y are the variables to be optimized while \(h_l\) is the constant height difference. During movement, the true values of the roll and pitch angles were hard to obtain, but in the static phases they were accurately measured: the roll angles are zero, and the pitch angle differences are shown in Table 3. The true values of the heading angles during movement were calculated based on the 2D ground truth positions (red dots in Fig. 15(g) and (h)).
The TC, LC, and VLP-alone results are shown in Fig. 15. In Fig. 15(a), the accuracy of the TC results in the lower left corner of the trajectory surpasses that of the LC results because the LC attitude errors accumulate from the gyroscope integration. In both Fig. 15(a) and (b), the VLP-alone results deviate considerably from the ground truth since the VLP-alone system can hardly solve the attitudes as the IMU-aided TC and LC systems do. This conclusion can be drawn from Fig. 15(c)-(h): compared with the attitudes (pitch, roll, and heading) solved by the TC system (green curves), those solved by the VLP-alone system (purple curves) are unreliable. In particular, the heading angles (purple curves in Fig. 15(g) and (h)) are completely wrong, which is consistent with the conclusion drawn in the “Disturbance Model and Observability” subsection that the heading angle is not observable. In Fig. 15(i) and (j), we plot the 2D-error series calculated based on the reference trajectory. We use gray boxes to mark the periods when the VLP-alone system performs worst. In these boxes, the pitch angle errors are extremely large. Overall, the wrongly solved inclination angles lead to large positioning errors in the VLP-alone system.
Fig. 15 Inclination tests’ results comparison among TC integration, LC integration, and VLP alone. The gray boxes indicate the periods when both pitch angle errors and position errors are large. a Rounded rectangular trajectory; b S-shaped trajectory; c pitch angles in rounded rectangular trajectory; d pitch angles in S-shaped trajectory; e heading angles in rounded rectangular trajectory; f heading angles in S-shaped trajectory; g roll angles in rounded rectangular trajectory; h roll angles in S-shaped trajectory; i 2D errors in rounded rectangular trajectory; and j 2D errors in S-shaped trajectory
To further examine the inclination accuracy, we compare it with the results of INS mechanization (Groves, 2008). Figure 15(c)-(f) shows the inclinations solved by our tightly coupled VLP/INS system (blue curves) and the INS mechanization results (red curves). Since the RoboMaster API cannot provide gimbal attitudes as accurate as the referenced positions, we manually measured the two pitch angles before and after each kinematic test. The differences between them are used as a reference to verify the attitudes solved by our VLP/INS fusion system. Table 3 shows that the attitudes of the VLP/INS integration are more accurate than those of pure INS. Due to IMU biases, scale factor errors, and random errors, the pure INS results gradually deviate from the true inclination.
Tests with blockages
A key advantage of our VLP/INS system is its ability to deal with light blockages. Blockages lead to the loss of RSS measurements; therefore, VLP alone sometimes can hardly work out reliable solutions, while the TC VLP/INS system can still produce smooth solutions with INS dead reckoning. To test our system’s ability to deal with blockages, we ran the same trajectories as in the “Normal Tests without Inclination and Blockage” subsection. In these tests, a pedestrian walked randomly in the room to block the visible light signals. Figure 16 shows two tests with blockages, where the TC, LC, and VLP-alone results in Fig. 16(a) and (b) all adopt the blockage detection strategy (i.e., DRD) to exclude the blocked measurements. Due to the severe damage caused by the blockages, the positions solved by VLP alone are not continuous and contain large errors. Since the LC system takes the VLP-alone positioning results as inputs, large oscillations occur in the LC navigation trajectory (some even larger than 1 m, shown in Fig. 16(c) and (d)).
To validate the need for detecting blockages, we designed a control group using VLP data only (blue points in Fig. 16(a)-(d)) in which blockages were not detected and all measurements were treated as valid. Without the blockage detection strategy, the positioning accuracy degrades severely, especially in the parts marked by the gray boxes.
To verify the effectiveness of our blockage detection method, we plot the RSS variations and detection results in Fig. 16(e)-(h). Figure 16(e) and (f) show the RSS sampled at 120 Hz, which exhibits sudden drops when blockages happen. Figure 16(g) and (h) show the blockages judged by our DRD method; in each curve, an odd value on the y-axis indicates a blockage while an even value indicates an LOS signal; these values are down-sampled to 1 Hz during the positioning process since the RSS measurements are at 1 Hz. The results show that the detection accuracy is 100%.
Fig. 16 Blockage tests’ results comparison among TC integration, LC integration, and VLP alone. The gray boxes indicate the periods when light blockages are severe. a Rounded rectangular trajectory; b S-shaped trajectory; (c), (d) their 2D errors; e the blocked RSS in rounded rectangular trajectory; f the blocked RSS in S-shaped trajectory; g the detection of blockages in rounded rectangular trajectory; and h the detection of blockages in S-shaped trajectory
In conclusion, Table 4 presents the accuracy comparison of the three groups of tests discussed in the above three subsections. Our tightly coupled system achieves an average accuracy of 9.6 cm even with NLOS conditions and PD tilting.
Large scale tests
In this section, we test our system in a large space under blockages and inclinations. The trajectories, pitch angles, and CDFs are given in Fig. 17. In Fig. 17(b), we compare the pitch angles calculated by graph optimization and by pure INS; the pitch angles calculated by VLP alone are not plotted since they are inaccurate. The initial and final pitch angles were accurately measured to be \(16.5^{\circ }\); during movement, the pitch angle may fluctuate around this value. At the end of this trajectory, the pitch angle of the TC VLP/INS deviated by \(0.2^{\circ }\), whereas the angle calculated by gyroscope integration deviated by \(1.6^{\circ }\) (Fig. 17(b)). The CDFs in Fig. 17(c) compare the 2D positioning results of the TC VLP/INS integration and pure VLP against the ground truth calculated from the LiDAR and IMU with the FAST-LIO2 program. Since VLP alone can hardly solve the inclinations and headings (discussed in the previous subsections), its accuracy is far worse than that of the integration results. We achieved a mean accuracy of 0.115 m and a largest error of 0.209 m, which is similar to experiment A. This result verifies the robustness of our system in a large space under PD inclinations and signal blockages.
Estimating unknown LEDs
In this section, we set some LEDs’ locations to be unknown and use the data of the “Large Scale Tests” subsection to simultaneously estimate the trajectory and the unknown LED locations. Each LED’s initial location was randomly set and the sliding window size was set to a large value (50 in our test). As shown in Table 5, when only one LED’s location was unknown, the locations of LEDs 3, 4, 5, and 6 were accurately estimated during the optimization, while those of LEDs 1 and 2 were not. This phenomenon can be interpreted with our hypothesis in the “Unknown LED parameters” subsection: based on Fig. 17, for example, LED 1’s DOP value is high since it is close to the edge of the trajectory, while LED 4’s DOP is relatively low. The tests with two unknown LEDs also validate this idea. However, when three LEDs’ locations were unknown, the absence of some critical LEDs (3, 4, 5, or 6) could lead to divergence of our optimizer. In the last group of Table 5, none of the three LEDs’ locations converged to stable values. Nevertheless, the missing locations of several LEDs did not significantly decrease the trajectory’s accuracy. In general, the accuracy of the estimated LED locations and that of the trajectory are positively correlated.
Further discussions
In addition to the inclinations, blockages, and IMU biases discussed in this research, other errors and effects, including IMU noise (Groves, 2008), PD noise (Hua et al., 2018), multipath (Huang et al., 2016), and ambient light (Yang et al., 2024), need further investigation. PD noise is primarily white noise (as shown in the experiments of Hua et al. (2018)), and together with the IMU noise modeled in (16), a tightly coupled VLP/INS system can reduce this noise (Zhuang et al., 2023). Multipath interference arises from light reflections, and the conventional approach to addressing it involves modeling the environment and calculating the associated effects (Huang et al., 2016). High ambient light, such as sunlight, can induce a nonlinear response between the current and light intensity; to mitigate this, the method proposed in Yang et al. (2024) utilizes a ratio model.
Conclusion
In this paper, we proposed a tightly coupled VLP/INS integrated navigation system to solve the problems of dynamic inclination estimation and blockage elimination. Innovatively, our system analyzes the observability of the attitude using the disturbance model of the Lambertian law and detects the blocked observations based on the RSS changing rate. We also prove the feasibility of simultaneously estimating the PD’s pose and some unknown LEDs’ locations. To verify the effectiveness, we conducted simulations and two groups of real-world experiments using mobile robots equipped with a low-cost PD and IMUs. Under dynamic inclination changes and light blockages, our experiments achieve an overall accuracy of around 10 cm during movement and an inclination accuracy within \(1^{\circ }\). Due to the widespread use of LEDs and the miniaturization of PDs and their control PCBs, the proposed system is suitable for consumer and industrial applications, especially for mobile robots, mechanical arms, wearable devices, etc.
Availability of data and materials
The source code of our VLP/INS fusion (loosely coupled and tightly coupled) system and part of our datasets will be available from the GitHub repository.
References
Agarwal. S., Mierle, K., & Team, T.C.S. (2022). Ceres Solver. https://github.com/ceres-solver/ceres-solver
Aparicio-Esteve, E., Hernandez, A., & Urena, J. (2021). Design, calibration, and evaluation of a long-range 3-D infrared positioning system based on encoding techniques. IEEE Transactions on Instrumentation and Measurement, 70, 1–13. https://doi.org/10.1109/TIM.2021.3089223
Cai, Y., Guan, W., Wu, Y., Xie, C., Chen, Y., & Fang, L. (2017). Indoor high precision three-dimensional positioning system based on visible light communication using particle swarm optimization. IEEE Photonics Journal, 9(6), 1–20. https://doi.org/10.1109/JPHOT.2017.2771828
Chen, Q., Zhang, Q., Niu, X., & Liu, J. (2021). Semi-analytical assessment of the relative accuracy of the GNSS/INS in railway track irregularity measurements. Satellite Navigation, 2, 1–16.
Du, P., Zhang, S., Chen, C., Alphones, A., & Zhong, W. D. (2018). Demonstration of a low-complexity indoor visible light positioning system using an enhanced TDOA scheme. IEEE Photonics Journal, 10(4), 1–10. https://doi.org/10.1109/JPHOT.2018.2841831
El-Sheimy, N., & Li, Y. (2021). Indoor navigation: State of the art and future trends. Satellite Navigation, 2(1), 7.
Gong, G., Gan, C., Fang, Y., Zhu, Y., & Hu, Q. (2022). Link-Blockage model and AP-Placement scheme for No-Blockage link between AGV and AP in Logistics-Warehousing VLC network. Photonics, 10, 31.
Groves, P. (2008). Principles of GNSS, inertial, and multi-sensor integrated navigation systems. Norwood, MA: Artech House.
Guan, W., Huang, L., Hussain, B., & Yue, C. P. (2022). Robust robotic localization using visible light positioning and inertial fusion. IEEE Sensors Journal, 22(6), 4882–4892. https://doi.org/10.1109/JSEN.2021.3053342
Hosseinianfar, H., & Brandt-Pearce, M. (2020). Cooperative passive pedestrian detection and localization using a visible light communication access network. IEEE Open Journal of the Communications Society, 1, 1325–1335. https://doi.org/10.1109/OJCOMS.2020.3020574
Hua, L., Zhuang, Y., Qi, L., Yang, J., & Shi, L. (2018). Noise analysis and modeling in visible light communication using allan variance. IEEE Access, 6(74), 320.
Hua, L., Zhuang, Y., Li, Y., Wang, Q., Zhou, B., Qi, L., Yang, J., Cao, Y., & Haas, H. (2021). FusionVLP: The fusion of photodiode and camera for visible light positioning. IEEE Transactions on Vehicular Technology. https://doi.org/10.1109/TVT.2021.3115232
Huang, H., Feng, L., Guo, P., Yang, A., & Ni, G. (2016). Iterative positioning algorithm to reduce the impact of diffuse reflection on an indoor visible light positioning system. Optical Engineering. https://doi.org/10.1117/1.OE.55.6.066117
Isam Younus, O., Chaudhary, N., Nazari Chaleshtori, Z., Ghassemlooy, Z., Nero Alves, L., & Zvanovec, S. (2021). The impact of blocking and shadowing on the indoor visible light positioning system. In: 2021 IEEE 32nd Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), pp 1–6, https://doi.org/10.1109/PIMRC50174.2021.9569377
Jung, S. Y., Lee, S. R., & Park, C. S. (2014). Indoor location awareness based on received signal strength ratio and time division multiplexing using light-emitting diode light. Optical Engineering, 53(1), 016. https://doi.org/10.1117/1.OE.53.1.016106
Kahn, J., & Barry, J. (1997). Wireless infrared communications. Proceedings of the IEEE, 85(2), 265–298. https://doi.org/10.1109/5.554222
Keskin, M. F., & Gezici, S. (2016). Comparative theoretical analysis of distance estimation in visible light positioning systems. Journal of Lightwave Technology, 34(3), 854–865. https://doi.org/10.1109/JLT.2015.2504130
Kim, D., Park, J. K., & Kim, J. T. (2019). Three-dimensional VLC positioning system model and method considering receiver tilt. IEEE Access, 7(132), 205. https://doi.org/10.1109/ACCESS.2019.2940759
Kim, D., Park, J. K., & Kim, J. T. (2022). Single LED, Single PD-Based adaptive bayesian tracking method. Sensors, 22, 6488.
Li, L., Hu, P., Peng, C., Shen, G., & Zhao, F. (2014). Epsilon: A visible light based positioning system. In: Proceedings of the 11th USENIX Conference on Networked Systems Design and Implementation, USENIX Association, USA, NSDI’14, pp. 331–343
Li, X., Wang, X., Liao, J., Li, X., Li, S., & Lyu, H. (2021). Semi-tightly coupled integration of multi-GNSS PPP and S-VINS for precise positioning in GNSS-challenged environments. Satellite Navigation, 2, 1–14.
Li, Z., Yang, A., Lv, H., Feng, L., & Song, W. (2017). Fusion of visible light indoor positioning and inertial navigation based on particle filter. IEEE Photonics Journal, 9(5), 1–13. https://doi.org/10.1109/JPHOT.2017.2733556
Li, Z., Zhao, X., Zhao, Z., & Braun, T. (2023). CrowdFusion: Multisignal fusion SLAM positioning leveraging visible light. IEEE Internet of Things Journal, 10(13), 065.
Liang, Q., Lin, J., & Liu, M. (2019). Towards Robust Visible Light Positioning Under LED Shortage by Visual-inertial Fusion. 2019 International Conference on Indoor Positioning and Indoor Navigation (IPIN) (pp. 1–8). Pisa, Italy: IEEE.
Qin, T., Li, P., & Shen, S. (2018). Vins-mono: A robust and versatile monocular visual-inertial state estimator. IEEE Transactions on Robotics, 34(4), 1004–1020.
Santerre, R., Geiger, A., & Banville, S. (2017). Geometry of GPS dilution of precision: Revisited. GPS Solutions, 21(4), 1747–1763.
Sheikholeslami, S. M., Fazel, F., Abouei, J., & Plataniotis, K. N. (2021). Sub-decimeter VLC 3D indoor localization with handover probability analysis. IEEE Access, 9(122), 236. https://doi.org/10.1109/ACCESS.2021.3108173
Singh, A., Salameh, H. A. B., Ayyash, M., & Elgala, H. (2024). Efficient Power Allocation and Saving Framework for VLC-Enabled Indoor Networks With Crowded Heterogeneous Obstacles. IEEE Sensors Journal, 24(15), 491.
Solà, J. (2017). Quaternion kinematics for the error-state Kalman filter. arXiv:1711.02508
Sun, X., Zhuang, Y., Huai, J., Hua, L., Chen, D., Li, Y., Cao, Y., & Chen, R. (2022). RSS-based visible light positioning using nonlinear optimization. IEEE Internet of Things Journal. https://doi.org/10.1109/JIOT.2022.3156616
Tang, H., Zhang, T., Niu, X., Fan, J., & Liu, J. (2022). Impact of the earth rotation compensation on MEMS-IMU preintegration of factor graph optimization. IEEE Sensors Journal. https://doi.org/10.1109/JSEN.2022.3192552
Tang, T., Shang, T., & Li, Q. (2021). Impact of multiple shadows on visible light communication channel. IEEE Communications Letters, 25(2), 513–517. https://doi.org/10.1109/LCOMM.2020.3031645
Vuong, D., Son, T., Ai, D., Do, T.H., Huynh, C., Tan, H.N., Quang, N., & Tttvinh (2022). A novel integrated model for positioning indoor MISO VLC exploiting non-light-of-sight communication. In: 2022 16th International Conference on Ubiquitous Information Management and Communication (IMCOM), pp 1–5, https://doi.org/10.1109/IMCOM53663.2022.9721812
Wang, Q., Luo, H., Men, A., Zhao, F., Gao, X., Wei, J., Zhang, Y., & Huang, Y. (2018). Light positioning: A high-accuracy visible light indoor positioning system based on attitude identification and propagation model. International Journal of Distributed Sensor Networks. https://doi.org/10.1177/1550147718758263
Xu, W., Cai, Y., He, D., Lin, J., & Zhang, F. (2022). Fast-lio2: Fast direct lidar-inertial odometry. IEEE Transactions on Robotics, 38(4), 2053–2073. https://doi.org/10.1109/TRO.2022.3141876
Yang, H., Zhong, W. D., Chen, C., Alphones, A., & Du, P. (2020). QoS-driven optimized design-based integrated visible light communication and positioning for indoor IoT networks. IEEE Internet of Things Journal, 7(1), 269–283. https://doi.org/10.1109/JIOT.2019.2951396
Yang, S. H., Kim, H. S., Son, Y. H., & Han, S. K. (2014). Three-dimensional visible light indoor localization using AOA and RSS with multiple optical receivers. Journal of Lightwave Technology, 32(14), 2480–2485. https://doi.org/10.1109/JLT.2014.2327623
Yang, X., Zhuang, Y., Shi, M., Sun, X., Cao, X., & Zhou, B. (2024). Ratiovlp: Ambient light noise evaluation and suppression in the visible light positioning system. IEEE Transactions on Mobile Computing, 23(5), 5755–5769.
Yasir, M., Ho, S. W., & Vellambi, B. N. (2014). Indoor positioning system using visible light and accelerometer. Journal of Lightwave Technology, 32(19), 3306–3316. https://doi.org/10.1109/JLT.2014.2344772
Yu, H., & Wilamowski, B. M. (2018). Levenberg-Marquardt training. CRC Press, Boca Raton, FL, pp. 12-1–12-16
Zhang, R., Liu, Z., Qian, K., Zhang, S., Du, P., Chen, C., & Alphones, A. (2020). Outage bridging and trajectory recovery in visible light positioning using insufficient rss information. IEEE Access, 8(162), 302. https://doi.org/10.1109/ACCESS.2020.3020874
Zhou, B., Lau, V., Chen, Q., & Cao, Y. (2018). Simultaneous positioning and orientating for visible light communications: Algorithm design and performance analysis. IEEE Transactions on Vehicular Technology. https://doi.org/10.1109/TVT.2018.2875044
Zhou, B., Liu, A., & Lau, V. (2019). Performance limits of visible light-based user position and orientation estimation using received signal strength under NLOS propagation. IEEE Transactions on Wireless Communications, 18(11), 5227–5241. https://doi.org/10.1109/TWC.2019.2934689
Zhu, B., Zhu, Z., Wang, Y., & Cheng, J. (2019). Optimal optical omnidirectional angle-of-arrival estimator with complementary photodiodes. Journal of Lightwave Technology, 37(13), 2932–2945. https://doi.org/10.1109/JLT.2019.2907969
Zhuang, Y., & El-Sheimy, N. (2016). Tightly-coupled integration of wifi and mems sensors on handheld devices for indoor pedestrian navigation. IEEE Sensors Journal, 16(1), 224–234.
Zhuang, Y., Hua, L., Qi, L., Yang, J., Cao, P., Cao, Y., Wu, Y., Thompson, J., & Haas, H. (2018). A survey of positioning systems using visible LED lights. IEEE Communications Surveys & Tutorials, 20(3), 1963–1988. https://doi.org/10.1109/COMST.2018.2806558
Zhuang, Y., Sun, X., Li, Y., Huai, J., Hua, L., Yang, X., Cao, X., Zhang, P., Cao, Y., Qi, L., Yang, J., El-Bendary, N., El-Sheimy, N., Thompson, J., & Chen, R. (2023). Multi-sensor integrated navigation/positioning systems using data fusion: From analytics-based to learning-based approaches. Information Fusion, 95, 62–90. https://doi.org/10.1016/j.inffus.2023.01.025
Zhuang, Y., Wang, Y., Yang, X., & Ma, T. (2024). Visible light positioning system using a smartphone’s built-in ambient light sensor and inertial measurement unit. Optics Letters, 49, 2105.
Zou, Q., Xia, W., Zhu, Y., Zhang, J., Huang, B., Yan, F., & Shen, L. (2017). A VLC and IMU integration indoor positioning algorithm with weighted unscented Kalman filter. In: 2017 3rd IEEE International Conference on Computer and Communications (ICCC), pp 887–891, https://doi.org/10.1109/CompComm.2017.8322671
Acknowledgements
The authors would like to acknowledge Prof. Xiaoji Niu and the Integrated and Intelligent Navigation (i2Nav) group from Wuhan University for providing the OB_GINS software and Honeywell HGUIDE i300 IMU that were used in this research. We also acknowledge Tengfei Yu, Xuan Wang, and Xiaoxiang Cao for their contribution to the experiments.
Funding
This work was supported in part by the National Key Research and Development Program of China (International Scientific and Technological Cooperation Program) under Grant 2022YFE0139300; in part by the National Natural Science Foundation of China under Grant 42374047; in part by the Guangdong Basic and Applied Basic Research Foundation under Grant 2022B1515120067; in part by the Key Research and Development Program of Hubei Province (International Scientific and Technological Cooperation Program) under Grant 2023EHA036; and in part by Wuhan AI Innovation Program under Grant 2023010402040011.
Author information
Authors and Affiliations
Contributions
Y.Z. proposed the idea of tightly integrating VLP and INS and revised the manuscript. X.S. was responsible for formula derivation, experimental calculation, analysis, and writing the draft manuscript. X.Y. and T.H. participated in the experiments. J.H. provided program guidance and revised the manuscript. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix: Disturbance of Lambertian model
Based on the error disturbance (Groves, 2008) of the DCM \({\varvec{C}}_v^u\) with respect to a small angle \(d\varvec{\phi }=d\varvec{\phi }^u_{vu}\),
we have
Based on equation (5), we derive the complete differential considering the position and attitude variables:
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Sun, X., Zhuang, Y., Yang, X. et al. Tightly coupled VLP/INS integrated navigation by inclination estimation and blockage handling. Satell Navig 6, 7 (2025). https://doi.org/10.1186/s43020-025-00158-9