Results 11 to 20 of about 24,061,183

Model-Based Underwater 6D Pose Estimation From RGB

open access: yes, IEEE Robotics and Automation Letters, 2023
Object pose estimation underwater allows an autonomous system to perform tracking and intervention tasks. Nonetheless, underwater target pose estimation is remarkably challenging due to, among many factors, limited visibility, light scattering, cluttered environments, and constantly varying water conditions.
Davide Sapienza   +7 more
openaire   +2 more sources

A simple layered RGB BRDF model [PDF]

open access: yes, Graphical Models, 2003
Many natural objects, and layered materials in general, exhibit non-linear reflection behaviour across wavelengths. An accurate representation of phenomena such as interference and colour separation generally requires a fine spectral representation of light instead of the commonly used RGB components.
X. Granier, W. Heidrich
openaire   +2 more sources

Multi-Input Deep Learning Model with RGB and Hyperspectral Imaging for Banana Grading

open access: yes, Agriculture, 2021
Grading is a vital process during the postharvest of horticultural products as it dramatically affects consumer preference and satisfaction when goods reach the market. Manual grading is time-consuming, uneconomical, and potentially destructive.
A. Mesa, John Chiang
semanticscholar   +1 more source

CMX: Cross-Modal Fusion for RGB-X Semantic Segmentation With Transformers [PDF]

open access: yes, IEEE Transactions on Intelligent Transportation Systems (Print), 2022
Scene understanding based on image segmentation is a crucial component of autonomous vehicles. Pixel-wise semantic segmentation of RGB images can be advanced by exploiting complementary features from the supplementary modality (X-modality). However, ...
Huayao Liu   +4 more
semanticscholar   +1 more source

RGB-IR Cross-modality Person ReID based on Teacher-Student GAN Model [PDF]

open access: yes, Pattern Recognition Letters, 2020
RGB-Infrared (RGB-IR) person re-identification (ReID) is a technology where the system can automatically identify the same person appearing at different parts of a video when light is unavailable. The critical challenge of this task is the cross-modality ...
Ziyue Zhang   +4 more
semanticscholar   +1 more source

Clothes-Changing Person Re-identification with RGB Modality Only [PDF]

open access: yes, Computer Vision and Pattern Recognition, 2022
The key to address clothes-changing person re-identification (re-id) is to extract clothes-irrelevant features, e.g., face, hairstyle, body shape, and gait.
Xinqian Gu   +5 more
semanticscholar   +1 more source

Assessing thermal imagery integration into object detection methods on air-based collection platforms

open access: yes, Scientific Reports, 2023
Object detection models commonly focus on utilizing the visible spectrum via Red–Green–Blue (RGB) imagery. Due to various limitations with this approach in low visibility settings, there is growing interest in fusing RGB with thermal Long Wave Infrared ...
James E. Gallagher, Edward J. Oughton
doaj   +1 more source

Estimation of Fv/Fm in Spring Wheat Using UAV-Based Multispectral and RGB Imagery with Multiple Machine Learning Methods

open access: yes, Agronomy, 2023
The maximum quantum efficiency of photosystem II (Fv/Fm) is a widely used indicator of photosynthetic health in plants. Remote sensing of Fv/Fm using MS (multispectral) and RGB imagery has the potential to enable high-throughput screening of plant health ...
Qiang Wu   +6 more
doaj   +1 more source

SwinNet: Swin Transformer Drives Edge-Aware RGB-D and RGB-T Salient Object Detection [PDF]

open access: yes, IEEE Transactions on Circuits and Systems for Video Technology (Print), 2022
Convolutional neural networks (CNNs) are good at extracting contextual features within certain receptive fields, while transformers can model the global long-range dependency features.
Zhengyi Liu   +3 more
semanticscholar   +1 more source

Elevation Test Images for RGB, LWIR and RGB-LWIR Models

open access: yes, 2023
... images extracted from footage recorded at 15 m (50 ft), 30 m (100 ft), 45 m (150 ft), 61 m (200 ft), 76 m (250 ft), 91 m (300 ft), 106 m (350 ft), and 121 m (400 ft). Five images will be extracted at every elevation for each image type. This will result in 120 images per flight (5 RGB, 5 LWIR, and 5 RGB-LWIR images at 8 elevations), with 600 labeled images ...
Gallagher, James, Oughton, Edward
openaire   +1 more source
