Publication Search

Query:

Scholar name: 介邓飞 (Jie Dengfei)

Real-time recognition research for an automated egg-picking robot in free-range duck sheds SCIE
Journal article | 2025, 22 (2) | JOURNAL OF REAL-TIME IMAGE PROCESSING

Abstract :

Achieving efficient and accurate detection and localization of duck eggs in the unstructured environment of free-range duck sheds is crucial for developing automated egg-picking robots. This paper proposes an improved YOLOv5s-based model (YOLOv5s-MNKS) designed to enhance detection performance, reduce model complexity, and improve the robot's adaptability and operational efficiency in complex environments. The model utilizes MobileNetV3 as the backbone network, reducing the number of parameters and increasing detection speed. The Squeeze-and-Excitation Module is replaced with a Normalization-based Attention Module to improve feature extraction capability. Group Shuffle Convolution and Bidirectional Feature Pyramid Network are introduced in the Neck layer, enhancing multi-scale feature fusion while reducing parameter count. A Soft-CIoU-NMS loss function is also designed, which improves detection accuracy in scenarios involving dense stacking and occlusion by lowering the confidence of overlapping bounding boxes instead of directly eliminating them. Experimental results demonstrate that the mAP of YOLOv5s-MNKS reaches 95.6%, representing a 0.3% improvement over the original model, while the model size is reduced to 5.7 MB, approximately 40% of the original size. When deployed on the Jetson Nano embedded platform with TensorRT acceleration, the model achieves a detection frame rate of 22.3 frames per second. In simulated and real-world duck shed scenarios, the improved model accurately and quickly identifies and locates duck eggs in complex environments, including occlusion, stacking, and low lighting, demonstrating strong robustness and applicability. This research provides technical support for the future development of duck egg-picking robots.
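The Soft-CIoU-NMS step above keeps overlapping detections alive by decaying their confidence rather than deleting them. A minimal sketch of that idea, using plain IoU and a Gaussian decay (the paper's variant additionally incorporates CIoU, whose details are not given here, so this is illustrative only):

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, format [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Soft-NMS: decay the scores of boxes overlapping the current best
    detection instead of removing them outright."""
    boxes, scores = boxes.astype(float), scores.astype(float).copy()
    keep, idxs = [], list(range(len(scores)))
    while idxs:
        best = max(idxs, key=lambda i: scores[i])
        keep.append(best)
        idxs.remove(best)
        if not idxs:
            break
        rest = np.array(idxs)
        overlaps = iou(boxes[best], boxes[rest])
        scores[rest] *= np.exp(-(overlaps ** 2) / sigma)  # Gaussian decay
        idxs = [i for i in rest if scores[i] > score_thresh]
    return keep, scores
```

With hard NMS, the lower-scoring box of a heavily overlapped pair would be discarded; here it survives with a reduced score, so densely stacked or occluded eggs can still pass a later confidence threshold.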

Keyword :

Attention mechanism; Duck egg detection; Duck egg-picking robot; Lightweight model; YOLOv5s

Cite:


GB/T 7714  Jie, Dengfei, Wang, Jun, Wang, Hao, et al. Real-time recognition research for an automated egg-picking robot in free-range duck sheds [J]. JOURNAL OF REAL-TIME IMAGE PROCESSING, 2025, 22 (2).
MLA  Jie, Dengfei, et al. "Real-time recognition research for an automated egg-picking robot in free-range duck sheds." JOURNAL OF REAL-TIME IMAGE PROCESSING 22.2 (2025).
APA  Jie, Dengfei, Wang, Jun, Wang, Hao, Lv, Huifang, He, Jincheng, & Wei, Xuan. Real-time recognition research for an automated egg-picking robot in free-range duck sheds. JOURNAL OF REAL-TIME IMAGE PROCESSING, 2025, 22 (2).

YKD-SLAM: a visual SLAM system in dynamic environments based on object detection and region segmentation SCIE
Journal article | 2025, 36 (10) | MEASUREMENT SCIENCE AND TECHNOLOGY

Abstract :

Simultaneous localization and mapping (SLAM) is crucial in autonomous robot navigation. However, existing SLAM systems generally assume a static environment, which makes it difficult to cope with the interference caused by moving objects in dynamic scenes, affecting the system's localization accuracy and robustness. To address this challenge, this paper proposes YKD-SLAM, a visual SLAM system for indoor dynamic environments, which is based on the ORB-SLAM2 framework and incorporates YOLOv8 object detection, RCF-KMeans (Region-Constrained Fast K-Means), and epipolar geometric constraints to accurately reject dynamic feature points and improve localization performance in dynamic environments. YKD-SLAM first uses YOLOv8 to detect dynamic objects in the scene and generate detection boxes, optimizes the depth map through morphological opening operations, and performs multi-region segmentation of the area within each detection box using RCF-KMeans. Subsequently, a dynamic feature point rejection strategy based on epipolar geometric constraints discriminates the regions within each detection box into dynamic and static regions, and feature points in dynamic regions are rejected to improve the localization accuracy and robustness of the system in dynamic environments. Experimental results show that YKD-SLAM performs well on several dynamic scenes in the TUM RGB-D dataset. Compared with ORB-SLAM2, its ATE is reduced by 98.37%; compared with DynaSLAM, system operating efficiency is improved by 95.35%. In addition, practical experiments conducted in indoor dynamic scenes further validate its potential in real applications.
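The epipolar test used to reject dynamic points can be sketched as follows: given the fundamental matrix F between two frames, the match of a static point should lie close to its epipolar line, so a large point-to-line distance flags the feature as dynamic. The matrix, points, and 1-pixel threshold below are illustrative, not values from the paper:

```python
import numpy as np

def epipolar_distance(F, pts1, pts2):
    """Distance of each point in pts2 (Nx2) to the epipolar line F @ p1."""
    ones = np.ones((len(pts1), 1))
    p1 = np.hstack([pts1, ones])   # homogeneous coordinates, Nx3
    p2 = np.hstack([pts2, ones])
    lines = (F @ p1.T).T           # epipolar lines [a, b, c] in image 2
    num = np.abs(np.sum(lines * p2, axis=1))
    den = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2)
    return num / den

def is_dynamic(F, pts1, pts2, thresh=1.0):
    """Flag matches whose epipolar distance exceeds the threshold (pixels)."""
    return epipolar_distance(F, pts1, pts2) > thresh
```

In the full system this check is applied per segmented region inside each detection box, and a region whose features consistently violate the constraint is treated as dynamic.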

Keyword :

dynamic environments; epipolar geometry constraint; feature point culling; visual SLAM; YOLOv8

Cite:


GB/T 7714  Qiu, Haofeng, Wang, Jun, Lin, Zhipeng, et al. YKD-SLAM: a visual SLAM system in dynamic environments based on object detection and region segmentation [J]. MEASUREMENT SCIENCE AND TECHNOLOGY, 2025, 36 (10).
MLA  Qiu, Haofeng, et al. "YKD-SLAM: a visual SLAM system in dynamic environments based on object detection and region segmentation." MEASUREMENT SCIENCE AND TECHNOLOGY 36.10 (2025).
APA  Qiu, Haofeng, Wang, Jun, Lin, Zhipeng, Fan, Jiating, He, Jincheng, & Jie, Dengfei. YKD-SLAM: a visual SLAM system in dynamic environments based on object detection and region segmentation. MEASUREMENT SCIENCE AND TECHNOLOGY, 2025, 36 (10).

SDGTrack: A Multi-Target Tracking Method for Pigs in Multiple Farming Scenarios SCIE
Journal article | 2025, 15 (11) | ANIMALS

Abstract :

In pig farming, multi-object tracking (MOT) algorithms are effective tools for identifying individual pigs and monitoring their health, which enhances management efficiency and intelligence. However, due to the considerable variation in breeding environments across different pig farms, existing models often struggle to perform well in unfamiliar settings. To enhance generalization across diverse tracking scenarios, we propose the SDGTrack method, which improves tracking performance across various farming environments by enhancing the model's adaptability to different domains and integrating an optimized tracking strategy. To comprehensively evaluate the potential of SDGTrack, we constructed a multi-scenario dataset that includes both public and private data, spanning ten distinct pig farming environments. Only a portion of the daytime scenes was used as the training set, while the remaining daytime and nighttime scenes served as the validation set. The experimental results demonstrate that SDGTrack achieved a MOTA score of 80.9%, an IDSW of 24, and an IDF1 score of 85.1% across various scenarios. Compared to the original CSTrack method, SDGTrack improved the MOTA and IDF1 scores by 16.7% and 33.3%, respectively, while reducing the number of ID switches by 94.6%. These findings indicate that SDGTrack offers robust tracking capabilities in previously unseen farming environments, providing a strong technical foundation for monitoring pigs in different settings.
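For reference, the MOTA figure quoted above follows the standard CLEAR-MOT definition: misses, false positives, and identity switches are summed over all frames and normalized by the total number of ground-truth objects. A one-function sketch:

```python
def mota(misses, false_positives, id_switches, num_gt):
    """CLEAR-MOT accuracy: tracking errors pooled over all frames,
    normalized by the total number of ground-truth objects."""
    return 1.0 - (misses + false_positives + id_switches) / num_gt
```

For example, `mota(10, 5, 2, 100)` evaluates to 0.83; note the score can go negative when errors outnumber ground-truth objects.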

Keyword :

computer vision; group-housed pigs; multi-object tracking; multi-scene generalization

Cite:


GB/T 7714  Liu, Tao, Jie, Dengfei, Zhuang, Junwei, et al. SDGTrack: A Multi-Target Tracking Method for Pigs in Multiple Farming Scenarios [J]. ANIMALS, 2025, 15 (11).
MLA  Liu, Tao, et al. "SDGTrack: A Multi-Target Tracking Method for Pigs in Multiple Farming Scenarios." ANIMALS 15.11 (2025).
APA  Liu, Tao, Jie, Dengfei, Zhuang, Junwei, Zhang, Dehui, & He, Jincheng. SDGTrack: A Multi-Target Tracking Method for Pigs in Multiple Farming Scenarios. ANIMALS, 2025, 15 (11).

Research progress and prospects of intelligent pig feeding and breeding equipment (生猪智能饲喂养殖装备研究进展与展望)
Journal article | 2025, 46 (11), 10-18 | 饲料工业

Abstract :

Pig farming is a foundational industry in the development of China's agricultural economy, and precision feeding and breeding-pig performance testing are key links in pig production. Traditional feeding equipment suffers from severe feed waste and heavy manual labour, which limit pig productivity. As China's pig industry develops rapidly toward large-scale, standardized, and intelligent production, the wide adoption of electronic pig feeding stations and breeding-pig performance testing stations has raised management standards and farm productivity, making intelligent pig feeding and breeding equipment a current research hotspot. This article compares and analyses the mechanical structures and functional characteristics of electronic pig feeding stations for different feeding stages in China and abroad; based on the needs of scientific pig breeding, it describes the working principles and key technologies of domestic and foreign breeding-pig performance testing stations; finally, it summarizes and forecasts the development trends of electronic feeding stations and performance testing stations against the backdrop of rapidly advancing next-generation information sensing and artificial intelligence technologies, with the aim of providing an important reference for improving China's modern intelligent pig feeding and breeding equipment.

Keyword :

Information perception; Performance testing station; Intelligent management; Pig farming; Electronic feeding station

Cite:


GB/T 7714  介邓飞, 李天乐, 姜朋辉, et al. 生猪智能饲喂养殖装备研究进展与展望 [J]. 饲料工业, 2025, 46 (11): 10-18.
MLA  介邓飞, et al. "生猪智能饲喂养殖装备研究进展与展望." 饲料工业 46.11 (2025): 10-18.
APA  介邓飞, 李天乐, 姜朋辉, 王杨, 何金成, & 沈美雄. 生猪智能饲喂养殖装备研究进展与展望. 饲料工业, 2025, 46 (11), 10-18.

Research on a pig body temperature detection method based on infrared and RGB image registration (基于红外与RGB图像配准的生猪体温检测方法研究)
Journal article | 2025, 61 (05), 410-418 | 中国畜牧杂志

Abstract :

To address the blurred detail, poor imaging quality, and susceptibility to environmental factors of infrared thermal imaging, this paper proposes a pig body temperature detection method based on infrared and RGB image registration. We collected 484 sets of pig rectal temperature data, facial infrared and RGB images, and environmental information (wind speed, temperature, humidity, and light intensity). Infrared and RGB images were registered via camera coordinate transformation, and the YOLOv8 object detection algorithm was applied to extract temperature information from the ear, eye, and nose regions. Based on these key feature variables, a stacked temperature-inversion model was built with multiple ensemble learning models as base learners and a multilayer perceptron (MLP) neural network as the meta-model. Results show that when YOLOv8 detection models were built on RGB and infrared thermal images, the mean average precision (mAP) of the RGB model was 31.66% higher than that of the infrared model, and the registration error of the camera-coordinate-transformation-based infrared/RGB registration was within 1 pixel. In addition, the three-layer ensemble stacking model predicted body temperature with a root mean square error (RMSE) of 0.19 °C and a mean absolute error (MAE) of 0.14 °C; compared with multiple linear regression and MLP models, the MAE was reduced by 48.33% and 37.67%, respectively. The method therefore effectively eliminates detection blur through infrared/RGB registration and achieves higher temperature-prediction accuracy with the improved stacking model, meeting the temperature-measurement requirements of pig farms.
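The stacked temperature-inversion model described above feeds the outputs of several base learners into a meta-model. A simplified numpy sketch of that two-level structure follows; plain least-squares fits stand in for the paper's ensemble base models and MLP meta-model, and the out-of-fold training used in real stacking to avoid leakage is omitted for brevity:

```python
import numpy as np

def fit_linear(X, y):
    """Least-squares fit with a bias term; returns a predict function."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return lambda Xn: np.hstack([Xn, np.ones((len(Xn), 1))]) @ w

def fit_stacking(X, y, feature_groups):
    """Level 0: one base model per feature group (e.g. ear/eye/nose
    temperatures, environment). Level 1: meta-model on base predictions."""
    bases = [fit_linear(X[:, g], y) for g in feature_groups]
    meta_X = np.column_stack([m(X[:, g]) for m, g in zip(bases, feature_groups)])
    meta = fit_linear(meta_X, y)
    def predict(Xn):
        Z = np.column_stack([m(Xn[:, g]) for m, g in zip(bases, feature_groups)])
        return meta(Z)
    return predict
```

The design point the abstract relies on is that the meta-model learns how to weight each region's evidence, which is why the stack outperforms any single-input regression.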

Keyword :

Three-layer ensemble stacking model; Body temperature detection; Image registration; Pig; Infrared thermal imaging

Cite:


GB/T 7714  介邓飞, 申恒尉, 李家俊, et al. 基于红外与RGB图像配准的生猪体温检测方法研究 [J]. 中国畜牧杂志, 2025, 61 (05): 410-418.
MLA  介邓飞, et al. "基于红外与RGB图像配准的生猪体温检测方法研究." 中国畜牧杂志 61.05 (2025): 410-418.
APA  介邓飞, 申恒尉, 李家俊, 李天乐, 何金成, & 沈美雄. 基于红外与RGB图像配准的生猪体温检测方法研究. 中国畜牧杂志, 2025, 61 (05), 410-418.

A Two-Stage Lightweight Model for Pig Pain Recognition Based on YOLOv8-SNMT EI
Journal article | 2025 | SSRN

Abstract :

Pig health and welfare concerns are gaining increasing attention, and accurately monitoring signs of pain has become a crucial tool for effective management and disease prevention. This paper proposes YOLOv8-SNMT, a two-stage lightweight pain recognition model focused on the facial regions of interest in pigs. In the object detection stage, the enhanced YOLOv8n-SlimNeck model optimises the Neck structure for rapid and precise localisation of pig facial regions, reducing interference from irrelevant areas. In the pain classification stage, the MobileNetV3-TACH model is introduced, replacing the traditional Squeeze-and-Excitation (SE) attention mechanism with a Triplet Attention module to form the TABneck structure. This enhancement improves cross-dimensional feature interactions, thereby enhancing the extraction of pain-related features and reducing model complexity. Additionally, the Classify-Head module of YOLOv8n is employed to further improve pain classification accuracy. On the self-constructed pig facial dataset, the proposed method achieves a facial region detection accuracy of 95.1% on the test set, surpassing YOLOv5n, YOLOv6n, and YOLOv8n by 2.4, 2.3, and 2.4 percentage points, respectively. Moreover, the model's parameter size and Floating-Point Operations (FLOPs) are significantly reduced. In the pain classification task, the MobileNetV3-TACH model achieves an accuracy of 99.7%, representing an average improvement of approximately 5.0, 3.6, 5.1, and 3.8 percentage points over EfficientNetV2, ResNet34, ShuffleNetV2, and MobileNetV3, respectively. The experimental results show that the proposed method achieves an optimal balance between classification accuracy and computational efficiency, meeting the requirements for real-time monitoring in resource-constrained environments. This offers crucial technical support for the intelligent monitoring of pig health and welfare. © 2025, The Authors. All rights reserved.

Keyword :

Disease control; mHealth

Cite:


GB/T 7714  Jie, Dengfei, Li, Tianle, Wang, Yang, et al. A Two-Stage Lightweight Model for Pig Pain Recognition Based on YOLOv8-SNMT [J]. SSRN, 2025.
MLA  Jie, Dengfei, et al. "A Two-Stage Lightweight Model for Pig Pain Recognition Based on YOLOv8-SNMT." SSRN (2025).
APA  Jie, Dengfei, Li, Tianle, Wang, Yang, Jiang, Penghui, He, Jincheng, Shen, Hengwei, et al. A Two-Stage Lightweight Model for Pig Pain Recognition Based on YOLOv8-SNMT. SSRN, 2025.

Research on a Pig Pain Perception Model Based on YOLOv8-SN Cascade ResMHANet ESCI
Journal article | 2025, 17 (1) | ACTA UNIVERSITATIS SAPIENTIAE INFORMATICA

Abstract :

Pig health and welfare are garnering increasing attention, and the accurate monitoring of pain indicators has become a crucial tool for effective management and disease prevention. The Pig Grimace Scale (PGS) assesses pain by analyzing facial expressions in three key regions: the ears, eyes, and snout. However, existing pain classification models typically focus on features from only one of these regions, limiting their ability to fully meet the comprehensive assessment requirements of the PGS. To address this limitation, this study introduces a YOLOv8-SlimNeck (YOLOv8-SN) cascade Residual Multi-Head Attentional Feature Fusion Network (ResMHANet) model for pain perception in pigs. First, a lightweight VoV-GSCSP module (an efficient cross-stage partial network built with a one-shot aggregation strategy) is integrated into the YOLOv8 architecture as its Neck layer, enabling precise localization of the pig's facial regions while effectively reducing the impact of background noise and irrelevant information on classification performance. Next, the ResMHANet model is developed to extract deep, pain-related features using a residual structure. The Multi-Head Module directs attention to multiple key facial regions, enhancing the model's ability to focus on pain-related features in each area. Attentional Feature Fusion (AFF) combines the deep features extracted from the residual structure with the multi-region features from the Multi-Head Module, further improving the model's capacity to perceive and extract pain-related information from the pig's face. On the self-constructed pig facial image dataset, the YOLOv8-SN model achieves a facial region recognition precision of 96.5% mAP@0.5:0.95 on the test set, representing improvements of 2.3, 0.5, and 2.2 percentage points compared to YOLOv5, YOLOv8, and YOLOv12, respectively, while the model's Parameters and Floating Point Operations (FLOPs) are significantly reduced. The ResMHANet model achieves an F1-Score of 95.1% in the pain classification task, representing improvements of 2.0, 7.0, 2.6, and 20.0 percentage points over ResNet34, MobileNetV3, ShuffleNetV2, and MobileViT, respectively. The experimental results demonstrate that the proposed model aligns better with the PGS evaluation criteria and offers a reliable solution for non-contact pig pain recognition, thereby advancing the development of intelligent pig farming.

Keyword :

AFF; Multi-Head module; PGS; Pig pain recognition; YOLOv8-SN

Cite:


GB/T 7714  Jie, Dengfei, Li, Tianle, Wang, Yang, et al. Research on a Pig Pain Perception Model Based on YOLOv8-SN Cascade ResMHANet [J]. ACTA UNIVERSITATIS SAPIENTIAE INFORMATICA, 2025, 17 (1).
MLA  Jie, Dengfei, et al. "Research on a Pig Pain Perception Model Based on YOLOv8-SN Cascade ResMHANet." ACTA UNIVERSITATIS SAPIENTIAE INFORMATICA 17.1 (2025).
APA  Jie, Dengfei, Li, Tianle, Wang, Yang, Jiang, Penghui, He, Jincheng, Shen, Hengwei, et al. Research on a Pig Pain Perception Model Based on YOLOv8-SN Cascade ResMHANet. ACTA UNIVERSITATIS SAPIENTIAE INFORMATICA, 2025, 17 (1).

Pig cough detection using deep features and an improved SKA-TDNN model SCIE
Journal article | 2025, 57 (8) | TROPICAL ANIMAL HEALTH AND PRODUCTION

Abstract :

Pig coughing is an important acoustic indicator for the early detection of respiratory diseases in swine. Traditional monitoring relies heavily on manual inspection, which is labour-intensive and increases the risk of cross-infection. Therefore, intelligent detection of cough sounds using audio-based methods is essential for improving disease prevention and breeding efficiency. However, most existing studies focus on traditional audio features, such as Mel-Frequency Cepstral Coefficients (MFCCs) and filter bank (F-bank) features, which often struggle to maintain recognition accuracy in the complex acoustic environments of pig farms. This study proposes a novel framework that employs deep feature representations as an alternative to handcrafted features, thereby capturing more robust acoustic patterns. In addition, multiple data augmentation techniques are applied to enhance data diversity and model generalisation. Building on the Time Delay Neural Network (TDNN) architecture, we further design a Simplified Kernelized Attention TDNN (SKA-TDNN) model, which integrates a lightweight attention mechanism to improve temporal feature modelling while significantly reducing the number of parameters. Experimental results show that models trained on deep features outperform those based on MFCC and F-bank features under various evaluation metrics. When compared against mainstream architectures including Convolutional Neural Networks (CNNs), ECAPA-TDNN, and conventional TDNNs, the proposed SKA-TDNN achieves the best performance, reaching an overall accuracy of 98.9%, with only 5.83 MB of parameters. These findings highlight the novelty and practical value of introducing deep features with a lightweight attention-enhanced TDNN for animal cough detection. Beyond swine, the proposed framework provides a promising and generalisable approach for intelligent respiratory disease monitoring in other livestock and domestic animals. Moreover, the system is particularly suited for deployment in modern large-scale farming environments, where automated, real-time health monitoring is essential for precision livestock management.
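A TDNN layer, the building block that the SKA-TDNN model extends, is essentially a 1-D convolution over time whose context frames are spaced by a dilation factor. A minimal numpy sketch of one such layer (illustrative sizes only; this is not the paper's SKA-TDNN architecture):

```python
import numpy as np

def tdnn_layer(x, w, b, dilation=1):
    """One TDNN layer. x is (T, d_in), w is (k, d_in, d_out), b is (d_out,).
    Each output frame t sees input frames t, t+dilation, ..., t+(k-1)*dilation."""
    k = w.shape[0]
    T = x.shape[0] - (k - 1) * dilation   # valid output length
    out = np.zeros((T, w.shape[2]))
    for t in range(T):
        taps = x[t : t + k * dilation : dilation]  # (k, d_in) context window
        out[t] = np.tensordot(taps, w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0)  # ReLU
```

Stacking such layers with growing dilation widens the temporal receptive field cheaply, which is why TDNNs suit frame-level acoustic features like cough spectra.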

Keyword :

Data augmentation; Deep features; Pig cough; Time Delay Neural Network (TDNN)

Cite:


GB/T 7714  Jie, Dengfei, Jiang, Penghui, Li, Tianle, et al. Pig cough detection using deep features and an improved SKA-TDNN model [J]. TROPICAL ANIMAL HEALTH AND PRODUCTION, 2025, 57 (8).
MLA  Jie, Dengfei, et al. "Pig cough detection using deep features and an improved SKA-TDNN model." TROPICAL ANIMAL HEALTH AND PRODUCTION 57.8 (2025).
APA  Jie, Dengfei, Jiang, Penghui, Li, Tianle, Wang, Yang, & He, Jincheng. Pig cough detection using deep features and an improved SKA-TDNN model. TROPICAL ANIMAL HEALTH AND PRODUCTION, 2025, 57 (8).

Research on a non-contact detection method for pig backfat thickness (生猪背膘厚度无接触检测方法研究)
Journal article | 2025, 0 | 中国农机化学报

Abstract :

Single-modality methods for detecting pig backfat thickness ignore body-size information and the relationship between the 3-D geometry of the back and backfat thickness, limiting model generalization and preventing further gains in detection accuracy. To address this, a non-contact backfat thickness detection method based on multimodal fusion is proposed. Pig body-size information is obtained through image registration and coordinate transformation, and seven datasets spanning three modalities (body-size measurements, back depth images, and RGB images) are constructed to compare the detection accuracy of single-, dual-, and multi-modality models. A Large Selective Kernel (LSK) module and Omni-Dimensional Dynamic Convolution (ODConv) are introduced to enlarge the model's receptive field and improve its ability to extract full-dimensional features. Finally, a self-attentive depth-balanced multimodal fusion algorithm (SDE) is proposed to address the difficulty of cross-modal interaction and the feature loss found in existing fusion algorithms. Experimental results show that adding data modalities effectively improves detection accuracy. Compared with the original model, introducing LSK, ODConv, and SDE reduces MAE, RMSE, and MAPE by 30.94%, 28.73%, and 29.82%, to 0.36 mm, 0.57 mm, and 2.27%, respectively, while R² rises by 6.07% to 0.94. The accuracy of the proposed multimodal non-contact method meets the requirements of practical production for pig backfat thickness detection and can promote further high-quality development of this technology.

Keyword :

Multimodal fusion; Deep learning; Feature extraction; Pig; Backfat thickness

Cite:


GB/T 7714  介邓飞, 李家俊, 王杨, et al. 生猪背膘厚度无接触检测方法研究 [J]. 中国农机化学报, 2025: 0.
MLA  介邓飞, et al. "生猪背膘厚度无接触检测方法研究." 中国农机化学报 (2025): 0.
APA  介邓飞, 李家俊, 王杨, 姜朋辉, 沈美雄, & 何金成. 生猪背膘厚度无接触检测方法研究. 中国农机化学报, 2025, 0.

Non-Destructive Prediction Approach for Pomelo Granulation Quality Using MRI and Hyperspectral Imaging Technology EI
Journal article | 2024 | SSRN

Abstract :

Pomelo is prone to juice sac granulation during ripening and storage, resulting in decreased moisture content, firm flesh, and diminished flavor, which reduces its market value and consumer experience. The causes of juice sac granulation are complex, and combined with the relatively thick peel of pomelo, detection methods are lacking and detection is difficult. To address the subjectivity of traditional granulation rate calculations and their inability to achieve non-destructive detection, this study proposes a non-destructive method for assessing pomelo granulation quality based on magnetic resonance imaging (MRI) and hyperspectral imaging technology. The granulation rate is calculated using MRI to obtain a standardized and consistent granulation value. Combining the granulation value obtained from MRI with the spectral information of pomelo obtained through hyperspectral imaging, a spectral network called Shuffle_spcc is constructed using the ShuffleNet structure widely employed in deep learning. Moreover, based on the granulation model Shuffle_spcc, a multi-output model is built using transfer learning principles to predict multiple internal quality attributes simultaneously. The experimental results indicate that, compared to Partial Least Squares Regression (PLSR) and Principal Component Regression (PCR) models, the Shuffle_spcc model achieves the best prediction results for granulation rate: the correlation coefficients (R) in the training and validation sets reach 0.82 and 0.80, respectively, and the root mean square errors of calibration (RMSEC) and prediction (RMSEP) are 2.83% and 2.61%, respectively. The prediction of sugar content also yields excellent results, with R values of 0.84 and 0.81 on the training and validation sets, respectively. However, the prediction results for acidity and moisture content are comparatively poor. The method based on MRI and hyperspectral imaging can therefore be applied to detect juice sac granulation in pomelo and provides a valuable reference for non-destructive inspection of the internal quality of large-sized fruits in subsequent studies. © 2024, The Authors. All rights reserved.

Keyword :

Deep learning; Forecasting; Granulation; Hyperspectral imaging; Infrared devices; Learning systems; Least squares approximations; Magnetic resonance imaging; Mean square error; Moisture; Moisture determination

Cite:


GB/T 7714  Jie, Dengfei, Li, Zhihong, Wu, Shuang, et al. Non-Destructive Prediction Approach for Pomelo Granulation Quality Using MRI and Hyperspectral Imaging Technology [J]. SSRN, 2024.
MLA  Jie, Dengfei, et al. "Non-Destructive Prediction Approach for Pomelo Granulation Quality Using MRI and Hyperspectral Imaging Technology." SSRN (2024).
APA  Jie, Dengfei, Li, Zhihong, Wu, Shuang, Tian, Botao, Wang, Ping, & Wei, Xuan. Non-Destructive Prediction Approach for Pomelo Granulation Quality Using MRI and Hyperspectral Imaging Technology. SSRN, 2024.

Address: FAFU Library (No. 2 Xuyuan Road, Fuzhou, Fujian, PRC; Post Code: 350002)
Copyright: FAFU Library. Technical support: Beijing Aegean Software Co., Ltd. 闽ICP备10012082号