
Detection of fresh tidiness in supermarket: a deep learning based approach

Published in: Multimedia Tools and Applications

Abstract

Psychological research shows that the tidiness of fresh-produce displays in supermarkets strongly affects consumers' desire to buy. Whether supermarket staff can reorganize shelves in time and keep the fresh produce neatly arranged therefore has a substantial impact on sales performance. To address this problem, we measure the similarity between a reference image of a neatly arranged display and a collected test image by computing the distance between their texture feature vectors, and use this similarity to decide whether the goods are neatly placed. HistNet is a texture classification network that takes ResNet as its baseline and introduces a histogram layer, which classifies images by their texture features very well. Building on HistNet, we jointly train with a large margin cosine loss (LMC loss) and a cross-entropy loss to improve the network's discriminative ability by increasing inter-class differences and reducing intra-class differences. We then change the convolution scheme used to learn bin centers in the histogram layer so that it captures richer texture features, and add a convolutional block attention module (CBAM) before the first convolution in the histogram layer to further improve performance. On top of the optimized network, we remove the last layer and compare the extracted texture features directly. After extensive experiments with a variety of distance measures, we choose cosine distance to judge the tidiness of fresh-produce displays in the supermarket. We tested the improved algorithm on the DTD, MINC-2500, and GTOS-mobile datasets, reaching accuracies of 71.79%, 83.21%, and 81.24%, respectively; the optimized network improves texture classification accuracy. In the application of supermarket fresh-tidiness detection, it distinguishes tidy from disordered displays well and has broad application prospects.
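As a rough, illustrative sketch (not the authors' released code) of the measurement step summarised above, the snippet below compares the texture feature vector of a test shelf photo against that of a reference photo of a neatly arranged shelf using cosine distance. The `backbone` argument stands in for the optimized HistNet with its last layer removed, and the helper names and `distance_threshold` value are assumptions introduced for illustration.

```python
# Hypothetical sketch: tidiness decision via cosine distance between texture
# feature vectors, following the procedure described in the abstract.
# `backbone` is assumed to be HistNet (ResNet baseline + histogram layer)
# with its final classification layer removed; the threshold is a placeholder.
import torch
import torch.nn.functional as F


def extract_features(image: torch.Tensor, backbone: torch.nn.Module) -> torch.Tensor:
    """Return an L2-normalised texture feature vector for a single image tensor."""
    with torch.no_grad():
        feats = backbone(image.unsqueeze(0))    # add batch dimension -> (1, D)
    return F.normalize(feats.flatten(), dim=0)  # unit vector for cosine comparison


def is_tidy(test_img: torch.Tensor,
            reference_img: torch.Tensor,
            backbone: torch.nn.Module,
            distance_threshold: float = 0.2) -> bool:
    """Decide tidiness: a small cosine distance to the tidy reference means tidy."""
    f_test = extract_features(test_img, backbone)
    f_ref = extract_features(reference_img, backbone)
    cosine_distance = 1.0 - torch.dot(f_test, f_ref).item()
    return cosine_distance < distance_threshold
```

A small cosine distance means the two texture descriptors point in nearly the same direction, so the shelf is treated as tidy; in practice the threshold would be calibrated on labelled examples of tidy and disordered displays.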


Data Availability

The raw/processed data required to reproduce these findings cannot be shared at this time, as the data also form part of an ongoing study.


Author information


Corresponding author

Correspondence to Qingshan Liu.

Ethics declarations

Conflicts of interest

We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zang, Y., Fu, C., Liu, Q. et al. Detection of fresh tidiness in supermarket: a deep learning based approach. Multimed Tools Appl 83, 77717–77732 (2024). https://doi.org/10.1007/s11042-024-18540-1



  • DOI: https://doi.org/10.1007/s11042-024-18540-1

