DEEP LEARNING-BASED IMAGE CAPTION GENERATION USING VGG16 AND LSTM

Authors

  • T Sree Lakshmi, Valaja Chaithanya, N Balavenkata Muni

Abstract

Deep learning algorithms have transformed computer vision tasks in recent years, enabling machines to analyze and understand visual data with remarkable accuracy. Image caption generation, which involves automatically producing a natural language description of an image's content, is one such task that has attracted considerable attention. This research investigates the use of Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks to build an image caption generator. A sketch of the feature-extraction step follows.
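
To make the CNN encoding step concrete, the following is a minimal sketch, assuming a Keras/TensorFlow implementation, of how VGG16's penultimate (fc2) layer can serve as an image feature extractor; the helper name `extract_features` is illustrative and not taken from the paper.

```python
# Minimal sketch of CNN feature extraction with VGG16 (illustrative, not the
# authors' exact implementation).
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model
from tensorflow.keras.preprocessing.image import load_img, img_to_array

# Load VGG16 pretrained on ImageNet and drop the final classification layer,
# so the 4096-dimensional fc2 activations serve as the image feature vector.
base = VGG16(weights="imagenet")
extractor = Model(inputs=base.inputs, outputs=base.layers[-2].output)

def extract_features(path):
    img = load_img(path, target_size=(224, 224))     # VGG16 expects 224x224 RGB
    x = img_to_array(img)
    x = preprocess_input(np.expand_dims(x, axis=0))  # VGG-style preprocessing
    return extractor.predict(x, verbose=0)           # shape: (1, 4096)
```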

The system learns to produce informative captions for images by leveraging deep learning, specifically the VGG16 model for feature extraction and an LSTM for sequence generation. The work uses the Flickr8k dataset for training and evaluation, demonstrating the effectiveness of the proposed approach through model training, caption generation, and evaluation with BLEU scores. This work provides a deeper understanding of the synergy between CNNs and LSTMs for image understanding and natural language processing, showcasing the potential for building intelligent systems capable of understanding and describing visual content.
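
As a rough illustration of how the two components fit together, below is a hedged sketch of a standard "merge" decoder that combines a VGG16 feature vector with a partial caption to predict the next word, plus BLEU scoring via NLTK. The `vocab_size` and `max_length` values are hypothetical Flickr8k-scale placeholders, and this merge architecture is an assumption about the general approach, not the authors' exact model.

```python
# Hedged sketch of an image-captioning decoder (merge architecture) and BLEU
# evaluation; hyperparameters are illustrative placeholders.
from tensorflow.keras.layers import Input, Dense, Embedding, LSTM, Dropout, add
from tensorflow.keras.models import Model
from nltk.translate.bleu_score import corpus_bleu

vocab_size, max_length = 8000, 34  # hypothetical Flickr8k-scale values

# Image branch: project the 4096-d VGG16 feature into the decoder space.
img_in = Input(shape=(4096,))
img_vec = Dense(256, activation="relu")(Dropout(0.5)(img_in))

# Text branch: embed the partial caption and summarize it with an LSTM.
txt_in = Input(shape=(max_length,))
txt_vec = LSTM(256)(Dropout(0.5)(Embedding(vocab_size, 256, mask_zero=True)(txt_in)))

# Merge both branches and predict the next word of the caption.
merged = Dense(256, activation="relu")(add([img_vec, txt_vec]))
out = Dense(vocab_size, activation="softmax")(merged)
model = Model(inputs=[img_in, txt_in], outputs=out)
model.compile(loss="categorical_crossentropy", optimizer="adam")

# BLEU scoring compares generated captions against reference captions:
# references = list of lists of tokenized reference captions per image,
# hypotheses = list of tokenized generated captions.
# bleu1 = corpus_bleu(references, hypotheses, weights=(1.0, 0, 0, 0))
```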

Published

2024-02-20

Section

Articles