Prompt Strategies for Sarcastic Meme Detection: A Comparative Analysis
Abstract
Memes, often characterized by subtle humour and irony, have become a prominent digital communication medium. Detecting sarcasm in memes is challenging because sarcasm is highly context-dependent, and misclassified sarcastic content can degrade user experience on social media platforms. To improve the ability of social media systems to recognize and manage sarcastic content, this study investigates the effectiveness of Large Language Models (LLMs) for sarcasm detection in memes. Specifically, we evaluate three prompting techniques, namely the Standard Prompt, Chain of Thought (CoT), and Concise Chain of Thought (CCoT), to determine their impact on the classification of sarcastic memes. Using the GOAT dataset as a benchmark, the study employs four pre-trained LLMs: Flan-T5-XXL, Llama-2, Mistral 7B, and GPT-2. Through a comparative analysis, the research identifies the most effective prompting strategies for sarcasm detection. The results show that CoT and CCoT significantly outperform the Standard Prompt, with CCoT achieving the highest accuracy, particularly with more capable models such as Mistral 7B. However, the choice of prompting technique depends on both the model and the task requirements, underscoring the need for tailored approaches in sarcastic meme analysis.
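To illustrate how the three prompting strategies differ in structure, the following Python sketch builds a Standard, a CoT, and a CCoT prompt for a single meme caption. The template wording, the example caption, and the `classify` helper are illustrative assumptions, not the exact prompts or code used in the study.

```python
# Illustrative prompt templates for the three strategies compared in the paper.
# The exact wording used by the authors is not reproduced here; these are assumptions.

MEME_TEXT = "Oh great, another Monday. Exactly what I was hoping for."

STANDARD_PROMPT = (
    "Is the following meme caption sarcastic? Answer 'sarcastic' or 'not sarcastic'.\n"
    f"Caption: {MEME_TEXT}\nAnswer:"
)

COT_PROMPT = (
    "Is the following meme caption sarcastic?\n"
    f"Caption: {MEME_TEXT}\n"
    "Let's think step by step about the literal meaning, the implied meaning, "
    "and any contrast between them, then answer 'sarcastic' or 'not sarcastic'."
)

CCOT_PROMPT = (
    "Is the following meme caption sarcastic?\n"
    f"Caption: {MEME_TEXT}\n"
    "Think step by step, but keep the reasoning to one short sentence, "
    "then answer 'sarcastic' or 'not sarcastic'."
)


def classify(prompt: str, generate) -> str:
    """Send a prompt to any text-generation callable and return its raw output.

    `generate` is a placeholder for whichever LLM interface is available,
    e.g. a Hugging Face text-generation pipeline wrapped in a lambda.
    """
    return generate(prompt)


if __name__ == "__main__":
    # Print the prompts so the structural difference between strategies is visible.
    for name, prompt in [("Standard", STANDARD_PROMPT),
                         ("CoT", COT_PROMPT),
                         ("CCoT", CCOT_PROMPT)]:
        print(f"--- {name} ---\n{prompt}\n")
```

In this sketch the only difference between the variants is the reasoning instruction appended to the caption: none for the Standard Prompt, an open-ended step-by-step instruction for CoT, and a length-constrained one for CCoT.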