Scamming Scammers using Large Language Models

Type: MA thesis

Status: running

Supervisors: Linda-Sophie Schneider, Julian Thomas (Chair of Applied Cryptography, FAU)

This Master's thesis is conducted in cooperation with the Chair of Applied Cryptography.

Work description
In the digital age, scam emails have become a serious threat. These fraudulent messages aim to steal sensitive information or cause financial damage. This thesis aims to better understand the problem of scam emails and to develop effective countermeasures that reduce their success. We will address several aspects: how email addresses become exposed to scammers, how scam emails can be distinguished from other dubious messages, how responses can be automated with Large Language Models (LLMs), how the scammers' own use of LLMs can be detected, and how the economic damage inflicted on the scammers can be estimated from the collected data. Our goal is to strengthen the security of digital communication and to minimize the risks for users and organizations.
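
To make the reply-automation aspect more concrete, the following minimal sketch drafts a time-wasting answer to a scam email with a local text-generation model via the Hugging Face transformers pipeline. The model name, the persona prompt, and the draft_reply helper are illustrative assumptions and not part of the thesis specification.

# Minimal sketch: automated scam-baiting reply (assumptions: model choice,
# persona prompt, and helper name are placeholders, not the thesis design).
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",  # assumption: any small instruction-tuned model
)

PERSONA = (
    "You are a polite but easily distracted pensioner answering an email. "
    "Never reveal personal or financial information. Ask questions that keep "
    "the sender writing back."
)

def draft_reply(scam_email: str) -> str:
    # Build a plain-text prompt from the persona and the incoming scam email.
    prompt = f"{PERSONA}\n\nIncoming email:\n{scam_email}\n\nYour reply:\n"
    out = generator(prompt, max_new_tokens=200, do_sample=True, return_full_text=False)
    return out[0]["generated_text"].strip()

if __name__ == "__main__":
    print(draft_reply("Dear friend, I am a prince and need your bank details."))

In a full pipeline, such a component would only be triggered for mails that have been identified as scams, which is why the classification question below matters.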

The following questions should be considered:

  • How can an email address be exposed to scammers?
  • How can emails from scammers be distinguished from other dubious emails? (See the classifier sketch below this list.)
  • How can LLM responses be automated and customized?
  • How quickly do scammers recognize automated responses?
  • How can we accurately assess, from our collected data, the extent of the economic damage inflicted on the scammers?
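
One way to approach the second question is to treat it as a supervised classification problem. The PyTorch sketch below assumes emails have already been turned into fixed-size feature vectors (e.g. sentence embeddings); the network architecture, dimensions, and label encoding are illustrative assumptions only.

# Minimal sketch: scam vs. other dubious mail, over pre-computed email
# embeddings (feature extraction and labels are assumed placeholders).
import torch
import torch.nn as nn

class ScamClassifier(nn.Module):
    """Tiny feed-forward classifier over pre-computed email embeddings."""

    def __init__(self, n_features: int = 768, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(128, n_classes),  # logits: scam vs. other dubious mail
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Toy usage with random tensors standing in for embedded emails.
model = ScamClassifier()
features = torch.randn(4, 768)       # 4 emails, 768-dimensional embeddings (assumed)
labels = torch.tensor([1, 0, 1, 0])  # 1 = scam, 0 = other dubious mail (assumed)
loss = nn.CrossEntropyLoss()(model(features), labels)
loss.backward()

Which email representation separates scam mails from ordinary spam or phishing best is one of the questions the thesis is expected to investigate.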


Prerequisites
Prerequisites for this task are a good knowledge of Deep Learning and IT Security, familiarity with Python and PyTorch, and the ability to work independently.

For your application, please send your transcript of records.