The EmailGPT service contains a prompt injection...
Moderate severity
Unreviewed
Published
Jun 5, 2024
to the GitHub Advisory Database
•
Updated Jun 5, 2024
Description
Published by the National Vulnerability Database
Jun 5, 2024
Published to the GitHub Advisory Database
Jun 5, 2024
Last updated
Jun 5, 2024
The EmailGPT service contains a prompt injection vulnerability. The service uses an API service that allows a malicious user to inject a direct prompt and take over the service logic. Attackers can exploit the issue by forcing the AI service to leak the standard hard-coded system prompts and/or execute unwanted prompts. When engaging with EmailGPT by submitting a malicious prompt that requests harmful information, the system will respond by providing the requested data. This vulnerability can be exploited by any individual with access to the service.
References