PromptCrypt is a technique proposed in a recent paper for securing LLM prompts. The paper suggests encoding prompts with emojis, making them unreadable to humans while still interpretable by the model.
This Python script demonstrates how to integrate with the OpenAI API using a simple "encryption" and "decryption" mechanism for prompts. It uses Base64 encoding as a simulated form of encryption to show how data might be transformed before being sent to the API. The example uses GPT-3.5, but feel free to switch between models.
*(Screenshot, 2024-02-11: example run of the script.)*
- Python 3.x
- `requests` library

Ensure you have the necessary Python version and the `requests` library installed. You can install `requests` using pip if you haven't already:

```
pip install requests
```
- **Encryption/Decryption:** The script simulates encryption and decryption using Base64 encoding and decoding, respectively.
- **Sending Encrypted Prompts:** Encrypted prompts are decrypted back to plain text within the script before being sent to the OpenAI API.
- **API Request:** A POST request is made to the OpenAI API with the decrypted prompt, and the response is processed and printed.
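The first two steps can be sketched as a pair of Base64 helpers; the function names here are illustrative, not necessarily those used in the actual script:

```python
import base64


def encrypt_prompt(prompt: str) -> str:
    # Simulated "encryption": Base64-encode the UTF-8 bytes of the prompt.
    return base64.b64encode(prompt.encode("utf-8")).decode("ascii")


def decrypt_prompt(encrypted: str) -> str:
    # Simulated "decryption": Base64-decode back to the original plain text.
    return base64.b64decode(encrypted.encode("ascii")).decode("utf-8")
```

Note that Base64 is an encoding, not real encryption: anyone can reverse it without a key, which is why the script treats it only as a stand-in for a genuine cipher.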
To use this script, simply call the `full_workflow` function with a prompt as the argument.
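Put together, a minimal sketch of that workflow might look like the following. The endpoint and payload follow the standard OpenAI chat completions API; the `OPENAI_API_KEY` environment variable is an assumption of this sketch, as is the exact body of `full_workflow`:

```python
import base64
import os

import requests


def full_workflow(prompt: str) -> str:
    # Step 1: "encrypt" the prompt (Base64 as simulated encryption).
    encrypted = base64.b64encode(prompt.encode("utf-8")).decode("ascii")

    # Step 2: decrypt back to plain text before the API call.
    decrypted = base64.b64decode(encrypted).decode("utf-8")

    # Step 3: POST the decrypted prompt to the OpenAI chat completions API.
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": decrypted}],
        },
        timeout=30,
    )
    response.raise_for_status()

    # Process and print the model's reply.
    reply = response.json()["choices"][0]["message"]["content"]
    print(reply)
    return reply


# Example (requires a valid OPENAI_API_KEY in the environment):
# full_workflow("Summarize PromptCrypt in one sentence.")
```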