A versatile and extendable web UI for FAL-AI, designed to seamlessly integrate with any FAL-AI API endpoint. This interface supports multiple models, custom LoRAs, and more, offering an intuitive way to generate AI-based images using FAL-AI.
- Getting Started
- Environment Variables
- Running the Development Server
- Using the Application
- API Endpoints
- Learn More
- Deploy on Vercel
- Additional Notes
## Getting Started

To get started with this project, clone this repository and install the required dependencies. You will need:
- Node.js (v14.x or later)
- NPM, Yarn, or PNPM package managers
- A FAL-AI API key (which you can obtain from fal.ai)
After cloning the repository, install the dependencies by running:
```bash
npm install
# or
yarn install
# or
pnpm install
# or
bun install
```
## Environment Variables

Create a `.env.local` file in the root of the project and set the following variable with your own FAL-AI API key:

```bash
FAL_KEY={{your fal.ai API key here}}
```
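Next.js loads variables from `.env.local` into `process.env` on the server, so an API route can read the key there. A minimal sketch (the `getFalKey` helper is illustrative, not part of this repo):

```typescript
// Illustrative helper (not part of this repo): read the FAL-AI key that
// Next.js loads from .env.local into process.env on the server.
export function getFalKey(): string {
  const key = process.env.FAL_KEY;
  if (!key) {
    // Fail fast so a missing key is noticed before any FAL-AI call.
    throw new Error("FAL_KEY is not set - add it to .env.local");
  }
  return key;
}
```

Because the key is read server-side only, it is never exposed to the browser.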
You can obtain your API key by signing up at FAL-AI.
## Running the Development Server

Once you've set up your API key, run the development server with one of the following commands:
```bash
npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev
```
Open http://localhost:3000 with your browser to see the web UI in action.
## Using the Application

The web UI allows you to input a prompt to generate images. You can tweak additional parameters such as image size, number of inference steps, guidance scale, number of images, and more.
To generate an image:
- Enter a description in the **Prompt** field.
- Adjust the image settings (optional).
- Click **Try this prompt →** to generate the image.
The generated image will be displayed in the central panel once the process is complete.
You can select from different models to fine-tune your image generation results. The following models are available:
- `fal-ai/flux-lora`
- `fal-ai/flux/dev`
- `fal-ai/flux-realism`
This project allows you to input a custom LoRA (Low-Rank Adaptation) URL, enabling further customization of the AI output. You can set the LoRA URL in the web form, and it will be used during the image generation process.
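The exact shape of a LoRA entry depends on the endpoint, but as a hypothetical sketch, the URL from the web form could be turned into the `loras` array sent to the API like this (the `{ path, scale }` field names follow a common FAL-AI convention and are assumptions, as is the `buildLoras` helper):

```typescript
// Hypothetical sketch: turn a LoRA URL from the web form into the
// `loras` array passed along with the generation request.
interface LoraEntry {
  path: string;  // URL of the LoRA weights
  scale: number; // how strongly the LoRA influences the output
}

export function buildLoras(loraUrl: string, scale = 1): LoraEntry[] {
  // An empty form field means "no custom LoRA".
  const trimmed = loraUrl.trim();
  return trimmed === "" ? [] : [{ path: trimmed, scale }];
}
```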
The right sidebar displays the history of generated images. Clicking on any of these images will load them back into the main display area.
## API Endpoints

This project includes API endpoints for generating images and fetching previously generated images.
### `POST /api/generateImage`
This endpoint accepts the following parameters:

- `prompt`: The text prompt for the image.
- `image_size`: The size of the generated image (e.g., `landscape_4_3`).
- `num_inference_steps`: The number of steps for inference.
- `guidance_scale`: The guidance scale for the image generation.
- `num_images`: The number of images to generate.
- `enable_safety_checker`: Boolean to enable/disable the safety checker.
- `strength`: The strength of the generated image.
- `output_format`: Format of the generated image (e.g., `jpeg` or `png`).
- `sync_mode`: Whether to run in synchronous mode.
- `model`: The model used for generating the image.
- `loras`: Optional array of LoRAs to apply during image generation.
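A request to this endpoint might be built and sent from the browser as follows. This is a sketch using the parameter names documented above; the specific default values (steps, guidance scale, and so on) are illustrative assumptions, not the app's actual defaults:

```typescript
// Sketch: build a request body for POST /api/generateImage using the
// documented parameter names. The default values are illustrative only.
export function buildGenerateImageBody(prompt: string) {
  return {
    prompt,
    image_size: "landscape_4_3",
    num_inference_steps: 28,
    guidance_scale: 3.5,
    num_images: 1,
    enable_safety_checker: true,
    strength: 1,
    output_format: "jpeg",
    sync_mode: false,
    model: "fal-ai/flux-lora",
    loras: [], // optionally e.g. [{ path: "<lora url>", scale: 1 }]
  };
}

// Usage (in the app):
// const res = await fetch("/api/generateImage", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildGenerateImageBody("a cat in a spacesuit")),
// });
// const data = await res.json();
```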
### `GET /api/getGeneratedImages`

This endpoint returns a list of generated images from the `outputs` directory.
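Since `outputs` lives inside the `public` folder, Next.js serves each file statically at `/outputs/<name>`. A sketch of consuming this endpoint in the browser (the assumption that the response is a plain array of file names is mine, not confirmed by the source):

```typescript
// Sketch: map a generated-image file name to its public URL. Files in
// public/outputs are served by Next.js at /outputs/<name>.
export function toPublicUrl(fileName: string): string {
  return `/outputs/${encodeURIComponent(fileName)}`;
}

// Usage (in the app):
// const res = await fetch("/api/getGeneratedImages");
// const files: string[] = await res.json(); // assumed response shape
// const urls = files.map(toPublicUrl);
```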
## Learn More

To learn more about Next.js, take a look at the following resources:
- [Next.js Documentation](https://nextjs.org/docs) - Learn about Next.js features and API.
- [Learn Next.js](https://nextjs.org/learn) - An interactive Next.js tutorial.
You can check out [the Next.js GitHub repository](https://github.com/vercel/next.js) - your feedback and contributions are welcome!
## Deploy on Vercel

The easiest way to deploy your Next.js app is to use the [Vercel Platform](https://vercel.com) from the creators of Next.js.

Check out the [Next.js deployment documentation](https://nextjs.org/docs/deployment) for more details.
## Additional Notes

- Make sure to use your own FAL-AI API key, which can be set in the `.env.local` file as `FAL_KEY`.
- Ensure the `outputs` directory exists in the `public` folder for storing generated images.