This repository is part of a TFG project related to the evaluation of the capabilities of Large Language Models (LLMs). This application serves as the web interface for the backend found HERE.
The advancement of Large Language Models (LLMs) and the potential of their applications have led to a growing interest in their application to Model-Driven Software Engineering. In this field, LLMs have started to be used for the automatic creation of software models from a natural language description of the domain to be modelled. This has led to the identification of a research niche focused on the evaluation of the modelling capabilities of LLMs.
This project has two main contributions. Firstly, the capabilities of different LLMs in class diagram generation tasks are studied and analysed. This first study is exploratory and follows the same process used for ChatGPT in the research paper "On the assessment of generative AI in modeling tasks: an experience report with ChatGPT and UML" by Javier Cámara, Javier Troya, Lola Burgueño, and Antonio Vallecillo, published in 2023 in the journal Software and Systems Modeling. These exhaustive tests try to observe, analyse and compare the capabilities, strengths and weaknesses of some of the most popular language models today.
Secondly, building on the results obtained in the previous phase, this TFG defines a systematic and reproducible procedure for the future evaluation of the usefulness and applicability of language models.
To facilitate the application of this procedure, defined as a workflow, a web application has been developed that provides an accessible, intuitive and user-friendly interface. This application is designed to guide users through the procedure without the need to memorise each step, thus optimising the workflow.
- Node.js version must be v20.12 or newer
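You can check your installed version from a terminal:

```sh
node --version
```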
Copy `.env.example` and rename it to `.env.development` or another proper env file name (see the Astro documentation about `.env` files HERE), then modify the environment variables. An example of a valid environment file might be:
```
BACKEND_API_URL='http://localhost:8080'
PUBLIC_BACKEND_API_URL='http://localhost:8080'
```
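For example, on a Unix-like shell you can create the file from the template:

```sh
# Create the development env file from the provided template
cp .env.example .env.development
```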
Consideration: when using WSL as the development environment with the Backend hosted on the Windows host (not in Docker), make sure the `BACKEND_API_URL` environment variable is set to your local host domain, for example `DESKTOP-AAAA.local`. You can obtain it by running `echo "$(hostname).local"` in a WSL terminal.
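For instance, if `echo "$(hostname).local"` prints `DESKTOP-AAAA.local` (a hypothetical hostname), your `.env.development` could look like this, assuming the Backend listens on port 8080 as in the example above:

```
BACKEND_API_URL='http://DESKTOP-AAAA.local:8080'
PUBLIC_BACKEND_API_URL='http://localhost:8080'
```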
You can run development mode by using:

```sh
npm run dev
```
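If this is a fresh clone, install the dependencies first:

```sh
# Install the project dependencies before the first run
npm install
```

By default, the Astro dev server should then be reachable at http://localhost:4321.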
To run the project you only need Docker and the Backend installed.
Copy `.env.example` and rename it to `.env.production` (or another name with higher priority than `.env.development`), then modify the environment variables. An example of a valid environment file might be:
```
BACKEND_API_URL='http://host.docker.internal:8080'
PUBLIC_BACKEND_API_URL='http://localhost:8080'
```
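As in the development setup, on a Unix-like shell:

```sh
# Create the production env file from the provided template
cp .env.example .env.production
```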
Consideration: when the Backend is hosted locally, make sure the `BACKEND_API_URL` environment variable is set to `host.docker.internal` so that the server side of the Frontend can connect to the Backend.
After setting it up, you can use the provided Dockerfile to generate the needed image with the following command:

```sh
docker build -t hermesanalyzer/frontend .
```

This generates a Docker image with the tag `hermesanalyzer/frontend`.
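You can verify that the image was created by listing it:

```sh
docker image ls hermesanalyzer/frontend
```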
Now you can run it with the following command (use `-d` instead of `-it` to detach, and `--rm` to remove the container after it stops):

```sh
docker run --rm -it -p 4321:4321 hermesanalyzer/frontend
```
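For example, to run it detached and have the container removed automatically when it stops:

```sh
docker run -d --rm -p 4321:4321 hermesanalyzer/frontend
```

The application should then be reachable at http://localhost:4321, assuming the container listens on port 4321 as mapped above.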
This project uses Node.js v20.12 or newer and TypeScript.
The most important frameworks and dependencies:
- Astro - Agnostic web framework, zero JS by default.
- React - JavaScript framework for web interactivity.
- Tailwind - Utility-first CSS framework.
- Shadcn - Component library for building the interface.
See the list of contributors who participated in this project.
This project is licensed under the GPL-3.0 License - see the LICENSE file for details.