Seems like the output is not coherent #2
Comments
Hi! Let me try with the same input.
I've simplified the generation logic. Let's see what the result looks like with your input.
@scythe000 Hey mate! I've created a new version of NovelGenerator 2.0. I'm planning to upload it to GitHub this week. I used your input for testing and ran it on two models: Gemma2:9b for the plot outline and Llama3.3:70b-instruct-q2_K for the actual book generation. I'd love to hear your thoughts on the generated test novel I'm attaching here. I'm still working on some final tweaks before uploading the new version to GitHub.
@KazKozDev Hi, I was just thinking about you! I was thinking about how hard it would be to make a version for nonfiction. Like a technical guide, or a business-advice type book? Anyway, I read the story and it looks good! Seems coherent.
@KazKozDev How're things coming? :-)
@scythe000 Hi there! I have uploaded a draft of the code. Now I'm trying to run a test generation of your story one more time, hoping I saved everything correctly : ) Here are the file names in order of use:
01_plot_builder.py
02_chapter_generator.py
03_story_enhancer.py
All other files are out of date, including requirements.txt and README.md. I will delete or update them soon.
@scythe000 Look. This is the first generation. I'm about to start the second phase. test_01_plot_builder.txt I've only listed 3 chapters so as not to tie up the computer too long. I'm using llama3.3:70b-instruct-q2_K as my model; that's the largest model I've been able to run on my Mac Studio. Smaller models work too, but the text turns out pretty boring.
Looking very cool! Is there a particular way we should format the text for the plot builder input? Or just do it in the same way as you have the text in the file above in the "Book concept" section? |
Thanks @scythe000 : ) There is no rigid format here. You can write in free form. The main thing is to keep it no longer than the example; the script won't pass through more than that. By the way, part two is finished. Now I'm looking at how I can improve the third part a bit.
Seems to be working well so far! I have an NVIDIA 3090 with 24 GB VRAM, and I'm using the same model as you. It seems to be really slow though. I think Ollama isn't loading enough of the model into VRAM, as I'm only seeing around 15% GPU usage.
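If the scripts call Ollama's /api/generate endpoint directly, one knob worth trying for low GPU usage is the num_gpu option, which sets how many model layers are offloaded to the GPU. A minimal sketch of building such a request body; the helper name is illustrative, and whether the generator exposes this option is an assumption:

```python
def build_generate_payload(model, prompt, num_gpu=None):
    """Build a request body for Ollama's /api/generate endpoint.
    num_gpu (number of layers to offload to the GPU) is optional;
    Ollama picks its own default when it is omitted."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    if num_gpu is not None:
        payload["options"] = {"num_gpu": num_gpu}
    return payload

# Illustrative: ask Ollama to push more layers onto a 24 GB card.
body = build_generate_payload("llama3.3:70b-instruct-q2_K",
                              "Write chapter 1.", num_gpu=40)
```

Running `ollama ps` in a terminal also shows how much of the loaded model is resident on the GPU versus the CPU, which helps confirm whether offloading is the bottleneck.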
@KazKozDev I'm getting this:
Sending request to Ollama (mistral-nemo:latest)...
Error improving scene: 404 Client Error: Not Found for url: http://localhost:11434/api/generate
Chapter improvement completed. Final length: 3577
I would watch it ask for mistral, then switch and say "Getting first improvement from gemma2:27b".
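A 404 from Ollama's /api/generate usually means the requested model isn't pulled locally; the installed models can be listed with GET /api/tags (or `ollama list`). One way to fail fast is to check the name before sending the generation request. A sketch, where the helper name and the installed-model list are illustrative:

```python
def model_available(requested, installed):
    """Return True if `requested` matches an installed Ollama model.
    Ollama stores models as "name:tag"; a bare name implies ":latest"."""
    if ":" not in requested:
        requested += ":latest"
    return requested in installed

# Illustrative list, as if read from GET /api/tags.
installed = ["gemma2:27b", "llama3.3:70b-instruct-q2_K"]
model_available("mistral-nemo", installed)  # mistral-nemo:latest is not pulled
model_available("gemma2:27b", installed)    # this one is installed
```

If the check fails, `ollama pull mistral-nemo` would make the model available before the script falls back to another one.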
Hi!
After the recent fix, output is working, but the chapters I'm getting all seem to be self-contained: they don't follow a logical progression and aren't coherent with one another.
Example:
Style: Cyberpunk
Audience: Adult
10 chapters
Themes:
Dystopia
Horror
Fate vs Free Will
Identity
Power and Corruption
Survival
Lost love
Special requirements:
Write in the style of William Gibson and Terry Goodkind.
Ensure there's lots of interesting dialog.
Don't over-Japanize the characters or locations, but it's OK for there to be a Japanese influence.
The main character's name is Kev.
The story should be hard cyberpunk, but not too sci-fi.
Include lots of cybernetics and robotic body modifications.
Futuristic film noir like in Blade Runner.
Brilliant quotable lines.
Option 2: Fast-paced and dynamic
manuscript.txt