
Seems like the output is not coherent #2

Open

scythe000 opened this issue Dec 7, 2024 · 11 comments

scythe000 commented Dec 7, 2024

Hi!
After the recent fix, the output is working, but the chapters I'm getting all seem to be self-contained, not following a logical progression or staying coherent with one another.

Example:

Style: Cyberpunk
Audience: Adult
10 chapters
Themes:
Dystopia
Horror
Fate vs Free Will
Identity
Power and Corruption
Survival
Lost love

Special requirements:
Write in the style of William Gibson and Terry Goodkind.
Ensure there's lots of interesting dialogue.
Don't over-Japanize the characters or locations, but it's OK for there to be a Japanese influence.
The Main character's name is Kev.
The story should be hard cyberpunk, but not too sci-fi.
Include lots of cybernetics and robotic body modifications.
Futuristic film noir like in Blade Runner.
Brilliant quotable lines.

Option 2: Fast-paced and dynamic

manuscript.txt

@KazKozDev
Owner

Hi! Let me try with the same input.

@KazKozDev
Owner

KazKozDev commented Dec 7, 2024

I've simplified the generation logic. Here's the result with your input:

test.txt

@KazKozDev
Owner

@scythe000 Hey mate! I've created a new version of NovelGenerator 2.0. I'm planning to upload it to GitHub this week. I used your input for testing and ran it on two models - Gemma2:9b for the plot outline and Llama3.3:70b-instruct-q2_K for the actual book generation. I'd love to hear your thoughts on the generated novel test I'm attaching here. I'm still working on some final tweaks before uploading the new version to GitHub.

The city of Neo Tokyo.txt

@scythe000
Author

@KazKozDev Hi, I was just thinking about you! I was wondering how hard it would be to make a version for nonfiction, like a technical guide or a business-advice type book.

Anyway, I read the story and it looks good! Seems coherent.

@scythe000
Author

@KazKozDev How're things coming? :-)

@KazKozDev
Owner

KazKozDev commented Jan 22, 2025

@scythe000 Hi there! I have uploaded a draft of the code. Now I'm trying to run a test generation of your story one more time, hoping I saved everything correctly : ) Here are the file names in order of use:

01_plot_builder.py
Creates the plot structure and chapter plans

02_chapter_generator.py
Generates chapter text based on chapter plans

03_story_enhancer.py
Improves and finalizes the generated text

All the other files are out of date, including requirements.txt and README.md. I will delete or update them soon.
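The three stages above can be chained with a small driver script. The file names come from this comment; running them with no arguments, in this order, under the same interpreter is an assumption, so treat this as a sketch rather than part of NovelGenerator itself:

```python
import subprocess
import sys

# Pipeline stages as listed above. Running each with no arguments is
# an assumption; check each script's actual usage.
PIPELINE = [
    "01_plot_builder.py",       # creates the plot structure and chapter plans
    "02_chapter_generator.py",  # generates chapter text from the chapter plans
    "03_story_enhancer.py",     # improves and finalizes the generated text
]

def run_pipeline(stages=PIPELINE):
    """Run each stage in order, stopping at the first failure."""
    for script in stages:
        print(f"Running {script}...")
        result = subprocess.run([sys.executable, script])
        if result.returncode != 0:
            raise RuntimeError(f"{script} exited with code {result.returncode}")

if __name__ == "__main__":
    run_pipeline()
```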

@KazKozDev
Owner

KazKozDev commented Jan 22, 2025

@scythe000 Look, this is the first generation; I'm about to start the second phase: test_01_plot_builder.txt. I've only listed 3 chapters so as not to tie up the computer for too long. I'm using llama3.3:70b-instruct-q2_K as my model; that's the largest model I've been able to run on my Mac Studio. Smaller models work too, but the text turns out pretty boring.

@scythe000
Author

scythe000 commented Jan 22, 2025

Looking very cool! Is there a particular way we should format the text for the plot builder input? Or just do it in the same way as you have the text in the file above in the "Book concept" section?

@KazKozDev
Owner

Tnx @scythe000 : ) There is no rigid format here; you can write in free form. The main thing is to keep it no longer than the example, since the script won't pass along any more than that. By the way, part two is finished. Now I'm looking at how I can improve the third part a bit.

test_02_chapter_generator.txt

@scythe000
Author

Seems to be working well so far! I have an NVIDIA 3090 with 24 GB of VRAM, and I'm using the same model as you. It seems really slow though; I think Ollama isn't loading enough of the model into VRAM, as I'm only seeing around 15% GPU usage.
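Low GPU usage like that usually means Ollama could only offload part of the model to VRAM (a 70B model at q2_K is roughly 26 GB, which doesn't fit in 24 GB, so some layers run on the CPU). `ollama ps` reports the CPU/GPU split in its PROCESSOR column; below is a small parser for that field. The column format is an assumption based on Ollama's CLI output, so treat it as a sketch:

```python
import re

def gpu_fraction(processor_field: str) -> float:
    """Parse the PROCESSOR column of `ollama ps` and return the GPU share.

    Assumed field formats (based on Ollama's CLI output):
      "100% GPU"         -> all layers on GPU
      "55%/45% CPU/GPU"  -> 45% of the model offloaded to GPU
      "100% CPU"         -> nothing on GPU
    """
    m = re.match(r"(\d+)%/(\d+)%\s*CPU/GPU", processor_field)
    if m:
        return int(m.group(2)) / 100.0
    m = re.match(r"(\d+)%\s*(GPU|CPU)", processor_field)
    if m:
        pct = int(m.group(1)) / 100.0
        return pct if m.group(2) == "GPU" else 0.0
    raise ValueError(f"unrecognized PROCESSOR field: {processor_field!r}")
```

If the GPU share is well below 1.0, a smaller quantization (or a smaller model) that fits entirely in VRAM will usually be much faster.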

@scythe000
Author

@KazKozDev I'm getting this:
```
Getting review from mistral-nemo:latest

Sending request to Ollama (mistral-nemo:latest)...
Error in Ollama request: 404 Client Error: Not Found for url: http://localhost:11434/api/generate

Error improving scene: 404 Client Error: Not Found for url: http://localhost:11434/api/generate
Scene 4 completed

Chapter improvement completed. Final length: 3577
```

I would watch it ask for mistral, then switch and say "Getting first improvement from gemma2:27b"
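A 404 from /api/generate usually means the requested model isn't pulled locally (e.g. `ollama pull mistral-nemo`). Ollama's /api/tags endpoint lists the installed models, so checking it first gives a clearer error. A minimal sketch, assuming the default endpoint; `pick_model` and `installed_models` are hypothetical helpers, not part of NovelGenerator, and the first-installed fallback policy is an assumption:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default endpoint

def pick_model(requested: str, installed: list) -> str:
    """Return `requested` if installed, otherwise fall back.

    Ollama answers 404 from /api/generate when the model name is
    unknown, so resolving the name up front avoids the cryptic error.
    """
    if requested in installed:
        return requested
    if installed:
        return installed[0]  # naive fallback: first installed model
    raise RuntimeError("no models installed; run `ollama pull <model>` first")

def installed_models(base_url: str = OLLAMA_URL) -> list:
    """Query /api/tags for the models available locally."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

if __name__ == "__main__":
    models = installed_models()
    print("Using:", pick_model("mistral-nemo:latest", models))
```

That would also explain the behavior you saw: the script fails on the missing mistral-nemo model, then continues with gemma2:27b.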
