TheSoftDiamond/Kazushin

Kazushin

Kazushin, a fork of this project, is a Twitch chat bot that reads chat and generates text-to-speech responses using the OpenAI API and the Google Cloud API. It comes with profanity detection and more built in.
Explore the docs »

View Demo · Report Bug · Request Feature

Table of Contents
  1. Getting Started
  2. Usage
  3. Roadmap
  4. Contributing
  5. License
  6. Acknowledgments

Getting Started

For a more comprehensive guide, check out the documentation

Prerequisites

Install the prerequisites by running the following command in a command line:

  • pip
    pip install -r requirements.txt

Installation

  1. Clone the repo or fork it
    git clone https://github.com/TheSoftDiamond/Kazushin.git
  2. Populate the creds.py file with your info
  3. Adjust the settings.py file to your needs.
  4. Run main_usercontext.py
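The exact fields in creds.py depend on the version of the repo you cloned; as a rough sketch, it holds credentials along these lines. All names and values below are illustrative assumptions, not the repository's actual keys — fill in whatever fields the shipped creds.py defines:

```python
# creds.py -- illustrative sketch only; the variable names here are
# assumptions, use the keys defined in the repository's own creds.py.
TWITCH_OAUTH_TOKEN = "oauth:your_twitch_token"   # Twitch chat (IRC) OAuth token
TWITCH_CHANNEL = "your_channel_name"             # channel whose chat the bot reads
OPENAI_API_KEY = "sk-your-openai-key"            # OpenAI API key
GOOGLE_APPLICATION_CREDENTIALS = "service_account.json"  # Google Cloud TTS key file
```

Keep this file out of version control, since it contains secrets.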

(back to top)

Setting up Ollama

Make sure you have downloaded Ollama.

Creating the model

  1. Create a Modelfile in your project, pointing it at your model's GGUF file:

    FROM llama-2-7b.Q2_K.gguf
    

    You can download model data from sites such as HuggingFace

  2. Create the Ollama model from your Modelfile using PowerShell/Bash:

    ❯ ollama create llama2 -f Modelfile
    transferring model data 
    creating model layer 
    using already created layer sha256:a630f354771cf25496e079a49656730858712315cc71aee4adf9b97aceb251f8 
    writing layer sha256:9d07cddc325f2abd269514a29cb3165eac0b06accd018a1b4da9982d6b986647 
    writing manifest 
    success
  3. Serve the Ollama instance

    ❯ ollama serve
    time=2024-07-10T20:20:46.364-04:00 level=INFO source=images.go:710 msg="total blobs: 0"
    time=2024-07-10T20:20:46.364-04:00 level=INFO source=images.go:717 msg="total unused blobs removed: 0"
    time=2024-07-10T20:20:46.364-04:00 level=INFO source=routes.go:1021 msg="Listening on 127.0.0.1:11434 (version 0.1.28)"
    time=2024-07-10T20:20:46.364-04:00 level=INFO source=payload_common.go:107 msg="Extracting dynamic libraries..."
    time=2024-07-10T20:20:47.967-04:00 level=INFO source=payload_common.go:146 msg="Dynamic LLM libraries [rocm_v5 cpu_avx2 cpu rocm_v6 cpu_avx cuda_v11]"
    time=2024-07-10T20:20:47.967-04:00 level=INFO source=gpu.go:94 msg="Detecting GPU type"
    time=2024-07-10T20:20:47.967-04:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library libnvidia-ml.so"
    time=2024-07-10T20:20:47.980-04:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: []"
    time=2024-07-10T20:20:47.980-04:00 level=INFO source=gpu.go:265 msg="Searching for GPU management library librocm_smi64.so"
    time=2024-07-10T20:20:47.980-04:00 level=INFO source=gpu.go:311 msg="Discovered GPU libraries: []"
    time=2024-07-10T20:20:47.980-04:00 level=INFO source=cpu_common.go:11 msg="CPU has AVX2"
    time=2024-07-10T20:20:47.980-04:00 level=INFO source=routes.go:1044 msg="no GPU detected"

    This is a foreground process; to use it as a daemon, you will need to run it as a service.

    For example, on Linux systems: systemctl enable --now ollama. This starts Ollama immediately and enables it on boot.

  4. In settings.py, set localAI_ModelName to match your model name.

    ### Local AI SETTINGS ###
    # Model Name
    localAI_ModelName = "llama2"
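    To confirm that the name in localAI_ModelName matches a model your Ollama instance actually serves, you can query Ollama's local REST API. This check is a standalone sketch, not part of Kazushin itself, and it assumes the default endpoint of 127.0.0.1:11434:

```python
import json
import urllib.request

def ollama_model_names(base_url="http://127.0.0.1:11434"):
    """Return the model names known to a local Ollama server (GET /api/tags)."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

def model_available(model_name, names):
    """True if model_name is served. Ollama tags a model built from a
    Modelfile as "<name>:latest", so also compare on the bare name."""
    return any(n == model_name or n.split(":")[0] == model_name for n in names)
```

    For example, model_available("llama2", ollama_model_names()) should be True after the ollama create step above, since the created model is listed as llama2:latest.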

(back to top)

Features

  • Separate conversations and prompts per user
  • Profanity filter
  • Detect Cheers, keywords, and more
  • Have the bot speak out loud and/or post messages to chat
  • and many more: https://docs.kazush.in/en/install/features

Usage

See here

(back to top)

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

License

Distributed under the MIT License.

(back to top)

Acknowledgments

(back to top)