Still pretty new to local LLMs, and there’s been a lot of development since I dipped my toe in. Suffice it to say I’m fairly swamped and looking for guidance toward the right model for my use case.

I want to feed the model sourcebooks so I can ask it game-mechanic questions and get reasonably accurate answers (including page references). I tried this with privateGPT a month or two back, and it kinda worked, but it was slow and wonky. It seems like things are a bit cleaner now.
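
To be concrete about what I mean, the sketch below is roughly the pipeline I’m imagining: embed each page of a sourcebook, pull the pages closest to the question, and have a local model answer from those excerpts with page citations. The specific pieces here (pypdf, sentence-transformers, an Ollama server, the "mistral" model name) are just placeholder choices for illustration, not how privateGPT actually works internally.

```python
# Minimal retrieval sketch: one chunk per page so answers can carry page numbers.
# Assumptions: pypdf + sentence-transformers for retrieval, a local model served
# by Ollama at the default localhost endpoint. Placeholder choices, not privateGPT.
import numpy as np
import requests
from pypdf import PdfReader
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Extract text page by page and embed it (normalized so cosine = dot product).
reader = PdfReader("sourcebook.pdf")
pages = [(i + 1, p.extract_text() or "") for i, p in enumerate(reader.pages)]
page_vecs = embedder.encode([text for _, text in pages], normalize_embeddings=True)

def ask(question: str, top_k: int = 3) -> str:
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    # Rank pages by similarity to the question and keep the top few.
    best = np.argsort(page_vecs @ q_vec)[::-1][:top_k]
    context = "\n\n".join(f"[page {pages[i][0]}]\n{pages[i][1]}" for i in best)
    prompt = (
        "Answer the rules question using only the excerpts below, "
        "and cite the page numbers you used.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    # Hand the retrieved context to a local model behind Ollama.
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "mistral", "prompt": prompt, "stream": False},
        timeout=300,
    )
    return r.json()["response"]

print(ask("How does grappling work?"))
```

That’s the general shape; what I’m really asking is which current model/tooling handles this best.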

  • agamemnonymous@sh.itjust.works (OP) · 1 year ago

    I’m still on a 3060 Ti, but speed isn’t my biggest concern. I’m primarily focused on reasonably accurate “understanding” of the source material. I got pretty good results with GPT-4, but I feel like focusing my training data could help avoid irrelevant responses.