The MI25 is a great deal for hobbyists even if the power draw is high, but would it work with local models like Falcon or LLaMA?

I know it has a different memory bus width, but I'm unsure whether that would fundamentally cause problems for open-source models.

  • Atemu@lemmy.ml · 1 year ago

    Before buying a GPU for this, evaluate models using your CPU and system memory first. The only difference is speed: CPUs can be acceptably fast, and the responses are identical, so you can judge whether a model is worth accelerating before spending anything. A sketch of that follows.
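
    A minimal CPU-only sketch of the idea, assuming llama-cpp-python and a local GGUF model file (the path, thread count, and prompt are placeholders):

        # pip install llama-cpp-python
        from llama_cpp import Llama

        # n_gpu_layers=0 keeps inference entirely on the CPU;
        # n_threads controls how many cores llama.cpp may use.
        llm = Llama(
            model_path="./llama-2-7b.Q4_K_M.gguf",  # hypothetical model file
            n_gpu_layers=0,
            n_threads=8,
        )

        out = llm("Explain the AMD MI25 in one sentence.", max_tokens=64)
        print(out["choices"][0]["text"])

    Throughput will be lower than on a GPU, but the output is the same, which makes this a cheap way to test models before buying hardware.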

  • noneabove1182@sh.itjust.worksM · 1 year ago

    I guess it depends on what you mean by usable. People have had success with ROCm; it's not as solid as CUDA, of course, but it's been more than usable.
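
    A quick sanity check that a ROCm build of PyTorch actually sees the card (a sketch; it assumes you installed the ROCm wheel of torch):

        import torch

        # ROCm builds of PyTorch expose AMD GPUs through the
        # torch.cuda API surface, so the usual calls work unchanged.
        print(torch.cuda.is_available())   # True if a GPU is visible to HIP
        print(torch.version.hip)           # ROCm/HIP version string; None on CUDA builds
        if torch.cuda.is_available():
            print(torch.cuda.get_device_name(0))  # device name, e.g. the MI25 (Vega 10)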