• rufus@discuss.tchncs.de
    11 months ago

    I think it’s just a minor update to the instruct-tuned variant, with the same base model. But I haven’t tried it yet.

    What do you people think of their practice of just dropping new models? I thought adding the mystery around the new MoE model was alright. But I’m starting to find it mildly annoying that they always just publish the minimal amount of info and description.

  • noneabove1182@sh.itjust.works (OP)
      11 months ago

      It’s definitely a little odd… I’m glad they did any kind of official release for 0.2, but yeah, information is sorely lacking and it would be nice to have more, especially with how revolutionary the previous one was… Is this incremental? Is it a huge change? Is it just more fine-tuning? Did they start from scratch? We’ll never know 🤷‍♂️

      • rufus@discuss.tchncs.de
        11 months ago

        Hmmh. I’ve complained before about how Mistral’s use of the word ‘open’ differs quite a bit from my understanding. I mean, I wouldn’t care if their models weren’t that good and didn’t have the impact they have… But I’d really like to know what they updated, what kind of datasets went into Mistral, which languages it speaks… And they’re definitely being ‘odd’. All the other big players publish safety evaluations on things like biases, truthfulness, … Only Mistral likes to be opaque and not tell us anything.

        Relevant discussion (without answers): https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2/discussions/2