• sosodev@lemmy.world · 11 months ago

    It sounds like the model is overfitting the training data. They say it scored 100% on the test set, which almost always indicates that the model has learned to ace its training data but will flop in the real world.

    I think we shouldn’t give this news article much weight. It’s just more overblown hype for the sake of clicks.

    • LostXOR@kbin.social · 11 months ago

      The article says they kept 15% of the data for testing, so it’s not overfitting. I’m still skeptical though.
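      For context, the distinction the two comments are arguing about can be shown with a toy sketch (hypothetical, not the study’s actual model or data): a pathological model that memorizes its training examples scores 100% there, but a 15% held-out split exposes it, because it can do no better than chance on inputs it has never seen.

      ```python
      import random

      # Toy labeled dataset: label is 1 if x is even, else 0.
      random.seed(0)
      data = [(x, x % 2) for x in range(1000)]
      random.shuffle(data)

      # Hold out 15% for testing, mirroring the split the article describes.
      split = int(len(data) * 0.85)
      train, test = data[:split], data[split:]

      # A memorizing "model": perfect recall on seen inputs, random guesses otherwise.
      lookup = {x: y for x, y in train}
      predict = lambda x: lookup.get(x, random.randint(0, 1))

      train_acc = sum(predict(x) == y for x, y in train) / len(train)
      test_acc = sum(predict(x) == y for x, y in test) / len(test)
      ```

      Here `train_acc` comes out at 1.0 while `test_acc` hovers around 0.5, which is why a properly held-out test set catches this kind of memorization. A caveat both comments skirt: a held-out split only rules out overfitting if the test data never influenced training or model selection.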