• itadakimasu@programming.dev
    1 year ago

    I want to love Nim, but during my trial run with it, it was a pain in the ass to get set up on my Mac in a way that I could use it easily, i.e. as a REPL for quick and dirty prototyping and learning.

      • itadakimasu@programming.dev
        1 year ago

        No issue setting up Nim itself (and I realize my complaint is not a fault of Nim itself), but it would be great if this companion Jupyter kernel for Nim worked on macOS… It hasn’t been maintained in a while: https://github.com/stisa/jupyternim/issues/38

        It would be very useful for my workflow as someone who wants to explore Nim for data-science-type tasks.

        Does anyone know of an alternative Nim Jupyter kernel?

        • janAkali@lemmy.one
          1 year ago

          Try inim.

          It’s not perfect, but it’s the closest thing to a REPL.
          I use it all the time for very small experiments.
          Installation should be as easy as:

          nimble install inim
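
          Once it’s installed, a quick session looks roughly like this (the prompt and echo behaviour are from memory, so treat the details as approximate):

          $ inim
          nim> import std/sequtils
          nim> let xs = @[1, 2, 3]
          nim> echo xs.mapIt(it * it)
          @[1, 4, 9]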
          
        • sotolf@programming.dev
          1 year ago

          I don’t think the Jupyter kernel is made by the core developers of Nim; it’s kind of weird to call the language a pain in the ass because one niche use case is hard to set up :)

        • insomniac_lemon@kbin.social
          1 year ago

          Not sure about macOS, and it’s not a REPL (sorry if that’s key), but have you tried a faster backend C compiler for prototyping? For example TCC (Bellard’s Tiny C Compiler), as long as performance isn’t critical and you don’t need multi-threading.
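
          As a sketch (assuming TCC is installed and on your PATH; the file name is just a placeholder), you can pick the backend per invocation or persist it in a project-local nim.cfg:

          nim c --cc:tcc -r scratch.nim

          # nim.cfg
          cc = tcc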

          Clang is also pretty good: in a simple benchmark of mine, Clang with --opt:size gave performance similar to plain GCC at roughly half the compile time. (nlvm is also a thing, but I have no idea how that would compare, even against Nim compiling via Clang.)
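
          The Clang variant of that invocation would look something like this (again, the file name is a placeholder):

          nim c --cc:clang --opt:size -r scratch.nim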

          With some setup, maybe some form of hot code reloading?
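
          For what it’s worth, Nim’s built-in hot code reloading needs a compiler switch plus an explicit reload call in the running program; a minimal sketch (file names made up, and I haven’t verified this on macOS) could look like:

          # logic.nim - the reloadable part (only imported modules get swapped, not the main module)
          proc tick*() =
            echo "tick: edit me and recompile while main is running"

          # main.nim
          import std/os
          import hotcodereloading
          import logic

          while true:
            tick()
            performCodeReload()   # picks up freshly compiled code, if any
            sleep(1000)

          Build and run with nim c --hotCodeReloading:on -r main.nim, then rebuild with the same flag from another terminal after editing logic.nim.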