• Knusper@feddit.de
    1 year ago

    Yeah, it’s tricky. If a project is more complex than a single dev team can handle, then managers feel forced to make decisions around complexity, because of that whole “software architecture resembles organization structure” thing.

    But they obviously don’t understand the whole complexity ahead of the project start, so making nuanced decisions is not possible. They’d have to arbitrarily pick an architecture sizing in between.
    And that makes it quite tempting to simply reach for the biggest hammer.

    • lysdexic@programming.devOPM
      1 year ago

      But they obviously don’t understand the whole complexity ahead of the project start, so making nuanced decisions is not possible. They’d have to arbitrarily pick an architecture sizing in between.

      The single most important decision is the external interfaces: define them up front and establish service-level agreements with clients.

      Once the external interface is set, managers have total control over what happens internally. If they choose to, they can repeatedly move back and forth peeling out and merging in microservices. That’s actually one of the main selling points of microservices: once an API gateway is in place, they are completely free to work independently on independent services and even their replacements.

      Microservices are first and foremost an organizational tool.
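
      The decoupling described above can be sketched as a tiny routing table: the external paths that clients see stay fixed, while the backends behind them can be peeled out, merged, or replaced freely. All names and URLs here are hypothetical, just to illustrate the idea:

      ```python
      # Minimal sketch of an API gateway's routing table (hypothetical
      # service names and URLs). Clients only ever see the external paths;
      # the services behind them can change without breaking the contract.

      routes = {
          "/orders": "http://orders-monolith:8080",  # current backend
          "/users": "http://users-service:8081",
      }

      def resolve(path: str) -> str:
          """Return the internal backend URL for an external path."""
          return routes[path]

      # Peeling a microservice out of the monolith is just a routing
      # change; the external contract ("/orders") is untouched.
      routes["/orders"] = "http://orders-service:8082"

      print(resolve("/orders"))
      ```

      The point is that as long as `/orders` keeps honoring its SLA, nobody outside the gateway can tell (or needs to care) whether one service or five are serving it.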

      • Knusper@feddit.de
        1 year ago

        Hmm, not sure I fully understand your point.
        Yes, having those API gateways in place gives you flexibility in organization, but they cost developer velocity and therefore also money.

        So, if you won’t need the flexibility of those API gateways, then you shouldn’t build/maintain them. But that is obviously easier said than done, since it requires predicting the future, which is why misjudgements happen…

  • BellyPurpledGerbil@sh.itjust.works
    1 year ago

    In any discussion or critique of microservices, some people very clearly have never worked on monoliths (or haven’t worked at enough companies to see the bad ones) and it shows. A poorly designed monolithic codebase is just as bad as a poorly designed microservice one. The problem isn’t the ideology.

    I began my career in the times when monoliths were king but virtualization was just around the corner. For sufficiently complex software systems, monoliths were such a nightmare. Merging new code into a monolith is like trying to merge lanes in a 16-wheeler: you’re not merging until everyone is out of the way, or you accidentally knock other people off the road. Testing took forever because who knows how many places one code change affected, or how many teams have collectively merged conflicting code changes one after the other. Heaven forbid trying to snapshot the database. Or back it up. The codebase was often so large that regression testing and deployments took 6–8 hours, which had to be done overnight because you can’t have the software go down during business hours. None of that is fixed decades later. It has the same problems. In fact the problems are worse in the modern age of reliability engineering. Nothing can ever go down.

    For single purpose apps like Instagram, I would be surprised if a monolithic structure was big enough to fail. It makes sense for smaller designs. For larger stuff, with multiple data sets and purposes like Amazon, it doesn’t make sense at all. Every company expects to grow as big as the top dogs, and so they do microservice architecture from the start to prevent the headaches of future migration work. But if they never get there it’s just service bloat.

    This whole architectural war is idiotic. Do what makes sense.