This really resonates with me - the vast majority of websites/APIs can easily run as one process on one server, and just upgrade the hardware if more traffic demands it. Often even the smallest server can handle a surprising amount of load when using Go.
The only thing I'd probably change is to use a separate database server/service, since keeping the data separate (where it can be shared between services) makes it trivial to scale up to n application servers later if that's ever required. Of course you can also start with the db on the same server and migrate it out later; that's not hard to do.
The other factor is that the business often changes dramatically as you hit scale, and the previous requirements become actively damaging, so you may well have to rethink your architecture anyway. The highest priority should be getting something working now that fits the requirements of the business now, not planning for a future that probably won't happen, or for a scale where you don't yet understand what the real problems will be.
However, there is an incentive for developers to over-complicate things: it gives them experience that makes them more valuable on the job market, and makes their work more interesting because they get to use the modern technologies everyone is talking about (Kubernetes etc). I suspect hidden motivations like that come into play a lot more than people admit when they're planning infrastructure. There are some notable exceptions to this of course: if you're building a bank, or a trading or advertising platform, you know that if it has any success at all it's going to scale, and that needs to be built into the initial design. But for 98% of the websites/servers in existence, one server could comfortably handle the load throughout the lifetime of the business.