Wow this looks pretty dense, but there’s some really good detail here about all the stuff that goes on under the hood when you listen and serve, which I think remains a mystery to most web developers. Can’t wait to see the video.
Some of these claims are a bit over the top. How could it possibly be faster than bare metal? All in memory, with caching when talking to external services, perhaps? It just seems a little too good to be true. Perhaps one day we will all write services/functions that talk to each other over a bus like this, though - it definitely feels to me like there is something in all this serverless hype.
They seem to have had a lot of problems caused by the monorepo - this article is mostly about ways they worked around that. It has led to entangled code, which is why they want to run tests across the entire repo at once.
It does sound to me like it'd be better for them to have a monorepo for the stuff which is truly shared (the doge and vendor directories), separate repos per project, and then some sort of indexing system to make sure everyone is aware of all the projects when starting a new one. This would stop devs importing from all over the tree, and let them have build times in seconds, not minutes. Yes, they'd have a little more duplication, but that's not necessarily a bad thing. They'd gain a huge amount of isolation, and ensure their services really were independent and not entangled.
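To make that split concrete, here's a rough sketch of the layout I mean - one shared repo pinned into each service as a dependency (all the repo and service names here are made up for illustration; only doge and vendor come from the article):

```shell
#!/bin/sh
# Sketch of the proposed split: shared code in one small repo,
# each service in its own repo. Names are hypothetical.
mkdir -p shared-monorepo/doge shared-monorepo/vendor
mkdir -p service-payments service-search

# Each service would then pin shared-monorepo at a known version,
# e.g. via a submodule (URL is a placeholder):
#   git -C service-payments submodule add https://example.com/shared-monorepo

ls -d shared-monorepo/* service-*
```

The point being that a service can only depend on what it explicitly pins, so a change in the shared repo can't silently ripple into every tool the way it can in one big tree.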
The teams folder seems like a bad idea, as it enforces bureaucracy which isn't really relevant to the code, and it means you have to know which team owns something in order to find it. The monorepo also lets people pick up really bad habits and depend on things they have no reason to depend on - a change in a small library somewhere, perfectly valid on its own, could have unintended cascading effects across all the tools, not necessarily caught by the current tests.
The tests and deployment situation sounds like a horrible one. This really isn't selling monorepos to me.