My conclusion from all this is that if you're running a likely scenario (hello world is unlikely to be a production service), framework benchmarks showing how fast they are, are largely irrelevant. The second a database got involved the whole equation changed, and I'd venture a guess that if a similar scenario were tried in which the service had to query a third service before it could respond, you'd get results similar to the database setup.
However, if you can answer everything directly from memory, the Python-Japronto combination might prove interesting. I'm also curious how much of that standard deviation is caused by Gin versus something Go itself is doing. It would be interesting to rewrite it in Chi and fasthttp too and see if you'd get similar results.
As interesting as the article is though, their conclusions section baffles me a bit. It seems to be disconnected from the rest of the article.
1. Where does the conclusion come from that they should probably use Elixir for I/O-intensive services?
HTTP is I/O, for one, and their simulation suggests Go achieves better throughput. If we're talking about reading/writing files, there doesn't seem to be any data presented to support that conclusion.
2. I'm also failing to understand their general conclusion:
> In most scenarios, you can achieve the exact same end result with Python as you do with Go. With Python, you’ll probably be able to ship far faster, and have more developers available.
Looking at their benchmarks, Go seems to achieve a much better result than Python in the real-world scenario. So which "most scenarios" are we talking about, then? The claim about being able to ship faster is one I find doubtful, especially given how annoyingly complicated it can be to deploy a Python app because dependencies aren't packaged up with it. If this is about how easy it is to write Python, a lot of that depends on who is writing it. Having more developers available is possible, but some kind of metric to back that up would be nice.