How are you finding graceful shutdown? Have you been testing it on servers you run? I imagine it's only useful where you have at least a few instances running behind a load balancer, as otherwise you can't easily switch from one binary to another on the same port. In principle it sounds nice, though: it makes zero-downtime deploys easier, so that even in-flight requests don't notice at all.
Cross-posted from the HN discussion, here are some of the changes in this release:
The sort pkg now has a convenience function for sorting slices: instead of defining a special slice type just to sort on a given criterion, you can pass a comparison function.
HTTP/2 Push is now in the server, which is fun, but, like context, it might take a while for people to start using it in earnest. Likewise graceful shutdown. Is anyone experimenting with this yet?
Plugins are here, but on Linux only for now. Long term this will be interesting for things like server software that wants to let others compile plugins for it and distribute them separately; at present those have to be compiled into the main binary.
Performance: GC pause times are now down to 10-100 microseconds, and defer and cgo are also faster, so there are incremental improvements across the board.
GOPATH is optional now, but you still need a single path where all Go code is kept; perhaps eventually this requirement will go away. GOPATH/pkg is just a cache, GOPATH/bin is just an install location, and GOPATH/src could really be anywhere, so I'm not sure a special Go directory is required at all in the long term. If vendoring takes off, import paths could even be project-local.
Here is a slide deck from Dave Cheney with a rundown of all the changes.
Finally, as someone using Go for work, thanks to the Go team and everyone who contributed to this release. I really appreciate the incremental but significant changes in every release, and the stability of the core language.
This looks interesting; I wonder if erb was the inspiration? After having a look I do have a few hesitations about this library, mostly around the focus on speed as the selling point, but also because it removes contextual escaping.
I find the charts a little misleading here; since the focus is on speed, it's important to benchmark correctly. First, there is no context given as to what they measure beyond the link to other benchmarks (parsing plus evaluation, or just evaluation time?). Second, the results are in microseconds, an incredibly small amount of time in the context of web requests: responses typically take tens of milliseconds, so saving 0.03ms per request is not very useful, and speed simply is not a problem with the stdlib templates, so why focus on it? To be useful, the benchmarks should include parsing, say, 40 templates and exercise all the features of the parser (functions, methods etc.); at present it looks like a very trivial benchmark which is not measuring useful work. In a typical web request/response cycle the difference measured here simply wouldn't show up in the response times at all. So perhaps speed is not the right attribute to focus on?
There are shortcomings to the html/template library, though (loading templates is painful, and I don't find the new block system from 1.6 useful; I'd rather have layouts/partials), so it's good to see other libraries popping up that take a different approach, but I'd prefer to see them focus on that rather than yet more benchmarking. Hero looks very different from the stdlib templates and would have to be used everywhere in an app, so it's a big commitment to switch to it, and after switching you couldn't easily switch back. It also gets rid of the context-sensitive escaping that html/template does, which I think is really useful and eliminates a whole class of vulnerabilities most people aren't even aware of, and it puts a bit more logic in the templates. Personally I'm very happy that the stdlib templates have little logic in them and don't encourage you to add it.