This post lacks a bit of detail. I appreciate the code is there, but if you want to know how to use go-bindata with html/template you actually have to dig through that code and read the comments. I would've preferred a more extensive write-up.
Yes git is surprisingly elegant underneath, it's weird because the cli is pretty ugly and organic (lots of inconsistent options have grown over time), but the underpinnings are really elegant and simple without being simplistic.
The only problem I have with Go's regexp is that it's not very fast. In the slides you are dealing with the worst-case performance, but I guess the everyday performance suffers a little as a consequence of the Go approach (or perhaps it is just not very optimised yet?).
That, and also slightly larger footprint, as the machine exists in several states at the same time, and also the algorithm caches the NFA to convert it into a DFA [which I didn't include in my talk, cause it'll confuse beginners, who're the target audience here]
> Go is just too different to how I think: when I approach a programming problem, I first think about the types and abstractions that will be useful; I think about statically enforcing behaviour; [..]
This kind of says it all to me though. I don't "approach programming problems", or at least that's not how I see building something to solve a real problem. I won't spend days philosophising about how to "best" solve this particular subset of the problem, but instead get something out that works and improve as we go along and where it actually matters for those using the product. I'm sure that can be done in Haskell, but it doesn't seem that's how the author thinks about things.
> and I don’t worry about the cost of intermediary data structures, because that price is almost never paid in full.
Yet the author does, explicitly, worry about it. There's a whole paragraph about it and this tidbit in the conclusion. He relies on Haskell's lazy nature in order to not have to worry about it, but refuses to rely on Go's compiler and garbage collector for similar intelligence (i.e. it not mattering) and therefore decides not to split up a function, which he would've done in Haskell. And there's also no proof that the intermediary structure would've mattered at all to the end user. To me this comes right back down to "approaching programming" over building solutions.
Yes it is quite a partial approach. It is still quite good to hear what others coming from a completely different background think about Go though. If nothing else it's instructive about the differences between a functional and imperative mindset. Go is definitely in the imperative camp.
It does seem haskellers spend more time pontificating about the purity and expressiveness of the language than building real projects with it. There is a downside to pure abstractions and brevity - it can lead to code which is so algebraic as to be inscrutable to almost everyone.
This is a really interesting perspective on go from the outside, obviously some of the tradeoffs chosen in Go are distasteful for the author and this is a matter of taste but things like option types are an interesting solution to the more verbose go error handling for example. So while I'd never want go to become Haskell, there are some ideas here worth considering.
Really nice to see some examples of building everyday software with Go. I really like this video series and that he's trying to focus on real world use of Go in areas where you don't traditionally see it used. According to his twitter the next post will be on something completely different - using the new context package.
Something I noticed watching this video is that Francesc adds a lock to his struct by including it as a field named mu. My practice is typically to add a mutex to my struct as an embedded (unnamed) type. Stylistically, I wonder, is there a reason to prefer one way to the other?
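For concreteness, here's a minimal sketch of the two styles (the counter types are made up, not from the video):

```go
package main

import (
	"fmt"
	"sync"
)

// Named field style: the mutex stays an unexported implementation detail,
// so only this package's methods can lock it.
type namedCounter struct {
	mu sync.Mutex
	n  int
}

func (c *namedCounter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

// Embedded style: the struct itself gains Lock and Unlock methods.
type embeddedCounter struct {
	sync.Mutex
	n int
}

func (c *embeddedCounter) Inc() {
	c.Lock()
	defer c.Unlock()
	c.n++
}

func main() {
	a, b := &namedCounter{}, &embeddedCounter{}
	a.Inc()
	b.Inc()
	fmt.Println(a.n, b.n) // 1 1
}
```

One practical difference I'm aware of is that embedding promotes Lock/Unlock into the struct's method set, so callers outside the package can lock it too - which may or may not be what you want.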
This is a nice article but doesn't mention flame graphs, which are a great visual way of seeing what is going on when you're trying to work out where you spend your time. Here's an example: https://golangnews.com/stories/675
It's exciting to see the recent progress that's been demonstrated using Go for games and 3D graphics. One needs look no further than the menagerie of JS and Python libraries to see that there is a market for software tools that let people write games without the development complexities of C/C++, even in spite of the substantial performance penalties those invoke. With Go's speed and ease of development, I'm optimistic that we could see a lot of growth in Go adoption in this area.
Yep, I'll definitely be interested to see how Go grows in sectors like gaming, which are traditionally dominated by C++. Rust might also be more appealing to game devs though, as it is more familiar for C++ developers. Most game software is still in C++.
Would it make sense just to load all the translations into memory? Seems weird that some are kept in memory in a cache, and others not. Also, that cache needs mutex protection as it'll be called from several goroutines.
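Something like this is roughly what I have in mind for the cache (the names here are made up, not the library's actual API):

```go
package translations

import "sync"

// cache is a hypothetical sketch of a concurrency-safe translation cache;
// an RWMutex lets many readers proceed at once while writes are exclusive.
type cache struct {
	mu sync.RWMutex
	m  map[string]string
}

func (c *cache) get(key string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.m[key]
	return v, ok
}

func (c *cache) set(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[key] = value
}
```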
I think what would be really useful here is to add a helper func for standard Go templates that the user can use to translate strings in their templates - that's likely to be the biggest use of this, and you're not going to want to put in calls to your handlers for every string in a template that might be translated, nor is there really any need.
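Roughly what I mean, as a sketch - the T function here is a stand-in for whatever lookup the library actually does:

```go
package main

import (
	"html/template"
	"os"
)

// T is a stand-in for whatever translation lookup the library provides;
// the name and signature are hypothetical.
func T(lang, key string) string {
	translations := map[string]map[string]string{
		"fr": {"hello": "bonjour"},
	}
	if s, ok := translations[lang][key]; ok {
		return s
	}
	return key // fall back to the key itself
}

func main() {
	// Register the helper so templates can translate strings directly,
	// without a call back into the handlers for every string.
	tmpl := template.Must(template.New("page").
		Funcs(template.FuncMap{"T": T}).
		Parse(`<p>{{ T .Lang "hello" }}</p>`))

	if err := tmpl.Execute(os.Stdout, map[string]string{"Lang": "fr"}); err != nil {
		panic(err)
	}
}
```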
I've edited the title a little as it was very long.
The link is pointing to the old repo in the Mondough org (from before they renamed to Monzo). Here's the current and source repository: https://github.com/monzo/typhon.
Unfortunately the README is devoid of any information and the godoc doesn't have usage examples either. Or a rationale for how this is better/different compared to other RPC frameworks. Or some benchmarks...
Thanks, fixed the link. Looks like it is still under active development so I'm assuming it is what they're using. Re performance, as long as it is adequate there are probably more important concerns for this sort of framework.
It's not a huge problem, it's not very prominent. There's no way to delete posts at present - they can be downvoted/flagged - but I think with the caveats above attached it's fine. It might even do some good, as this pattern has spread to a lot of places, and it's good for people to be aware not to trust user input.
Yes, it is far easier to isolate dependencies in a Go app, as once it is built it has pretty much zero dependencies, and you can build it locally for another platform. Still, this sort of thing is useful if you have to use other platforms like Python for work too, and it's also written in Go, so worth a look I think.
I'm pretty happy with the standard library's testing package: an assertion is simply an if statement. Here they're really building a DSL, with things like if assert.For(t).ThatActual(err).IsNil().Passed() - I'd rather just test the actual conditions I want explicitly.
Adding a testing library just means now the reader has 2 things to understand - your tests, and the testing library syntax.
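For comparison, the plain style looks something like this (Parse here is just a made-up function under test):

```go
package mypkg

import "testing"

func TestParse(t *testing.T) {
	got, err := Parse("42") // Parse is hypothetical, standing in for the code under test
	if err != nil {
		t.Fatalf("Parse returned error: %v", err)
	}
	if got != 42 {
		t.Errorf("Parse(%q) = %d, want %d", "42", got, 42)
	}
}
```

The conditions being tested are spelled out as ordinary Go, so there's nothing else for the reader to learn.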
I'm not convinced I find the first example more readable though:
I'm reading this as "assert that the check that err is nil passed". It certainly reads like a sentence but `if err == nil` seems much more idiomatic, is shorter, to me is also easier to write and conveys the same meaning.
That said, 10/10 for having an actually useful README with examples I can work from, and get a feel for what this is supposed to do and how it's supposed to work.
This is pretty crazy. Perhaps no more crazy than running desktop operating systems on servers with huge amounts of software which just isn't required for the tiny go services running on them. CoreOS is another similar idea (for servers), and has auto-update.
My conclusion from all this seems to be that if you're running a likely scenario (hello world is unlikely to be a production service), the benchmarks of frameworks showing how fast they are are largely irrelevant. The second a database got involved the whole equation changed, and I'd venture a guess that if a similar scenario were tried in which it had to query a third service before it could respond, you'd get similar results to the database setup.
However if you can answer everything directly from memory the Python-Japronto combination might prove interesting. I'm also curious how much of that standard deviation is caused by Gin vs. something that Go is doing. Would be interesting to rewrite that in Chi and FastHTTP too and see if you'd get similar results.
As interesting as the article is though, their conclusions section baffles me a bit. It seems to be disconnected from the rest of the article.
1. Where does the conclusion come from that they should probably use Elixir for I/O-intensive services?
HTTP is I/O for one and their simulation suggests Go achieves better throughput. If we're talking reading/writing files then there doesn't seem to be any data presented to support that conclusion.
2. I'm also failing to understand their general conclusion:
> In most scenarios, you can achieve the exact same end result with Python as you do with Go. With Python, you’ll probably be able to ship far faster, and have more developers available.
Looking at their benchmarks, Go seems to achieve a much better result than Python in the real world. So which "most scenarios" are we talking about then? The claim about being able to ship faster is one I find doubtful, especially based on how annoyingly complicated it can be to deploy a Python app because of how dependencies aren't packaged up. If this is about how easy it is to write Python, a lot of that depends on who's writing it. Having more developers available is possible, but some kind of metric to back that up would be nice.
The first tests looked interesting, but I have suspicions about the test which takes out most of the work. They say "we parse the request params and return a fixed number." That sounds like they're returning just an arbitrary number (unrelated to the params). If the compiler notices no real work is being done, it might in some circumstances just remove the code entirely, so that you're benchmarking nothing - or it might not. Real services tend to need to do lots of work which is not reflected in benchmarks - auth, logging, dbs etc.
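That risk applies more to micro-benchmarks than to an end-to-end HTTP test, but for illustration, the usual guard in a Go benchmark is to keep the result somewhere the compiler can't prove is unused (doWork here is a made-up stand-in):

```go
package mypkg

import "testing"

// doWork stands in for whatever is actually being measured.
func doWork(n int) int { return n * 2 }

// Package-level sink so the compiler can't discard the result.
var sink int

func BenchmarkDoWork(b *testing.B) {
	for i := 0; i < b.N; i++ {
		// Without storing the result, the call could in principle be
		// optimised away and you'd be timing an empty loop.
		sink = doWork(i)
	}
}
```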
As usual benchmarks are incredibly hard to do fairly and very hard to interpret correctly, so they should at most be one datapoint when making decisions between languages/software, and should also be considered in the context of the actual performance required, the existing team, experience etc. They are sometimes good for eliminating options though if they are nowhere near performance requirements. I did find it interesting to compare the different graphs of response time, including outliers (people often ignore the worst case response times).
This was an interesting titbit, there's a need for more and better documentation and tutorials:
*"When asked about the biggest challenges to their own personal use of Go, users mentioned many of the technical changes suggested in the previous question. The most common themes in the non-technical challenges were convincing others to use Go and communicating the value of Go to others, including management. Another common theme was learning Go or helping others learn, including finding documentation like getting-started walkthroughs, tutorials, examples, and best practices....The documentation is not clear enough for beginners. It needs more examples and often assumes experience with other languages and various computer science topics.”*
This is confirmed by my recent experience. As a relatively experienced Go programmer in certain domains (lower-level networking; simulation and scientific computing) I recently started a project in a different area (interfacing with a web API) and was surprised at how difficult it was to find simple--and reliably correct--examples for things like the proper way to construct an HTTP PUT request or how to use the oauth2 library.
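For anyone else hunting, a rough sketch of the PUT case (the URL and payload are placeholders) - the non-obvious part for me was that you go through http.NewRequest, since there's no http.Put convenience helper:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	body := strings.NewReader(`{"name":"example"}`)

	// Build the PUT request explicitly with http.NewRequest,
	// then send it with a client.
	req, err := http.NewRequest(http.MethodPut, "https://api.example.com/items/1", body)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```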
Yes definitely a lack of good trustworthy resources for specific topics. Perhaps this is something the Go team could look at building out on their website separately from the docs. There is so much in the standard library and I think a lot of it gets missed by people learning Go.
There is some great data in here, I love the detailed presentation of results rather than just presenting a summary. Some interesting questions and responses here:
What changes would most improve the Go documentation?
Collectively, examples get 26% and docs get 6% - it would be nice to see the data cleaned up a bit here, as there are multiple examples entries.
What Go libraries do you need that aren't available today?
Interesting to see such demand for UI support and mobile support - these are the two hardest parts to crack, given that cross platform UIs typically please no-one. I'd like to see the go team take a radically different approach here playing to Go's strengths - define some framework for presenting web UIs with a Go backend on Android and iOS.
What changes would improve Go most?
The G word makes an entrance as the top request. I'd prefer if they took a step back and prepared a Go 2.0 which removed some of the inevitable cruft (comments as directives, struct tags), and rethought some other features - mostly taking things out rather than adding things, though it would be nice also to see a solution to generics/better containers.
Everyone is reasonably happy with editing it seems.
And finally golangnews is somewhere in the middle as a news source for gophers - thanks to all the readers!
Splitting the write-in responses into individual keywords is a little confusing - it's weird, for example, to see words like 'great' without seeing the context they were used in (e.g. docs need a great deal of work, docs are great already, etc.). It'd be great to see the data available from this (in anonymised form if necessary) so that others can also look at the responses.
> I'd like to see the go team take a radically different approach here playing to Go's strengths - define some framework for presenting web UIs with a Go backend on Android and iOS.
That would be cool, essentially server side rendering of the UI. A number of big players do this, Facebook for one but Spotify too, they've even open sourced the associated iOS framework: https://github.com/spotify/HubFramework. There's also Jasonette, http://jasonette.com, which is actually a lot of fun to experiment with. I've written a few iOS apps for home automation things thanks to Jasonette.
> I'd prefer if they took a step back and prepared a Go 2.0 which removed some of the inevitable cruft (comments as directives, struct tags), and rethought some other features - mostly taking things out rather than adding things, though it would be nice also to see a solution to generics/better containers.
What is it you don't like about struct tags? I've found those to be rather useful when converting between data representations. The comments as directives I assume you mean things like the build tags and generate directives? I like what they enable you to do but I'm not a fan of having those in comments indeed.
You can use limited html like blockquote in comments, WYSIWYG editor coming soon...
> What is it you don't like about struct tags?
IMO they mix up separate concerns, use an ad-hoc syntax and end up a mess in complex apps, it's like a whole other language stuffed into a string and defined by random lib authors. A field might have json, xml, db tags all in one string.
They're pragmatic in a way, and in small doses useful, but I just think they're adding unstructured informal data attributes to structs and fields which is better specified formally in code. Take for example an xml field, it might over time have name,attr,chardata,innerxml,omitempty,cdata attributes added (and more in future), and then on top of that is piled any json representation and the db representation and whatever other dsl library authors choose to require in future.
I would prefer instead that types define functions to Marshal/Unmarshal, and that be the standard way to get data in and out of the type, it's clearer and extensible - when a new data format is required just write another function. So I avoid struct tags myself and use this approach for database mapping, and am much happier with it. By the time you're adding things like omitempty for xml it's just so much better if the code itself specifies this with a simple if condition in a function mapping from struct to xml.
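As a rough sketch of what I mean (Page is a made-up type), the representation lives in code rather than in tags, and things like omitempty become plain if statements:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Page is a hypothetical model - note no struct tags.
type Page struct {
	ID        int
	Name      string
	UpdatedAt time.Time
}

// MarshalJSON defines the JSON representation explicitly; adding another
// format later just means writing another function.
func (p Page) MarshalJSON() ([]byte, error) {
	m := map[string]interface{}{
		"id":   p.ID,
		"name": p.Name,
	}
	// The equivalent of omitempty, but as an ordinary condition.
	if !p.UpdatedAt.IsZero() {
		m["updated_at"] = p.UpdatedAt.UTC().Format(time.RFC3339)
	}
	return json.Marshal(m)
}

func main() {
	b, err := json.Marshal(Page{ID: 1, Name: "home"})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // {"id":1,"name":"home"}
}
```

Database mapping works the same way for me - a function on the type that returns the columns and values, rather than db tags.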
> The comments as directives I assume you mean things like the build tags and generate directives? I like what they enable you to do but I'm not a fan of having those in comments indeed.
Yes I think those are just in comments because they didn't want to change the language spec, but really something like generate should be clearly specified in the language rather than hidden in magic comments.
> You can use limited html like blockquote in comments, WYSIWYG editor coming soon...
Ha, I had no idea. I've tried a few Markdown inspired things that usually work. Is there a doc somewhere that details what can and can't be used?
> So I avoid struct tags myself and use this approach for database mapping, and am much happier with it.
Do you have an example of what that would end up looking like? I like the idea but it also sounds that you'd end up maintaining a lot more code if you don't use struct tags. How do you deal with JSON in this case? Do you keep using struct tags for those? And also, how do you deal with naming/remapping the fields in XML/JSON or db to the struct members?
I like the idea in general and especially once you mix in XML this gets unpleasant in struct tags. Just trying to get a feel for what this would look like and how it would work :).
I've always disliked gopath, not really sure why it exists?
I know they wanted somewhere to put the code to make the download tools easier, and I do really like go get, but per-project vendoring of dependencies is just so much more sane. Perhaps it's because Google has one huge mono-repo, so it just seemed best to replicate that elsewhere too? I'm not even clear how this works inside Google though if they have Python projects and Go projects - who gets to choose the folder structure where the code lives?
Yes, there is a function in the stdlib to do just this, so it's better to use that. In particular, I wouldn't do this: path := r.URL.Path[1:] - taking the path from user input without cleaning it at all is a bit dangerous. I think they have guards now in http.FileServer, but I would clean it as soon as you receive it to prevent directory traversal, whatever method you're using. So I think the link you have provided is a far better starting point.
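A minimal sketch of the kind of cleaning I mean, before the path gets anywhere near the filesystem (the "static" directory is just a placeholder):

```go
package main

import (
	"net/http"
	"path"
	"path/filepath"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Prefix with "/" and clean, so sequences like "../../etc/passwd"
	// can't escape the directory we intend to serve from.
	name := path.Clean("/" + r.URL.Path)
	fp := filepath.Join("static", filepath.FromSlash(name))
	http.ServeFile(w, r, fp)
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}
```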
I tend not to downvote links like this though which are well-meaning but a little flawed, so have just left it at 1 - would rather keep downvotes for actual bad behaviour and spam.
I really like this set of challenges, they are just the right level of difficulty to exercise your go knowledge, without being so involved that you have to spend hours coming up with a solution. My only criticism would be I'd like to see several people contribute solutions before a deadline (of say a week), and then people can compare different approaches to the same problem.
This is a nice method when your unit testable stuff shares a package with things that use external resources. Another way to handle it is to use separate packages for the two things (e.g. model (unit) and handlers (integration)), but that's not always practical.
Or if you see a bug yes please log an issue on github. I have some changes queued up which haven't hit github or the website yet (need time to test them), so will be transitioning to a slightly different set of code soon, but contributions in the form of issues or bug fixes are welcome.
This is one of those days when I'm glad I don't depend on AWS (or GCS) at work. At a large enough scale it can be hard to avoid using other people's computers, but many websites really don't need to be distributed across timezones and fault tolerant - except against the faults introduced by your cloud provider. I have the same feelings about Cloudflare, which is a brilliant service and has some fascinating people working there, but centralises a huge number of sites behind one point of failure. Then again, almost everything on the internet relies on other people's infrastructure at this point - this site depends on Digital Ocean, and on the CoreOS servers not being hacked (updates are applied automatically). So nobody can really gloat about this event and feel safe about their own infrastructure. Even the most reliable services go down.
Minio is a nice alternative to AWS though, and you can easily self-host. My biggest concern with services like AWS is not the potential for downtime but the lock-in. If you run your own code you can easily move it anywhere, if you depend on AWS semantics and silo your data in Amazon services, it's very difficult to move on if you ever need to.
I totally agree with you on this. This is the reason I posted this - we should protect new devs and organisations. We've all seen lately (even more so in the last year) that new developers are so enthusiastic about 'serverless' that they don't think it through clearly. Yesterday (or today, depending on where you are xD) was a good lesson for all of us.
Measuring performance is hard. Unfortunately it's easy to benchmark the wrong things, or benchmark tiny parts of your program rather than everything. That's the danger with using individual benchmarks which don't measure the time taken to perform a real-world action. Some really useful tools and approaches I've found for this are:
Benchmarking, as in this article - most useful for comparing the impact of a change on an isolated bit of code.
Measure the absolute impact, not % increases, e.g. a 100% increase in response times from 0.0001ms to 0.0002ms is unlikely to be important.
Profiling - check the call graph to find out which functions are called most often and take longest, particularly in hot paths, then work on those.
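For the profiling part, the lowest-friction route I know of is the net/http/pprof endpoint; something like this sketch, then point go tool pprof at it:

```go
package main

import (
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/ handlers on the default mux
)

func main() {
	// Run the profiling endpoint alongside the app, then inspect the
	// call graph with e.g.:
	//   go tool pprof http://localhost:6060/debug/pprof/profile
	go http.ListenAndServe("localhost:6060", nil)

	// ... the rest of the application would run here ...
	select {}
}
```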
Yes that's a good idea, would be nice to see which packages use semver and encourage everyone to do so. I think probably having a standard package management tool (which is in the works) will push everyone to do so though.