One of the most useful data structures in computer science is the hash table. Many hash table implementations exist with varying properties, but in general they offer fast lookups, adds, and deletes. Go provides a built-in map type that implements a hash table. A Go map is an unordered collection of key-value pairs: keys are unique within the map, while values may repeat.
This is a great summary of everything that is wrong with stack overflow.
I've noticed the same about Stack Overflow, Go questions in particular. They are regularly downvoted, often quite innocuous questions - it seems certain people spend their days downvoting most of the questions which are posted, which is discouraging if you're a newcomer. The tools and the culture are to blame in my opinion - there is too much emphasis on gardening and blocking requests, which encourages personality types more inclined to nitpick and disagree with others than to help them. Downvoting should carry a far higher cost IMO, and be restricted to getting spam and other undesirable content off the site (perhaps flagging is more useful than downvoting).
Building communities is hard, and what's required at the beginning (driving engagement at all costs) can lead to a toxic community in time, if the culture develops into a negative one where every nail that stands out is hammered down. This particular question has since been voted very high by well-meaning people, but the culture will remain after those day-visitors are gone. The meta section in particular breeds a certain culture of insiders who adhere to esoteric rules, which are long-divorced from their original intention (for example rules against asking about products, or asking for general advice).
Instead of the UI encouraging beginners to post questions with more hints (for example it could prompt if there is no code in the question, or no links), they are left to post what they think is a reasonable question and then be harangued by a small core of regulars who spend their time downvoting and closing questions.
Every time these kinds of topics get brought up on Meta, people get very defensive and shout “quality!”, as if you need to be a dick to maintain “quality”. It’s a false dichotomy: you can have quality and be nice, but there is a complete unwillingness to even discuss it.
Excelize is a library written in pure Go providing a set of functions that allow you to write to and read from XLSX files. It supports reading and writing XLSX files generated by Microsoft Excel™ 2007 and later, and can save a file without losing the original charts. This library requires Go version 1.8 or later.
This release makes the library more useful outside of the WebRTC space. Since we now have PSK and AES-CCM support you can use it to interface with many embedded/IoT devices. Before this release you'd have to rely on a library that would use CGO to wrap OpenSSL, but no more!
I really like this proposal. It is simple and minimal, and addresses the main complaint people have about error handling in Go (verbosity).
I have no problem with encouraging errors to bubble up, as there is a clear flow of control and errors must be handled at the appropriate place in the chain (usually one or two levels up).
The one thing that I'd maybe change is to leave working code alone and concentrate on the bit people actually don't like (the error handling boilerplate). So instead of a new try keyword, just have a new check built-in with similar semantics, which takes the error as an argument and, on a non-nil error, returns from the enclosing function with zero values and the error.
    f, err := os.Open(filename)
    check(err)
That would mean far fewer changes to code, things would read very much as they do now but a bit less verbose, and as check could be a built-in function, no new keyword is required.
Wow, impressive work. Thanks for posting this. For the curious there is more detail about this standard over here:
It seems to focus on small data, and can be trained for specific data, which is really interesting (and potentially useful for things like image or HTML compression specifically) - sounds similar to brotli in that sense:
To solve this situation, Zstd offers a training mode, which can be used to tune the algorithm for a selected type of data. Training Zstandard is achieved by providing it with a few samples (one file per sample). The result of this training is stored in a file called a "dictionary", which must be loaded before compression and decompression. Using this dictionary, the compression ratio achievable on small data improves dramatically.
How would you compare the two, brotli and zstandard? I see they have a table where zstd edges out brotli in speed, but brotli seems to have browser adoption.
An interesting perspective - I'm not sure I agree with some of these examples though, I think they run counter to the culture of Go, which is mostly based on simplicity. To take one example:
Wrap all primitives and Strings in classes
I don't think it's useful to wrap integers and strings in types, it's a level of indirection that's not required except in very extreme cases. Defining a type for productID would be a very odd thing to do in most cases I imagine, especially if it only has one or two methods. Why not have methods on the containing type instead which manipulate productID as required?
This comparison is a little thin. I'm not sure it's a compelling case for Go, or even if one could be made against these other languages - the advantages of Go as opposed to other languages are mostly down to the reduced possibilities, culture of simplicity and simple code rather than capability.
This looks really interesting - using Go to construct pipelines for processing biological data (DNA sequences, etc.), a fascinating field which is seeing huge transformations as technology is applied to it. Thanks for posting - are you using SciPipe at the moment?
We found Go to be an excellent choice for this type of tool, as we could make very good use of the built-in concurrency primitives, and also of course benefit from the general robustness, performance and the tooling around the language.
We have used SciPipe in our latest study at pharmb.io, on building machine learning models to predict hazards in drug molecules (see dx.doi.org/10.3389/fphar.2018.01256). I have since transitioned into industry for the moment, but plan on using SciPipe on some research projects of my own, and my colleagues at pharmbio are also interested in applying it to some upcoming projects. Hoping to see more people play with / adopt it. :)
Btw, I should also mention that although SciPipe was created out of needs in bioinformatics and cheminformatics, I have found it to be a handy tool for tasks that are very general in terms of data processing, especially since it allows an easy way to mix and match various GNU *nix commandline utilities with components written in Go. One example of this is how we wrote a small pipeline to download, unzip and parse a somewhat large XML dataset, using commandline tools combined with Go code for the actual XML parsing (see the code example at the bottom of this post: bionics.it/posts/parsing-drugbank-xml-or-any-large-xml-file-in-streaming-mode-in-go ).
This is a really interesting article, as it points out some of the differences between Java and Go - a port of a large codebase really shows up where the languages diverge, though of course it will also lead to writing Java in Go to some extent. It would be interesting to know the reasons for the port: did they want to replace the Java version, or will it sit alongside it?
It shows how important testing is for being confident that a port like this is working, and how it can help guide you through a big project.
The query builder was interesting as I've taken a different approach to this. I like the chaining approach to building queries, but it's perfectly possible to do this while returning an error for the functions that matter (the ones which fetch the data). You just have to separate out the query building (which returns a query) from the error returning functions (which typically return results, error). Then the stateful error is not necessary.
Interesting note on getters and setters - I've found this refreshing coming from other languages where this is the norm (Java, Ruby, Obj-C) - it's more a cultural thing than a requirement, and I can count on one hand the number of times I've truly needed a getter or setter, in which case I can make the field private and write one. Getters and Setters seem like unnecessary encapsulation to me given the amount of ceremony required.
Function overloading - I can't say I miss this much; if functions become too verbose it's usually a sign there is something wrong and you're trying to do too much with too many variables. Another approach you can use when passing pointers rather than values is to allow a nil pointer.
Thanks for posting, I really like this series, even if I haven't used PHP much.
Often people have trouble adapting to a new language if they're really familiar with another one - the idioms, libraries and built-ins are all so different, and it makes you realise a language is more than just a syntax; it's the culture, the libraries available, etc.
Another area where people might want isset is parameters coming in to a web endpoint, so you might want to cover that in your examples.
Came here to submit it but have found this old story.
This is a really good summary and is similar to my reaction to the Go Generics proposal - nice to see it being added, but it feels a little obtuse in the current incarnation, and not as simple as other parts of the language. I'd rather see them explore extending interfaces rather than introducing an entirely new metalanguage to describe relations between types.
The article is not very good, as some of the points made are just wrong (the whole part about VMs, for example). It seems to be the result of a short internet search without much in the way of understanding.
I guess it is notes from someone without a great deal of experience with the language, so in that sense it is useful, but I agree it could do with a lot more depth, in particular if the author had built something significant in Go before writing it would help. I'm not sure it aspires to the level of an in-depth language critique though, it's more a light gloss on the language from the point of view of a beginner.
Re VMs, yes I'm not sure what that section is about, I guess they took the fact it doesn't use a VM then ran with that, but came to the wrong conclusions, as a quick comparison of Java programs with Go programs would tell them.
It's interesting that many of the cons can actually be considered pros if seen in the right light. For example, 'too simplistic' is not how I see Go at all. It is certainly radically simple, in that it deliberately jettisons a lot of features as harmful (foremost among them inheritance, which I think was the right call), but it also omits things which are seen as features in some other languages (like generics or tail calls). I would like to see them tidy it up for Go 2 though, and make it simpler rather than more complex. Simplistic is not the right word for the deliberate simplicity of Go IMO.
Seems to be a lot of drama around dep. I don't like the way that rsc handled that situation at all (he would have been better off not pretending to consult, and not trying to bring dep into the fold as a semi-official experiment). Calling it an official experiment and holding meetings with Sam made it seem like there would be significant input from the dep team, but that never happened. So lessons to learn for everyone, but given the structure of the Go team and Russ's place in it, it seems unlikely dep has a significant future.