There isn't anything wrong with having another UUID library. But I'm not sure why anyone would use this over https://github.com/google/uuid, which was a port of pborman's https://github.com/pborman/uuid that Kubernetes was using (I'm guessing Google decided to own their own implementation instead of using Paul's).
One of the most useful data structures in computer science is the hash table. Many hash table implementations exist with varying properties, but in general, they offer fast lookups, adds, and deletes. Go provides a built-in map type that implements a hash table. A Go map is an unordered collection of key-value pairs, mapping keys to values. Map keys are unique within a map, while the values may or may not be the same.
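A minimal sketch of the operations described above (lookup, add, delete) using a built-in map; the key/value names are just for illustration:

```go
package main

import "fmt"

func main() {
	// A map literal: keys are unique within the map; values need not be.
	ages := map[string]int{"alice": 30, "bob": 25}

	ages["carol"] = 25    // add (or overwrite) a key
	delete(ages, "alice") // delete a key

	// The two-value form reports whether the key was present,
	// distinguishing a stored zero value from a missing key.
	if age, ok := ages["bob"]; ok {
		fmt.Println("bob is", age)
	}
	if _, ok := ages["alice"]; !ok {
		fmt.Println("alice was deleted")
	}
}
```

Note that iteration order over a map is deliberately randomized, which is what "unordered" means in practice.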
This is a great summary of everything that is wrong with stack overflow.
I've noticed the same about stack overflow, in particular Go questions. Quite innocuous questions are regularly downvoted - it seems certain people spend their days downvoting most of the questions which are posted, which is just discouraging if you're a newcomer. The tools and the culture are to blame in my opinion - there is too much emphasis on gardening and blocking requests, which encourages personality types more inclined to nitpick and disagree with others than to try to help them. Downvoting should carry a far higher cost IMO, and be restricted to getting spam and other undesirable content off the site (perhaps flagging is more useful than downvoting).
Building communities is hard, and what's required at the beginning (driving engagement at all costs) can lead to a toxic community in time, if the culture develops into a negative one where every nail that stands out is hammered down. This particular question has since been voted up highly by well-meaning people, but the culture will remain after those day-visitors are gone. The meta section in particular breeds a certain culture of insiders who adhere to esoteric rules, long since divorced from their original intention (for example rules against asking about products, or asking for general advice).
Instead of the UI encouraging beginners to post questions with more hints (for example, it could prompt if there is no code or no links in the question), they are left to post what they think is a reasonable question and then be harangued by a small core of regulars who spend their time downvoting and closing questions.
Every time these kinds of topics get brought up on Meta, people get very defensive and shout “quality!”, as if you need to be a dick to maintain “quality”. It’s a false dichotomy: you can have quality and be nice, but there is a complete unwillingness to even discuss it.
Excelize is a library written in pure Go providing a set of functions that allow you to write to and read from XLSX files. It supports reading and writing XLSX files generated by Microsoft Excel™ 2007 and later, and saving a file without losing the original charts in the XLSX. This library requires Go version 1.8 or later.
This release makes the library more useful outside of the WebRTC space. Since we now have PSK and AES-CCM support you can use it to interface with many embedded/IoT devices. Before this release you'd have to rely on a library that would use CGO to wrap OpenSSL, but no more!
I really like this proposal. It is simple, minimal and addresses the main complaint people have about error handling in Go (verbosity).
I have no problem with encouraging errors to bubble up, as there is a clear flow of control and errors must be handled at the appropriate place in the chain (usually one or two levels up).
The one thing I'd maybe change is to leave working code alone and concentrate on the bits people actually don't like (the error-handling boilerplate). So instead of a new try keyword, just have a new check keyword with similar semantics, which takes the error as an argument and, on a non-nil error, returns nil values and the error.
    f, err := os.Open(filename)
    check(err)
That would mean far fewer changes to code; things would read very much as they do now, just a bit less verbose, and if check were a builtin function rather than a keyword, no new keywords would be required.
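For context, here is the boilerplate being discussed, as it has to be written in Go today (the function name is just for illustration):

```go
package main

import (
	"fmt"
	"os"
)

// Today's idiom: every fallible call is followed by an explicit
// nil check and an early return - often three lines per call.
func firstByte(filename string) (byte, error) {
	f, err := os.Open(filename)
	if err != nil {
		return 0, err
	}
	defer f.Close()

	buf := make([]byte, 1)
	if _, err := f.Read(buf); err != nil {
		return 0, err
	}
	return buf[0], nil
}

func main() {
	if _, err := firstByte("no-such-file"); err != nil {
		fmt.Println("error bubbled up:", err)
	}
}
```

Under a check-style proposal, each `if err != nil { return ... }` block would collapse to a single line, while the control flow stays exactly the same.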
Wow, impressive work. Thanks for posting this. For the curious there is more detail about this standard over here:
It seems to focus on small data, and can be trained for specific data, which is really interesting (and potentially useful for things like image or HTML compression specifically) - sounds similar to brotli in that sense:
To solve this situation, Zstd offers a training mode, which can be used to tune the algorithm for a selected type of data. Training Zstandard is achieved by providing it with a few samples (one file per sample). The result of this training is stored in a file called a "dictionary", which must be loaded before compression and decompression. Using this dictionary, the compression ratio achievable on small data improves dramatically.
How would you compare the two, brotli and zstandard? I see they have a table where zstd edges out brotli in speed, but brotli seems to have browser adoption.
An interesting perspective - I'm not sure I agree with some of these examples though, I think they run counter to the culture of Go, which is mostly based on simplicity. To take one example:
Wrap all primitives and Strings in classes
I don't think it's useful to wrap integers and strings in types; it's a level of indirection that's not required except in very extreme cases. Defining a type for productID would be a very odd thing to do in most cases I imagine, especially if it only has one or two methods. Why not have methods on the containing type instead, which manipulate productID as required?
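A minimal sketch of the alternative being suggested, with hypothetical names (Product, SKU): the ID stays a plain integer, and behaviour lives on the type that owns it rather than on a wrapped ProductID type:

```go
package main

import "fmt"

// Product keeps its ID as a plain int64 - no wrapper type.
type Product struct {
	ID   int64
	Name string
}

// SKU formats the product's ID; the method lives on the
// containing type, not on a dedicated ProductID type.
func (p Product) SKU() string {
	return fmt.Sprintf("SKU-%06d", p.ID)
}

func main() {
	p := Product{ID: 42, Name: "widget"}
	fmt.Println(p.SKU())
}
```

The wrapper-type approach does buy some compile-time safety (you can't pass a userID where a productID is expected), so it's a trade-off between that and the extra indirection.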
This comparison is a little thin. I'm not sure it's a compelling case for Go, or even that one could be made against these other languages - the advantages of Go over other languages come mostly from its reduced possibilities, culture of simplicity and simple code, rather than from raw capability.
This looks really interesting - using Go to construct pipelines for processing biological data (DNA sequences etc), a fascinating field which is seeing huge transformations as technology is applied to it. Thanks for posting; are you using SciPipe at the moment?
We found Go to be an excellent choice for this type of tool, as we could make very good use of the built-in concurrency primitives, and also of course benefit from the general robustness, performance and the tooling around the language.
We have used SciPipe in our latest study at pharmb.io, on building machine learning models to predict hazard in drug molecules (see dx.doi.org/10.3389/fphar.2018.01256). I have since transitioned into industry for the moment, but plan on using SciPipe on some research projects of my own, and my colleagues at pharmbio are also interested in applying it to some upcoming projects. Hoping to see some more people play with / adopt it. :)
Btw, I should also mention that although SciPipe was created out of needs in bioinformatics and cheminformatics, I have found it to be a handy tool also for very general data-processing tasks, especially since it allows an easy way to mix and match various GNU *nix commandline utilities with components written in Go. One example of this is how we wrote a small pipeline to download, unzip and parse a somewhat large XML dataset, using commandline tools combined with Go code for the actual XML parsing (see the code example at the bottom of this post: bionics.it/posts/parsing-drugbank-xml-or-any-large-xml-file-in-streaming-mode-in-go ).
This is a really interesting article as it points out some of the differences between Java and Go - doing a port of a large codebase will really surface the areas where they differ, though of course it will also lead to writing Java in Go to some extent. It would be interesting to know the reasons for the port: did they want to replace the Java version, or is it to sit alongside it?
Shows how important testing is to be confident that a port like this is working, and how it can help guide you through a big project.
The query builder was interesting, as I've taken a different approach to this. I like the chaining approach to building queries, but it's perfectly possible to do this while returning an error only from the functions that matter (the ones which fetch the data). You just have to separate the query building (which returns a query) from the error-returning functions (which typically return results, error). Then the stateful error is not necessary.
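A minimal sketch of that separation, with hypothetical names (From, Where, Fetch) and a stubbed-out database: the builder methods chain and never fail, and only the terminal fetch returns an error:

```go
package main

import (
	"fmt"
	"strings"
)

// Query accumulates state; building it can never fail.
type Query struct {
	table string
	conds []string
}

func From(table string) *Query { return &Query{table: table} }

// Where is chainable: it returns the query, not an error.
func (q *Query) Where(cond string) *Query {
	q.conds = append(q.conds, cond)
	return q
}

// SQL renders the query string.
func (q *Query) SQL() string {
	s := "SELECT * FROM " + q.table
	if len(q.conds) > 0 {
		s += " WHERE " + strings.Join(q.conds, " AND ")
	}
	return s
}

// Fetch is the only place an error surfaces (stubbed here,
// since there is no real database behind it).
func (q *Query) Fetch() ([]string, error) {
	return nil, fmt.Errorf("not connected: %s", q.SQL())
}

func main() {
	q := From("users").Where("age > 21").Where("active")
	fmt.Println(q.SQL())
	if _, err := q.Fetch(); err != nil {
		fmt.Println(err)
	}
}
```

With this shape there's no error field threaded through the chain; the caller checks a single error at the point where data is actually requested.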
Interesting note on getters and setters - I've found this refreshing coming from other languages where they are the norm (Java, Ruby, Obj-C). It's more a cultural thing than a requirement, and I can count on one hand the number of times I've truly needed a getter or setter, in which case I can make the field private and write one. Getters and setters seem like unnecessary encapsulation to me given the amount of ceremony required.
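A minimal sketch of that approach, with hypothetical names: export the field until an accessor is genuinely needed, then unexport it and add one. By Go convention the getter is named Balance(), not GetBalance():

```go
package main

import "fmt"

// account hides its field only because Deposit needs to
// enforce an invariant; otherwise the field would be exported.
type account struct {
	balance int
}

// Balance is a getter in the Go style: no "Get" prefix.
func (a *account) Balance() int { return a.balance }

// Deposit acts as a validating setter.
func (a *account) Deposit(amount int) {
	if amount > 0 {
		a.balance += amount
	}
}

func main() {
	a := &account{}
	a.Deposit(10)
	fmt.Println(a.Balance())
}
```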
Function overloading - I can't say I miss this much; if functions become too verbose, it's usually a sign there is something wrong and you're trying to do too much with too many variables. Another approach you can use, if passing pointers rather than values, is to allow a nil pointer.
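A minimal sketch of the nil-pointer approach, with hypothetical names: one function instead of two overloads, with nil meaning "use the default":

```go
package main

import "fmt"

// greet takes a pointer so the argument is optional:
// a nil pointer selects the default behaviour.
func greet(name *string) string {
	if name == nil {
		return "Hello, world"
	}
	return "Hello, " + *name
}

func main() {
	fmt.Println(greet(nil)) // default
	n := "Gopher"
	fmt.Println(greet(&n)) // explicit value
}
```

Variadic parameters or an options struct are common alternatives when there is more than one optional argument.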