• Measuring performance is hard. Unfortunately it's easy to benchmark the wrong things, or to benchmark tiny parts of your program rather than the whole. That's the danger with individual benchmarks which don't measure the time taken to perform a real-world action. Some really useful tools and techniques I've found for this are:

    • Benchmarking, as in this article - most useful for comparing the impact of a change on an isolated bit of code.

    • Measure the absolute impact, not % increases, e.g. a 100% increase in response times from 0.0001ms to 0.0002ms is unlikely to be important.

    • Profiling - check the call graph to find out which functions are called most often and take longest, particularly in hot paths, then work on those.

    • Flame Graphs - these make profiling tools easier to interpret: https://golangnews.com/stories/675

    • Real world use-cases - make sure your benchmarking covers real-world use, try to make them measure a whole request cycle for example, use real data.

    • Instrumenting request cycles - add timings to important parts of your request cycle (for a web app).

    Here's a collection of stories on profiling

  • It might be interesting to create a regular stat report on the number and list of Go projects that use SemVer tagging.

    Or add this info in package listings. This feedback may encourage people to make the shift.

    It doesn't have to be a shift. One could adopt it and use it concurrently with the old one to keep backward compatibility.

  • Yes that's a good idea, would be nice to see which packages use semver and encourage everyone to do so. I think probably having a standard package management tool (which is in the works) will push everyone to do so though.

  • Dupe and was posted with no title, so downvoting.

  • No mention of Go in the advert, but I assume this post requires Go experience?

  • This user has like 5 hiring posts all created at the same time. Are there any ground rules for consolidating them or something? I think that might make more sense.

  • Yes I would like to encourage participation but that was a bit excessive.

  • This is quite interesting. There's a bunch of things I've always wanted to automate about my systems that I never got around to b/c netlink isn't super fun to work with. The blog post is well written, provides some useful examples and explains some terminology that might be confusing otherwise.

    I like the package but I find myself disagreeing a bit about the "well documented" part. Though some functions are well documented, for a lot of them the documentation basically states the function signature. I also find the lack of examples a bit frustrating. Though there are two for the main package they don't really do anything very interesting.

    For example, I can imagine a common thing people might want to do with something that speaks netlink is get or change routes (ip route ...) or create, configure and destroy interfaces (ip addr, ip link ...). I would very much love to see a set of examples that show how common things you'd do with the ip utility could be implemented with this instead.

  • I don't see much that's compelling yet - it's supposed to be targeted at normal users - *Our target audience is personal users, families, or groups of friends* but as soon as you talk about keys etc most of them would switch off, so that's going to be a hard sell.

    Feels like they need a killer app which then drives adoption, because this sort of thing requires people to adopt it for something useful to make it thrive.

  • There's an announcement blog post about this here:


    There's some interesting discussion over on HN with some of the authors chiming in.

  • This is interesting - they're using a modified version of Go to try out their ideas.

  • As much as I like the content of the slides I had real trouble with the slides' readability, ironically enough. The content is really tightly packed with a smörgåsbord of different fonts, sizes and weights applied, arrows pointing at things and text uncomfortably flowing around images. Though the content is what matters in this case the presentation detracts from the value it has to offer, for me.

  • Yes definitely from the Ray Gun school of design, it probably made more sense with the talk. There are some useful links there but it is pretty noisy.

  • Very interesting

  • Very nice

  • Another approach to this is just to set up a test database and do full integration tests. Then you can unit test most methods, and do a full integration test of all layers for things like handlers that might want database access, and you won't need any mocks at all.

  • I've edited the title as the pkg description is somewhat hyperbolic.

  • How are you finding the graceful shutdown, have you been testing it on servers you run? I imagine it's only useful where you have at least a few instances running behind a load balancer, as otherwise you can't easily switch from one binary on the same port to another. In principle though it sounds nice - makes it easier to have zero downtime deploys which even in-flight requests don't notice at all.

  • Edit:

    Oh I see what you mean - no, it doesn't support that, and it couldn't: for the level of graceful shutdown you're describing we would need a different application on top of our servers, i.e. a different process (if I understand correctly, you're talking about hot-reloading and graceful restart with new source code changes (a rebuild) without online clients being affected, disconnected or losing state. Some languages like Elixir can do that because they work in a totally functional way, func input -> output without side effects on the rest of the application; Go can't do that, it has structs and pointers). This graceful shutdown makes the server wait for any open connections to close, and stops new ones from connecting until the server is closed. You can also pass a context with a time limit (a deadline on how long you're willing to wait for the server to close) to `Shutdown`. Before Go 1.8 we did it with a custom net.Listener and channels with a 'stop' and/or different conn states; it was awful, now it's better.

    I find it very handy, and it removes boilerplate code (although I think it has a performance downside for 'hello world' apps; the authors said they will probably 'fix' it in Go 1.9).

  • Woot!

  • Cross posted from the HN discussion, here are some changes in this release:

    The sort pkg now has a convenience function for sorting slices, which will be a nice shortcut - instead of having to define a special slice type just to sort on a given criterion, you can now just pass a sorting function.

    HTTP/2 Push is now in the server, which is fun, but like context might take a while for people to start using in earnest. Likewise graceful shutdown. Is anyone experimenting with this yet?

    Plugins are here, but on Linux only for now - this will be interesting long term for things like server software which wants to let others compile plugins for it and distribute them separately; presently those have to be compiled into the main binary.

    Performance: GC times are now down to 10-100 microseconds, and defer and cgo are also faster, so incremental improvements.

    GOPATH is optional now, but you still need a path where all Go code is kept; perhaps eventually this requirement will go away. GOPATH/pkg is just a cache, GOPATH/bin is just an install location, and GOPATH/src could really be anywhere, so I'm not sure a special go directory is required at all long term - if vendoring takes off, import paths could be project-local.

    Here is a slide deck with a rundown of all the changes from Dave Cheney.

    Finally, as someone using Go for work, thanks to the Go team and everyone who contributed to this release. I really appreciate the incremental but significant changes in every release, and the stability of the core language.

  • Iris has already embraced Go 1.8's new Server Push, with an example too :P Good work Go authors, it works so far!

  • There are also improvements to the SSA backend for other architectures, so some big speedups there.

  • Is this a good way to run apps on android, or is go mobile better?

  • There's an interesting thread on Twitter from Brian Hatfield with graphs for various versions of Go, using this to output stats:


    Sub-millisecond pause time on an 18GB heap with 1.8RC1

    Some impressive improvements to the GC between 1.6 and 1.8 for large heaps.

  • Would love to see Golang News added as a data source too!

  • This is a fairly balanced overview - he uses both, but appreciates Go because it's a bit simpler. Scala sounds interesting and far more concise than Go, but I'd be worried about whether my future self would understand what I'd written 6 months ago.

  • This looks interesting, I wonder if erb was the inspiration for this? After having a look I do have a few hesitations about this library, mostly around their focus on speed as the selling point, but also because it removes the contextual escaping.

    I find the charts a little misleading here - since their focus is on speed, it's important to benchmark correctly. Firstly, there is no context given as to what they measure apart from the link to other benchmarks (parsing + eval, or just eval time?). The times are in microseconds, which is an incredibly small amount of time when considering web requests - responses are typically in the tens of milliseconds, so saving 0.03ms per request is not very useful, and speed just isn't a problem with the stdlib templates, so why focus on it? To be useful, the benchmarks should include parsing, say, 40 templates using all the features of the parser (functions, methods etc); at present it looks like a very trivial benchmark which isn't measuring useful work. In a typical web request/response cycle the difference measured here simply wouldn't show up in the response times at all. So perhaps speed is not the right attribute to focus on?

    There are shortcomings to the html/template library though (loading templates is painful, and I don't find the new system of blocks from 1.6 useful - I'd rather have layout/partial), so it's good to see other libraries popping up which take a different approach, but I'd prefer to see them focus on that rather than yet more benchmarking. Hero looks very different from the stdlib templates, and would have to be used everywhere in an app, so it's a big commitment to switch to it, and after switching you couldn't easily switch back. It gets rid of the context-sensitive escaping that html/template does, which I think is really useful and obviates a whole set of vulnerabilities most people aren't even aware of, and it puts a bit more logic in the templates. Personally I'm very happy that the stdlib templates have little logic in them and don't encourage you to add it.

  • This is a little old - perhaps put the date in the title?

  • This is something I don't really agree with HN policy on. If things are still applicable I think it doesn't matter whether they are from this week or last year, and some stories are interesting precisely because they are timeless, not timely.

    I'm planning to put the original pub dates on stories if I can manage to grab them from metadata, but as this site doesn't include dupes I'd like to keep dates out of titles. I'd also like to surface old content to the home page when a few people vote for it around the same time; need to look at ways of doing that.

  • What advantages does this have over the html/template ones?

  • Seems to be sold on speed, but also there's a different syntax. It does unfortunately miss out contextual escaping, which I'd miss.

  • Thanks for posting, this looks really interesting, what are you using this for at the moment? I expected a hosted service like IFTTT, instead of having to run your own server, which takes a lot more effort/expense in the long run.

    Re your installation instructions, doesn't go get already build and install your app? Just this should be enough:

    go get -u github.com/muesli/beehive

    then you can run beehive --help. Install took quite a while on my machine, presumably because of dependencies - perhaps it would be worth using plugins eventually so that bees could be distributed that way?

  • I use it for home automation tasks and in our local hacker space. This is a self-hosted version of IFTTT by design, but thanks to Go it should be fairly easy to deploy & set up.

    The README actually mentions that all you need to do is "go get" - there are just some manual steps below for the intrigued.

    Due to the nature of having many little service plugins it indeed comes with a bag of dependencies. Once Go's plugin support is up to par, we'll certainly think about building bees as separate plugins. Since 1.8's plugins only work on Linux so far, I'm not in a rush, though :-)

  • I think what confused me is that the first line pulls down source and compiles it, then straight after you say to compile from source! Anyway thanks for the link, I'll be trying it out.

    A hosted version would be nice, even if only a demo, as it would let people try it out first.

  • This looks really interesting, thanks for posting. I'm off to run it against some of my apps now 🙂

  • I've edited the title - please don't use very long story titles, you can use the description field for that.

  • Bit of an awkward start, but it gets better.

  • There's a good discussion on this over on HN: https://news.ycombinator.com/item?id=13612941

  • This is an old article, but a nice overview of writing a python library in Go.

  • I know it is a bit old, but interesting!

  • Not a lot here just now.

    Maybe it'll grow to something larger like gobyexample?

  • Good article.

  • Wow, that was a deep dive, not what I expected at all, but really interesting, thanks.

  • Not sure about this one - seems to be a lot of logging and testing packages there, but that's just because everyone does those things, and many people reach for a library to help them test after coming from other languages which required it. Personally I find the built-in testing absolutely fine with table-driven tests and sub-tests; I don't feel the need for things like assertions (that's what if is for!).

    The logging is more interesting - perhaps a case there for structured logging in the standard library.

  • ... Some days I wonder if kenny is a bot. Super fast at posting things, but he always responds to comments so if he's a bot he is a good one :)

    Author here btw - happy to discuss the article. I suspect this is one that will have differing opinions, but I tried to be as unbiased as possible in the post and present all of the alternatives I could think of.

  • How do you use contexts? I see a lot of frameworks using custom contexts (like buffalo).

  • How do I personally? I tend to use the approach at the end of my article. I don't *always* start with the two functions, but as my app grows I have started to add them in.

    Large custom contexts are okay, but they can't satisfy the http.Handler interface, and when they get too big it's hard to tell which pieces are needed by each handler. In the case of Buffalo I think everything is always set, but they also don't add any app-specific data to that context (like a user), so you would need to customize their already large context interface to do that. I really need to get a follow-up piece written but I'm strapped for time this weekend :/

  • Not a bot, I promise, saw this on twitter! It looked interesting at first glance, but now that I've had a chance to read it, this is a great overview of different approaches to using request contexts and the pitfalls of each.

    You shouldn't store a logger there if it isn't created specifically to be scoped to this request, and likewise you shouldn't store a generic database connection in a context value.

    I'd go further than this and say you should never store a logger or a db connection on a request - why would you want a request scoped logger, simply to log the request id? I don't see anything wrong with attaching a trace to the context in middleware which is then used by inner functions - if you are using a middleware chain and use it only for functions which are applied universally to all handlers. I really like the idea of typed Getters and Setters for context though, this makes it a lot cleaner, and also lets you put those in the same package which mutates the context (say your logging package for example).

  • The most common use case I have seen is basically ensuring that specific data is attached to *all* logs, while not delegating that work to your other code. Here is a very crude example of a logger that does this some: https://play.golang.org/p/nMHpXAAVc8

    If that logger were used throughout your application, your handlers wouldn't need to concern themselves with things like the requestID or the User ID when logging. That logic is isolated to the Logger type.

    The DB is similar - you don't ever attach a DB object to a context, but you could make an argument for creating a transaction and attaching that, in which case you can rest assured that *everything* you do in that request will be contained within a single transaction. I don't really do this often, but I could see it being a reasonable decision to make.

    The key here is that in both of these cases we are attaching something that was created specifically for this request, and we destroy it once the request's lifecycle ends. That is, in my humble opinion, within the spirit of what context values were intended to be used for.

  • I think a better approach than this is simply to use context.Value to store a request id. The logger or handlers can then extract this and use it as required to trace execution. The user, again, is orthogonal to logging/loggers, and I believe should be attached as a separate request-specific value. The logger should not know about the user except as a value passed in from handlers to log (if handlers wish).

    I'm not so keen on creating a separate struct and bundling all this information together, it ties together things which should be separate, and takes away control of what is logged from the handlers.

    Thanks again for the thought provoking article and discussion, please do keep them coming - I will try to let you post your own articles from now on!

  • Very glad this library existed when I ran a giveaway for my book - I completely zoned out on the fact that twitter limits the number of retweets you can track to the most recent 20 in the UI, and the most recent 100 in the API, so I had to scramble to write some code to keep track of this by polling the API every few hours and logging all users who had retweeted a post.

    Anaconda made it dead simple, and I had a program running within 15 minutes to save my butt. I wouldn't recommend kicking off a giveaway without handling the retweet tracking portion first, but I can def recommend anaconda.

  • There's a collection of tweets from the conference in this Moment on Twitter.

  • Slice sorting is in there at last! https://github.com/golang/go/issues/16721

    But what does go bug do? It's not really explained in the slides.

  • The slide says: "Easier way to create bugs including all relevant information." So I'd say this allows you to raise a bug with the Golang project for an issue you encounter.

  • Perhaps the slide is missing something? Below that it says Example, but there is just a line saying go bug. There's a bit more detail in this article - sounds like it might output some details about your system. The video is now up, so I've posted that here: https://golangnews.com/stories/1682-video-the-state-of-go-2017-fosdem

  • An interesting comment on the crypto section over on HN by someone who knows this area:


    So this is not a good guide on cryptography; the examples given in this book are somewhat out of date and are more academic examples to illustrate a point than something you should use as a guide. There's a good link on Go crypto here from someone at coreos:


    The book is also quite old (2012) and written by someone more familiar with Java, so it should not be taken as a style guide or example implementation but is a good overview of the areas you'd need to cover in writing network services with Go.

  • Is this out of date, or does most of it still hold? The preamble says 2012, which is quite a while ago.

  • Parts are out of date and it is certainly not a definitive reference, more an overview of the topic, but still interesting. The author is more familiar with Java than Go according to the intro so bear that in mind too.

  • Once upon a time, this was a 'solved problem in computer science', and the solution strength-reduced to O(n log n), somewhat down from NP-complete. Glibc has bits of it in Linux, Solaris depended on it, and it was first thoroughly discussed on Multics. I'll try to do the archeology and write it up in my Copious Spare Time.

  • heh, great to see these screencasts with a real game being built, rather than abstract examples. Keep at it francesc!

  • I love this! There are so many Go projects that barely have a description, let alone a usage example, in their READMEs. GoDoc's great, but browsing through it isn't the best way to get a feel for what a library does and how to get started.

  • Yes, you can do this with GitHub when you set up a repo (they let you choose a license and a readme), but this tool guides you through making a really useful readme file with all the sections you'd need. Will definitely be checking it out.

  • I don't agree with all of these, but it is an important topic, and I think they are better than the other recommendation I've seen on this - Standard Package Layout.

    There are a few areas that I'm not sure about here, because the author doesn't explain why she thinks they are a bad idea, and a lot of this is quite subjective. I'm not even sure we need to be so prescriptive about one way to write Go code (for example on singulars and plurals, or using src in your paths - why not, if you have a good reason for it?). A few examples of areas I disagree with:

    • Why is having a src folder a bad idea? I use this to isolate my code from other stuff like db artefacts, themes, public assets, uploaded files.
    • I quite like package plural and struct singular for resources like users, because most of the package level funcs are actually about users plural

    I do like the pattern of a package per top-level resource that your app deals with, as it makes it easy to find where things live, particularly when partnered with a REST-ish url scheme with resources under a url which reflects their name. I really don't mind the stutter this causes and don't think it's a big problem, as you rarely need users.User - most of the time you'll be doing users.New() or users.Find(), and never see the possible stutter. I much prefer this approach to lumping all these different models (which are often the most important part of the domain in CRUD apps) into one huge main package (as recommended by the Standard Package Layout link above) - to me that's very similar to just having a models package. The effects are the same: there is no separation between models, and your main package will quickly grow to be unmanageable for any sizeable application.

    Overall, a great set of guidelines though.

  • Hello!

    @Kenny, if you allow me to say it, you are 'right' on some points. I believe that we should have some 'common rules' about package naming, but it's a totally personal decision which depends on the needs of the package or the app you're developing.

    I do agree with a large number of the author's points, though not all of them; we can't say explicitly 'DON'T USE THAT'. They are just recommendations, nothing more, right?

    If I may, let me put down some words from my experience with very big packages with many dependencies. Believe me, I tried a lot of designs to solve my issues, and I believe the 'big' question is NOT 'where' to put the packages, because developers, sooner or later, can work out where to put them - it's a matter of app architecture, and we have many examples to learn from if we are not sure 'where' to put a package.

    In my opinion the real question when designing an API's language (package naming, structs, funcs and so on) with Go (and in general) is 'singular' vs 'plural'. I know it can sound like a 'slight detail', but we all come to that moment when we are thinking 'is it right to keep it plural, or is it right to rename it singular?' and the reverse.

    So I came up with one 'solution':

    - When all structs inside a package have a 'common interface' - they are all tasks or adaptors or drivers, for example - and they do the same thing or try to solve a specific thing with the same 'policies', and they do NOT depend on each other (a task1 struct cannot depend on a task2 struct), and the client/user is allowed to use more than one of them, then the package name 'should' be plural (:'tasks', 'adaptors', 'drivers'). The rest of the packages 'should' be named 'singular'.


    - A collection of the same object and the object itself 'should' be inside a 'singular' package name. For example, a user collection shouldn't be named UsersCollection; it 'should' be inside a 'user' package, so we can name it 'Collection' and refer to it as 'user.Collection', which will return a user collection (or a collection of users, a matter of perspective).

    - @Kenny, in your example you use 'users.Find/New'. I think (my personal opinion, nothing more, nothing less) that it would be better named 'user.Find -> find a User', 'user.New -> create a new User', 'user.SelectMany(procedure func(...)) -> collect and return a collection of User', but as we said, it depends. For example here: https://github.com/kennygrant/gohackernews/blob/master/src/comments/actions/create.go#L24 I see 'view.New' and 'comments.New'; 'view' comes from 'fragmenta' and 'comments' from the app itself. Isn't it confusing to have a different style for the same thing (:create and return a single object)?

    - Plural package naming when the client (user) is able to use more than one 'feature/struct' from the package for the same scope, example name: 'plugins'. This may look like it conflicts with the first of my recommendations, but it doesn't (think about it).

    - As the author says, and I agree, we should avoid overly generic package names, but sometimes we need to use them, or we don't care because they aren't used by the client, so things like utilities and commons 'should' be named singular (:'util', 'common').

    - Prefer 'user' to 'models/user' or 'services/user', so you can refer to 'user.Service', 'user.User'. Yes, 'user.User' seems a 'bad' name, but it's the name the client expects. But.. wait, so we can't have a package named 'services'? No, we still can (if the example name 'services' is confusing, you can think of another name; read below and you'll understand what I am 'trying' to explain), but there we 'should' put services that work as 'middleware' or 'adaptors' between/for all the other 'service-like' structs inside parent packages (:'user.Service', 'post.Service'), and of course they should not depend on those structs ('user.Service', 'post.Service').

    - Ok, what about a service which should work with both 'user' and 'post' - where 'should' we put that? The answer depends on your schema. If the user is the parent of/owns the 'post', then I recommend putting it in the 'user' package, but try to avoid even that: minimize the dependencies inside your app, so we can create a totally new package which 'bridges' the 'user' package's features/responsibilities with the 'post' package.

    Final notes:

    - Try to design your internal application the way you would design an open-source LIBRARY for other developers.

    - Think about the 'client/user' side before you start writing the 'server' code (:think about how you would prefer to call that package's struct/function).

    Again, these are my personal opinions. Sorry for my writing style, I'm not a native english speaker; if I didn't make something clear, or if you think that I 'miss' the point, please leave a comment below!



  • Thanks for the thoughtful comment. I don't have a really strong opinion on the plural/singular thing, but I use plural for resources because it matches the urls and database tables I use (so path /users/1, table users etc). I don't feel this is a hugely important point though, and frankly don't think there should be a prescription either way. The important thing is to be consistent in the way you use these things within a given app, and I think you have found a few more exceptions here to the rule that packages should be singular. I'm really not sure there has to be a choice between one or the other, as the article suggests.

    Re user Collections, I typically just work with slices rather than a custom collection type, so I'd be doing users.FindAll -> []*User, users.Find -> *User and users.New -> *User, but yes, what you suggest sounds reasonable - again I don't think there has to be a hard and fast rule here because different projects are structured differently.

    Re a service which works with both user and post, I find most of my handlers end up pulling in several resources, so I prefer to have handlers or other services in separate packages from resources, which solves this problem - I don't think resources should know all about other resources. If you start down that path, every resource has to know about every other, and all your code becomes intertwined and difficult to reason about.

    Re the view.New you mention, I've actually changed this recently to view.NewRenderer as I think it makes it clearer what it does. comments.New I'm reasonably happy with - I think there's a convention in Go now that mypackage.New will return an instance of the primary type for that package, but equally given a different app/project I see nothing wrong with comment.Collection if used consistently.

    PS Re use of english, it would make your writing a lot more legible to simply omit your use of quotes - they are rarely appropriate unless directly quoting someone else, though they are often misused in this way (e.g. I 'miss' the point). I wouldn't normally mention this but did so because of your last paragraph - your English is actually very good, so no need to apologise.

  • Thanks for the answer. I agree with you - each app has a different structure, and it depends on many things; I'm just sharing my way of thinking on these things.

    As for my english, thanks for your nice words, but the quotes I used on 'miss' or 'wrong' don't mean "someone said that phrase"; I use them more for things that could be written with other words, so 'wrong' doesn't mean it is wrong, but that I think it's something like wrong. I know it's not the best way, and you're right - I'll fix that in my future comments!

  • Do you ever run into issues with cyclical dependencies when doing per-resource packages? That was always my issue - occasionally a situation would creep up where two packages referenced one another and it stopped working. As a result, lumping them all together ended up suiting my needs better.

    I don't necessarily hate this approach and could work within an app designed this way with no qualms, but I don't think it should be a hard rule/style guideline because it could lead to confusion when it doesn't work.

  • Having tried a few different ways, I actually think it's a good thing to separate the resources and have come to value the discipline imposed by no cyclic dependencies between resources. The key is separating resources (managing state and logic about say users) and handlers (responding to http requests about users). So there are a few reasons you might want a resource to know about another one - so it can populate a list of joined resources (like tags on a page), so you can update other resources, so you can check something on another resource (like user permissions). While it is tempting to do this, I think it leads you down a path where all your resources are intimately tied together, which inhibits reuse, and more importantly makes it hard to know where a resource might have been mutated or what effect something like page.Update() will actually have.

    The position I've come to is that it's better to keep resources isolated, and then allow handlers to pull in resources as they need them and either show or modify them explicitly in the handler. So this means handlers need to be in a separate package (I use a subpackage of the resource package to keep it all together). This has a few advantages:

    • Resources are limited to only knowing about themselves, so you are sure there is no dependency or modification between resources.
    • Handlers can be grouped with the resources they act on, and typically will only act on those, though if necessary there will be exceptions (say, updating a page record if a page image changes).
    • Tying resources together happens only in handlers, so they choose how to connect and filter sub-resources.
    • Actions on resources, like mutating state, happen explicitly in handlers.

    The other objection to this is stutter, but if you typically use users.New() or users.Find() rather than users.User{} I don't think it's a huge issue; other people might object more strongly to that. I think the advantages outweigh the disadvantages for the particular apps I'm making (web apps), but of course for different projects and people there are different trade-offs. I'm not convinced there is a one-size-fits-all set of package guidelines, but it is interesting to hear how other people structure their projects, particularly larger ones. Some problems with organisation only really show up when the project is large enough.
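    The pattern described above can be sketched in a single file (a minimal illustration; the User, Page, and handler names are made up, not from a real app). Each resource only knows about itself, and the handler joins them explicitly:

```go
package main

// Sketch of the "isolated resources, explicit handlers" pattern:
// resources know nothing about each other, and a handler pulls in
// both and ties them together. All names here are illustrative.

import "fmt"

// --- would live in package users: knows nothing about pages ---
type User struct {
	ID   int
	Name string
}

func FindUser(id int) *User { return &User{ID: id, Name: "alice"} }

// --- would live in package pages: knows nothing about users ---
type Page struct {
	ID       int
	AuthorID int
	Title    string
}

func FindPagesByAuthor(authorID int) []Page {
	return []Page{{ID: 10, AuthorID: authorID, Title: "Hello"}}
}

// --- handler: joins the two resources explicitly ---
func showUserHandler(userID int) string {
	u := FindUser(userID)
	pages := FindPagesByAuthor(u.ID)
	return fmt.Sprintf("%s has %d page(s)", u.Name, len(pages))
}

func main() {
	fmt.Println(showUserHandler(1))
}
```

    Any mutation of a page on behalf of a user would also happen in the handler, so there is exactly one place to look for it.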

  • Interesting. I'll have to give this a try with the next app I make. Perhaps I'll be convinced to change my ways :)

  • Really fun project, thanks for posting. I've edited the title to make it a little clearer what this is about.

  • Thanks for the edit!

  • Hi Rijwan, did you attempt to add a comment here and had a problem posting?

  • Thanks for posting, I've changed the link to the original release notes as there is more detail there.

  • What is the performance like with just regexp?

  • It should be as fast as other muxers, but I'm not so sure if it will be with 10k URLs for example.

  • Have you tried benchmarking it? If you're checking a regexp against each url coming in, for every route in the list, that's going to be very slow if you add over 10 or so routes.

    You could cache matches and/or use string prefix comparison to eliminate a lot of routes before actually evaluating the regexp.
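    The prefix-comparison idea might look something like this sketch (the route patterns are made up): store a literal prefix next to each compiled pattern, and only run the regexp when the cheap strings.HasPrefix check passes.

```go
package main

// Sketch of prefix filtering before regexp evaluation in a mux:
// strings.HasPrefix is far cheaper than a regexp match, so most
// routes are skipped without touching the regexp engine.

import (
	"fmt"
	"regexp"
	"strings"
)

type route struct {
	prefix string         // literal prefix, e.g. "/users/"
	re     *regexp.Regexp // only evaluated when the prefix matches
}

var routes = []route{
	{"/users/", regexp.MustCompile(`^/users/(\d+)$`)},
	{"/pages/", regexp.MustCompile(`^/pages/(\d+)/edit$`)},
}

// match returns the first captured parameter for the matching route.
func match(path string) (string, bool) {
	for _, r := range routes {
		if !strings.HasPrefix(path, r.prefix) {
			continue // skipped without running the regexp
		}
		if m := r.re.FindStringSubmatch(path); m != nil {
			return m[1], true
		}
	}
	return "", false
}

func main() {
	id, ok := match("/users/42")
	fmt.Println(id, ok)
}
```

    With many routes you could go further and bucket routes by prefix in a map or trie, but even the linear scan above avoids most regexp evaluations.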

  • Does anyone have any insights into those benchmarks? I'm always a bit wary of them since they're not often representative of real-world use cases. I'm also curious if anyone has any insight into what might be slowing down gRPC so much? Considering it's coming from Google, I would expect it to be rather snappy.

  • I haven't used gRPC, but benchmarks are notoriously difficult to get right and often end up measuring the wrong thing or being misleading. It does seem a little odd that the rps actually goes up with more concurrent clients.

  • NYC

  • Old but worth a read.

  • You can go get it now, it seems. It would be more useful if the client was a client library, so that you could make your server a client, say, and use it for storage/caching.

    What is the author using this for, or is it more just for exploring programming in Go?

  • Hi, thanks for your idea. I want to make it a server-client database.

    So the client library will connect to the Simorgh server.

    And no, my friend, it's just for exploring databases, networking, and Go programming.

    I'm not good with databases, networking, authentication, etc.

    I need to learn them, so I started this project.

    Can you help me improve it?

  • Looks interesting, but not go-gettable as per the instructions. Perhaps the client should be at the top level so that it's easy to install?

  • Yes, it's not complete, but after my last commit it's a go-gettable repository; if you go get it, the Simorgh server will be installed automatically.

    I will make a Makefile for Simorgh installation.

  • There's a good link on the Go GC from the HN thread on this, which talks about reasons you might see poor GC performance: https://github.com/golang/go/issues/10958

  • A few things which are not quite right:

    > The biggest difference between the two languages is that compilation for the destination architecture has to be done on that same architecture.

    Go supports easy cross-compilation; I'm not sure why people choose to compile on the destination hosts rather than just deploying a binary.

    > By contrast, Go has no way of tracking the execution of individual goroutines.

    As he goes on to say, you can use channels for this, so it's a little disingenuous to claim that you can't track errors on goroutines.
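    For anyone unfamiliar with the idiom, a minimal sketch of tracking goroutine errors with a channel (the work function is a stand-in):

```go
package main

// Sketch of collecting errors from goroutines over a channel:
// each goroutine sends its result, and the caller drains the
// channel to see which (if any) failed.

import (
	"errors"
	"fmt"
)

// work is a stand-in for a real task; task 2 fails deliberately.
func work(i int) error {
	if i == 2 {
		return errors.New("task 2 failed")
	}
	return nil
}

func main() {
	errc := make(chan error, 3) // buffered so goroutines never block
	for i := 1; i <= 3; i++ {
		go func(i int) { errc <- work(i) }(i)
	}
	for i := 0; i < 3; i++ {
		if err := <-errc; err != nil {
			fmt.Println("error:", err)
		}
	}
}
```

    For fan-out work like this, golang.org/x/sync/errgroup wraps the same idea in a convenient API.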

    Really interesting comparison though. Thanks for the link.

  • Finally... I hope some gophers will eventually learn something from this man. Use context.Context when you really need it, and do NOT treat it like a general-purpose store.

  • Though I agree with what the author writes, I'm a bit disappointed that all that's done is tear down this concept without really providing good alternatives or examples of how it (c|sh)ould be done instead. Just passing in the logger as an argument is one possibility, which is hinted at, but there might be other options worth exploring too.

    The quote at the end, "Loggers should be injected into dependencies. Full stop.", is one possible way, but simply stating that with no examples would probably leave a lot of people wondering what it's all about. For completeness, the possible solution is in an earlier post of the author's, here: https://dave.cheney.net/2017/01/23/the-package-level-logger-anti-pattern
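    For readers left wondering, injecting the logger might look like this minimal sketch (the Server type and its method are illustrative, not from the linked post):

```go
package main

// Sketch of logger injection: the Server takes a *log.Logger as a
// dependency instead of reaching for a package-level logger, so
// tests and callers control where output goes.

import (
	"log"
	"os"
)

type Server struct {
	logger *log.Logger
}

// NewServer accepts a logger; a nil logger falls back to a default.
func NewServer(l *log.Logger) *Server {
	if l == nil {
		l = log.New(os.Stderr, "", log.LstdFlags)
	}
	return &Server{logger: l}
}

func (s *Server) Handle() {
	s.logger.Println("handling request")
}

func main() {
	s := NewServer(log.New(os.Stdout, "app: ", 0))
	s.Handle()
}
```

    In tests you can pass a logger writing to a bytes.Buffer and assert on its contents, which is exactly what the package-level logger makes awkward.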