Puma, a fast concurrent web server for Ruby (puma.io)
164 points by bretpiatt on Aug 27, 2013 | hide | past | favorite | 83 comments


Anyone used Puma on Heroku? Is it as simple as [1] or is there more configuration required [2] as is the case with Unicorn [3]?

[1] http://blog.steveklabnik.com/posts/2013-02-24-using-puma-on-...

[2] http://www.subelsky.com/2012/10/setting-up-puma-rails-on-her...

[3] https://devcenter.heroku.com/articles/rails-unicorn
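For what it's worth, a hedged sketch of what a Heroku-style config/puma.rb tends to look like (the env var names WEB_CONCURRENCY and MAX_THREADS are conventions, not requirements, and the defaults here are illustrative):

```ruby
# config/puma.rb -- illustrative sketch; tune numbers to your dyno size.
workers Integer(ENV["WEB_CONCURRENCY"] || 1)   # forked worker processes

threads_count = Integer(ENV["MAX_THREADS"] || 16)
threads threads_count, threads_count           # min, max threads per worker

port        Integer(ENV["PORT"] || 3000)       # Heroku injects $PORT
environment ENV["RACK_ENV"] || "development"
```

paired with a Procfile entry along the lines of `web: bundle exec puma -C config/puma.rb`.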


I'm actually using it in a small side-project I implemented a couple of weeks ago. (http://playas.io) It uses Rails 4 [0], JRuby 1.7.4, PostgreSQL with full-text search, one dyno, and 10 workers.

It is not the first time I've used JRuby and Puma; I must say I'm pretty happy with them.

The official documentation was enough for me; I just tweaked the DB pool to fit within the Rediscloud free plan's constraints. [1]

https://devcenter.heroku.com/articles/concurrency-and-databa...

[0]: You'll need to download JCE 6 Extensions to get Puma, JRuby 1.7.4 and Rails 4 running in development: http://www.oracle.com/technetwork/java/javase/downloads/jce-...

[1]: Rediscloud free version gives you 10 connections.


Can someone with more webserver knowledge please explain how puma works from a high level? I know very little about web servers so the following may not even make sense!

I have looked at the source, and it appears a thread pool will listen to incoming requests, and pass them to a reactor then move on to handle more requests. Another thread polls the sockets and writes the data to the response stream when ready. [Note: this all may be completely wrong!]

If the way the server works as above is correct, does it mean it's possible to achieve event-loop-based levels of concurrent connections along with good old CPU concurrency as well?
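To make the thread-pool half of that description concrete, here's a toy sketch (emphatically not Puma's actual code): workers pull accepted "requests" off a shared queue, while the reactor half, which watches idle sockets with IO.select and re-queues them when readable, is omitted for brevity.

```ruby
# Toy fixed-size thread pool in the spirit of the description above.
class TinyPool
  def initialize(size)
    @jobs = Queue.new                 # thread-safe FIFO of pending work
    @workers = Array.new(size) do
      Thread.new do
        # Each worker blocks on pop; a nil job is the shutdown signal.
        while (job = @jobs.pop)
          job.call
        end
      end
    end
  end

  def schedule(&block)
    @jobs << block
  end

  def shutdown
    @workers.size.times { @jobs << nil }  # one stop signal per worker
    @workers.each(&:join)
  end
end

results = Queue.new
pool = TinyPool.new(4)
10.times { |i| pool.schedule { results << i * 2 } }
pool.shutdown
```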


I'm one of the authors behind Phusion Passenger (https://www.phusionpassenger.com/), a polyglot web server for Ruby, Python and Node.js.

What you said is basically correct. And yes, it is possible to combine event loops with threads in the way you described to get CPU concurrency as well. Whether it is actually helpful depends on the use case. The Phusion Passenger core is evented (similar to Nginx and Node.js) since version 4.0. We considered a multithreaded-evented architecture as well, but it turned out to be less beneficial than we hoped because the applications themselves use plenty of CPU already, and because we rely on shared in-memory state for proper load balancing between processes, thus having a source of contention. In the end, the single-threaded evented architecture in Phusion Passenger turned out to be more than enough.


How does Passenger deal with blocking I/O from applications if it is single threaded? Does the entire event loop block while doing I/O?


There are two components at work. One is the Phusion Passenger core, which is evented and uses only non-blocking I/O. At no point does the event loop block.

The other component is the Ruby application process. I believe you are talking about this component.

Concurrency on the Ruby application process side is handled using two strategies:

1. Multiprocessing. 1 process per concurrent connection, with a buffering layer. This is architecturally the same as how Unicorn works when behind an Nginx reverse proxy.

2. Multithreading. 1 thread per concurrent connection, possibly with multiple processes. This is architecturally similar to Puma, though not entirely the same. It should be noted that multithreading support is available in the Enterprise version only.
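A toy illustration of the two strategies (this is not Passenger's code; the sleeps just stand in for request handling):

```ruby
requests = [0.01, 0.01, 0.01]  # pretend each is a concurrent connection

# 1. Multiprocessing: one OS process per request (Unicorn-style).
pids = requests.map do |delay|
  fork { sleep delay }         # child process "handles" the request
end
pids.each { |pid| Process.wait(pid) }

# 2. Multithreading: one thread per request within a process (Puma-style).
threads = requests.map do |delay|
  Thread.new { sleep delay }
end
threads.each(&:join)
```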


Passenger spawns copies of your application and (I think this is the default now, it wasn't before) puts your requests in a queue, so that as the app instances finish processing requests they snatch a new request off the queue. If they are all busy then the requests back up on the queue until one of them becomes available.


Thanks for the update. I'm likely going to try and deconstruct and rebuild some of it to really get my head around evented architectures -- will try and blog it. Years of threads had me in bliss until this non-blocking renaissance came along :)


I'm looking at the 'Puma' section in Jesse Storimer's book "Working With TCP Sockets" and that seems to be the general idea behind it... uses a thread pool for concurrency, monitors persistent connections with an evented reactor.

I'm not affiliated with the author, but this was such a nice book to get an intro to webservers and their architectures/tradeoffs from.

Really fueled my love of sockets :)


Thanks for reminding me about Jesse's book(s), I knew there was something I was meant to do!


I like Puma, but I thought it was interesting that they left Passenger out of their performance graphs. It seems odd to omit the most popular Ruby application server from the results.


All of the servers listed here are standalone servers, while Passenger is a module for existing servers. I don't really see why you couldn't include Passenger here, but it seems like they're only comparing like to like, which I guess is fair.


You can use Passenger as a standalone server too, it's called Passenger Standalone. You can install it with 'gem install passenger', and run it by running 'passenger start'.


(In which case it installs and fires up a copy of nginx behind the scenes).


Oh, so you can. I actually went and checked the Passenger site to see if my information was current since I mainly use Puma these days, but I didn't scroll down far enough to see the standalone version.

But installing it as a module still seems to be the recommended method, so I can still see why it isn't included in a list of servers that are meant to be proxied to.


Is passenger the most popular? Is there any reliable statistics available somewhere?



Out of curiosity, why today?

I mean, this is not new at all, there is no major version that was released AFAIK...

This is great to share that though, just curious why it's on HN today? :)

Edit: typo fix.


Agreed. I also recall seeing it on HN before, and not that long ago.


You may have seen a recent comment; it looks like the main page was posted a year ago. I relied on the URL duplicate checker when submitting it -- the site was new to me today, and I wanted to hear the HN community's thoughts on it.

I ran the search and found the previous submission:

   A modern, concurrent web server for ruby (puma.io)
   16 points by kachhalimbu 1 year ago | 2 comments | cached


Fair enough. Just thought I might have missed something new and big! :)


I'm not a Rubyist, but if I'm reading the section of the README that describes the design [1] correctly, the closest Python equivalent (architecturally speaking) is Waitress [2].

[1] https://github.com/puma/puma#built-for-speed--concurrency

[2] http://docs.pylonsproject.org/projects/waitress/en/latest/de...


Looks good. It would be nice to see it on the Web Framework Benchmarks (https://github.com/TechEmpower/FrameworkBenchmarks).

Edit: someone created an issue asking for contributors: https://github.com/TechEmpower/FrameworkBenchmarks/issues/45...


Any reason why thin is not in the comparison? Is it not very fast compared to the others in the chart?


We (ShopKeep) are using Puma for the lightweight web services around our platform. Specifically, we send all of our data to two load-balanced nodes running Puma for our analytics aggregation. It's pretty amazing how a single instance running Puma replaces an entire cluster of Unicorn workers.

P.S. We are also hiring Ruby and iOS folks. Contact information is in my profile.


I've been using Puma in dev/QA and it's impressive. I'm planning on trying it out in production but have a lot invested in Unicorn at the moment. For example, I have monit confs that kill workers once they reach a certain memory threshold and I'll have to figure out how to do that with Puma.
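In case it helps, here is a rough, hypothetical sketch of that monit-style check in plain Ruby -- not a Puma feature, just one way to approximate it (rss_kb, reap_bloated, and the threshold are all made up for illustration; Puma's cluster mode respawns workers that are TERMed):

```ruby
MAX_RSS_KB = 300_000  # hypothetical threshold, roughly 300 MB

# Resident set size of a process in kilobytes, read via ps(1).
def rss_kb(pid)
  `ps -o rss= -p #{pid}`.to_i
end

# Send SIGTERM to any worker over the limit; returns the killed pids.
def reap_bloated(worker_pids, limit = MAX_RSS_KB)
  worker_pids.select { |pid| rss_kb(pid) > limit }
             .each   { |pid| Process.kill(:TERM, pid) }
end
```

run periodically from a monitor thread or cron-style job against the worker pids.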


I'm one of the authors behind Phusion Passenger (https://www.phusionpassenger.com/), a polyglot web server for Ruby, Python and Node.js.

We've recently written a comprehensive comparison between Puma and Phusion Passenger, which you can read here: https://github.com/phusion/passenger/wiki/Puma-vs-Phusion-Pa... The comparison covers things like concurrency models, I/O models, security, clustering, multi-app support, documentation, memory usage, performance and more. Although the comparison is between Puma and Phusion Passenger, a lot of the points are relevant to a Unicorn-Puma comparison as well.


Please stop making your comparisons sound as if passenger-free was a fully functional application server such as unicorn or puma. It is not.

Passenger-free is a deliberately crippled demo-version. If someone wants to use passenger in production they will have to pay $50 USD per year, per server for it. And you know that very well.

The 'free' passenger drops requests and serves 502 errors to the users during every single deploy.

That's equivalent to popping up shareware nag-screens into my face at random intervals. Except phusion passenger doesn't nag me, it nags my customers.

You call the avoidance of these errors a 'feature' ('rolling restarts') that must be paid for.

For some reason your helpful comparisons never mention that both unicorn[1] and puma[2] ship with this basic functionality out of the box. One could almost think it is part of your sales strategy to have people notice this little 'limitation' only after they already deployed your product to production...

[1] http://www.justinappears.com/blog/2-no-downtime-deploys-with...

[2] http://blog.nicolai86.eu/posts/2013-02-06/phased-restarts-us...


I think that's a little harsh.

I've been using Passenger since it was called Apache mod_rails http://www.concept47.com/austin_web_developer_blog/ruby-on-r... You have to remember that before the Phusion guys showed up, there was no easy way to run a Rails server without proxying requests to Mongrel, Thin, or something like that. Instances could bloat in memory or die, and your app could go down for whatever reason without you knowing it. Did I mention that Rails apps were also pretty slow back then?

They made deploying and maintaining Rails an order of magnitude easier with Passenger, first with the mindblowingly simple installation process and later by introducing ideas like killing and spinning up new app instances after xxx requests, or spinning instances up or down depending on activity. All this with simple configuration switches out of the box. Even the advances in Passenger 4 are pretty amazing (threaded mode and out-of-band garbage collection come to mind).

What I'm saying is, they've done a lot for the Rails ecosystem, and while I'm not a fan of their new pricing strategy with Passenger or their marketing tack with Passenger posts on this thread, I think it's worthwhile to be a bit more gentle in chiding them because of their peerless contributions in the past ... plus their link is actually a very well-written comparison.


Yes, phusion was a viable contender for a brief timeframe. That was 5 years ago, in 2008, until unicorn came around in 2009.

I don't see how their standing in 2008 excuses them to use shady trojan horse sales tactics to push their rather mediocre product in 2013.


Unicorn improved upon Mongrel in many ways, and it is interesting technology. We know, because we are Unicorn contributors ourselves (take a look at the AUTHORS file). However, Phusion Passenger did not fall behind. Since the introduction of Unicorn, we introduced Phusion Passenger Standalone, which allows you to use Phusion Passenger in a Unicorn-like manner (e.g. you can attach it to an Nginx reverse proxy, and it uses a Unicorn-like architecture). Since then, we've even improved upon Unicorn. For example, the out-of-band garbage collection that Unicorn has? We've made it better with out-of-band work. Administration tools for querying the status of workers? passenger-status and passenger-memory-stats do it better than Unicorn. Python support? Not in Unicorn. And so on.

Phusion Passenger is used to full satisfaction by many large parties, such as Motorola, UPS, Hitachi, etc. So if you can point out technical reasons why you think our product is "mediocre", please feel free to tell us, and we'll fix them.


Most apps where you would be concerned with downtime have load balancers in front of them. This isn't the burning fire you make it out to be, because you can pull one app server out of the pool, restart it, and put it back in once it has restarted. We do this using keepalived and renaming a chk.txt file in the root.


Moe, I don't understand where your hostility comes from. It is fine if you don't like Phusion Passenger, but please don't say things that are untrue.

The open source version of Phusion Passenger is not a crippled version. It is used in production by many large users, including New York Times, AirBnB, etc. Your statement that the open source version is "crippled" goes right against the fact that we've been actively developing the open source version ever since Enterprise came out. In fact, open source development has become more active after Enterprise than before, thanks to Enterprise funds. If you check our commits at Github you will see that the rate of development has accelerated. If you take a look at all the new features in the open source version of Phusion Passenger 4 you will see that it is improving at a tremendous rate.

It is not true that the open source Phusion Passenger drops requests and serves 502 errors to users on every single deploy. Upon touching restart.txt, the open source version blocks requests and resumes them after restarting one process. The blocking only happens for the first process, not for the ones after. At no point will requests be dropped or will errors be served. We even have integration tests in place to check for this. If you see request drops or errors in the open source version, please contact us and we'll have a better look at the problem. If it's a bug, we'll fix it, it's that simple.

"popping up shareware nag-screens at random intervals" is completely false. There are absolutely no nag-screens in Phusion Passenger. The open source version of Phusion Passenger is open source, so if you don't believe me, then dig into the source code and tell me where exactly the nag screens are.

The lack of rolling restarts in the open source version is not a secret. Our documentation clearly mentions the differences between the open source and the Enterprise version. With a few clicks, you can find out what the open source version does and does not have.

And actually, the open source version does not "lack" rolling restarts, it just does not automate it for you. You can implement rolling restarts using Phusion Passenger Standalone (open source version!) in the same way you do with Thin and Mongrel, i.e. by swapping sockets. It works fine and some of our users do exactly this. It is just more work than the automated, polished, error-resistant version that Enterprise offers.

And finally, you call Unicorn and Puma "fully functional" while implying that Phusion Passenger is not. This is not true. There are features in the open source version of Phusion Passenger that Puma and Unicorn do not have. The reverse is also true. It is even true that Unicorn has some features that Puma lacks, and vice versa.


I don't understand where your hostility comes from.

From the dishonest spin that you put on everything you say.

I wrote:

   That's equivalent to popping up
   shareware nag-screens into my face.
You reply with:

   "popping up shareware nag-screens at random intervals"
   is completely false.
   There are absolutely no nag-screens
   in Phusion Passenger.
These little 'misunderstandings' and strawmen add up.

Consequently I won't even comment on your further spin-doctoring here.

I'll just encourage anyone interested in the matter to also read what EngineYard and UserVoice say about passenger;

https://blog.engineyard.com/2012/passenger-vs-unicorn

https://developer.uservoice.com/blog/2012/08/08/the-dark-pas...


I'm pretty sure you literally said "popping up shareware nag-screens at random intervals" instead of "that's equivalent to", but whatever, I'll take your word for it, in good faith, that you did not edit your post and that I read your post wrong.

Like I said before: errors during redeploy are not supposed to happen even in the open source version, and if they do happen then we are very interested in fixing them. Please tell us more. There is no need to spin this as being "dishonest". If you don't believe us, tell us about your problem, let us fix it, and verify for yourself that it is fixed. Only the facts matter.

As for the Engine Yard and Uservoice.com post:

1. All problems that the Engine Yard post talked about were fixed two days after they published that article (http://blog.phusion.nl/2012/09/21/the-right-way-to-deal-with...). If you are still experiencing those problems, tell us and we'll fix them. By the way, did you notice that Engine Yard said something entirely different about Phusion Passenger and Unicorn later on? https://www.engineyard.com/articles/rails-server

2. Almost all problems that the Uservoice.com post talked about have been fixed in Phusion Passenger 4.0, open source version.

3. There are also plenty of posts that describe migrating from Unicorn to Phusion Passenger, e.g. https://speakerdeck.com/arnvald/dot-dot-dot-but-we-had-to-ki....

So what does that mean? Experiences differ, for all sorts of reasons, some legit and some not (e.g. inefficient configuration). Neither kinds of posts automatically mean that Unicorn/Phusion Passenger is bad: there will always be people having problems with a particular technology. That's why we encourage people who have problems to contact our support forum.


I'm pretty sure you literally said "popping up shareware nag-screens at random intervals" instead of "that's equivalen to", but whatever, I'll take your word for it, in good faith, that you did not edit your post and that I read your post wrong.

Wait, what...

You edited this line into your reply hours after you initially posted it. Hoping I would miss it or something?

And in this very edit you have the nerve to suggest I edit my comment after the fact to spin shit around?

Look at the position of the 'popping up' phrase in my post. How would saying it in any way other than it's written make any sense, syntactically?

And why would I claim passenger pops up literal nag screens, when everyone knows that's complete nonsense?

I do appreciate you showing your true colors in this thread. I'll make sure to link to it every time I run into one of your 'PR excursions' in the future.


In other news: I've been asking other online communities as well in the hope of finding other users who have experienced the kind of problem you reported (http://www.reddit.com/r/ruby/comments/1lceav/ask_rreddit_doe...). Unfortunately I haven't received a single answer so far. Any feedback from you about the nature of the problem would be greatly appreciated.


hey moe, I sent you an e-mail on the e-mail address listed in your HN profile. Would you care to reply? I'd really like to talk to you about this..


FYI, I've initiated an investigation about your problem for you: https://groups.google.com/d/msg/phusion-passenger/oVGAsITg8s...


You've done a great job posting Passenger links all over this thread.

I find it in poor taste to come into a semi-related post and repeatedly spam links to your product over and over...


My first thought upon reading the article was: "I wonder how this compares to Passenger?". I find it very helpful to get an authoritative link from the Passenger guys themselves in this thread.

Since Passenger is by now pretty much the default way to run a Ruby stack I don't think any one here doesn't know who they are and so they really do not need to spam. They are just contributing to the discussion and I appreciate that.


One linking post is fine. Three comments full of Passenger links is pushing good taste.


Is Passenger really the default for most people? I tried it once and honestly prefer just dropping thin or unicorn behind nginx. And with this discussion, I'll probably be trying out puma in the near future.


And you definitely should try Unicorn and Puma. We never said that Unicorn and Puma are bad technologies. In fact, we think they're good technologies. That said, we also think that there are many reasons why one would prefer Phusion Passenger (either open source or Enterprise) over Unicorn or Puma.

One thing that some people don't know about Phusion Passenger is that it is versatile. Phusion Passenger Standalone is a mode, specifically designed to be dropped behind Nginx, just like Thin/Unicorn/Puma. That is a good alternative if you don't like the Apache/Nginx integrated modes.


mwienert, my apologies if it came across as spamming. The reason why I put an introduction in those posts is because last time I posted comments about Phusion Passenger, people told me that I should introduce myself as being an author of Phusion Passenger. And as you've noticed in this thread (and not just on HN, see e.g. http://stackoverflow.com/questions/18398626/why-did-gitlab-6...), I actually say some very favorable things about Puma, and I thought that people would appreciate it if they knew that it came from someone who writes web servers instead of just some random commenter.

But I see your point, and I'll keep it in mind. If I could edit my other posts I would remove some links, but unfortunately HN has already frozen them.


We just moved our Ruby apps (Puppet, Redmine, GitLab) from Apache + Passenger to Puma + nginx because Passenger is awful - it's slow, bloated and it eats memory.


Ok, sorry. I re-read that and it sounded quite loaded; I was actually a bit miffed that someone from Passenger posted on this Puma thread, but I suppose they have the right to.

We didn't have a great experience with passenger, it does seem bloated and we decided to move away from apache to nginx for similar reasons.

TL;DR: we've been very happy with our move from Passenger to Puma.


I'm interested in what sort of problems you had with Phusion Passenger. What makes you say it is slow and bloated? We've even had users who switched away from Puma and Unicorn for the exact same reasons you mentioned.

But in our experience the app server is rarely the cause of such problems. Most people who complained about bloat in Unicorn/Puma/Passenger eventually discovered that it was a problem at the application level after all.

We'd be happy to help you with your issues if you can tell us more.


I can understand Ruby and Python, but why nodejs?


Why not Node.js?


Because nodejs already has a high performance http server and a cluster module.


The point of serving Node through Phusion Passenger is not performance, but supervision, stability, robustness, security, multitenancy, etc. Although Phusion Passenger can increase performance by load balancing requests between multiple nodes. Please see https://github.com/phusion/passenger/wiki/Node.js for reasons to use Phusion Passenger with Node.

The cluster module is great, but it requires you to manage your processes yourself and to write your own load balancer. Phusion Passenger provides all this functionality for you for free, through a C++ core.


Just keep in mind your code (and gem's code) needs to be thread safe.
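For example, a minimal sketch of the kind of shared-state race thread safety is about: without the Mutex, the read-modify-write on counter could interleave between threads (MRI's GIL makes simple increments mostly safe in practice, but that is not a guarantee to rely on, and it does not hold on JRuby).

```ruby
counter = 0
lock = Mutex.new

threads = 8.times.map do
  Thread.new do
    # The lock makes the read-modify-write on counter atomic.
    1_000.times { lock.synchronize { counter += 1 } }
  end
end
threads.each(&:join)
# With the lock, counter is reliably 8 * 1_000.
```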


Less memory usage than Unicorn here, though we recently moved to Passenger Enterprise, which is really similar: https://www.phusionpassenger.com/enterprise

good writeup on the subject here: https://github.com/phusion/passenger/wiki/Puma-vs-Phusion-Pa...


I've been enjoying using Puma in clustered mode for some production sites, but falling back to Thin for Server Sent Events (new EventSource()) - does anyone know if this is ever likely to come to Puma, or is there a fundamental reason that the Puma process model can't support SSE?


Tenderlove has a blog post about SSEs using Puma. http://tenderlovemaking.com/2012/07/30/is-it-live.html


Are you doing so with MRI/Ruby 2.0? Been considering Puma over Unicorn since it works so well for one of our JRuby apps but was wondering about how it would perform on MRI since it seems like it was never really designed with MRI in mind.


When I tried to benchmark a standardized app under MRI, I found that Puma works great _if_ your app is io-bound as opposed to cpu-bound.

This makes sense because multi-threading is still a win under MRI even with the GIL, only so long as your app is io-bound (so threads can be switched out when waiting on io, for instance waiting on a db query).

Most web apps I've worked with tend to be io-bound.

https://github.com/jrochkind/fake_work_app/blob/master/READM...
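A quick self-contained illustration of that GIL point, with sleeps standing in for io waits: four 0.1s "requests" run on threads finish in roughly 0.1s of wall time rather than 0.4s, because MRI releases the GIL while a thread is blocked.

```ruby
require "benchmark"

# Four "requests" that each spend 0.1s waiting on (simulated) io.
elapsed = Benchmark.realtime do
  4.times.map { Thread.new { sleep 0.1 } }.each(&:join)
end
# elapsed is close to 0.1s, not 0.4s: the waits overlap.
```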


Sounds more like a problem with your implementation. I'm pretty sure Puma can handle SSE just fine. Your implementation probably depends on EventMachine which is not baked in like it is with Thin.


I'm doing:

    content_type "text/event-stream"
    stream(:keep_open) { |out| settings.connections << out }

From Sinatra, which I assume is then deferring to EventMachine, courtesy of Thin. I'll see if there's a way of forcing EM directly into the stream setup... thanks for the pointer.


I'm one of the authors behind Phusion Passenger (https://www.phusionpassenger.com/), a polyglot web server for Ruby, Python and Node.js.

We recently wrote a demo demonstrating SSE on Phusion Passenger (https://github.com/phusion/passenger-ruby-server-side-events...). While writing this demo, we found out that sinatra-contrib's streaming code relies on EventMachine. That means sinatra-contrib's streaming only works on EventMachine-based servers, like Thin and Goliath.

Based on my knowledge about Puma, I'm pretty sure SSE works fine on Puma as well. The Phusion Passenger SSE demo uses the Rack socket hijacking API (which we've blogged about: http://blog.phusion.nl/2013/01/23/the-new-rack-socket-hijack...) to implement SSE. This approach should work on Puma as well because it supports the Rack socket hijacking API too.
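For the curious, a minimal sketch of what an SSE response via Rack partial hijacking can look like (illustrative only: a real app would keep the socket open and stream events over time instead of closing after one write):

```ruby
# A bare Rack app using the partial hijack API: the server calls the
# proc stored under the "rack.hijack" response header with the raw
# client socket once the headers have been written.
app = lambda do |_env|
  streamer = proc do |io|
    io.write("data: hello\n\n")  # one SSE event
    io.close
  end
  [200,
   { "Content-Type" => "text/event-stream",
     "rack.hijack"  => streamer },
   []]  # body is ignored when partial hijack is used
end
```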


I created a sample app using Puma and Celluloid to work through similar troubles I was having with Sinatra's streams and Puma. If it helps, it's here: https://github.com/sdeming/cellfun. This does not use EventMachine, depending instead on Celluloid. I work primarily in JRuby and EventMachine has always tripped me up.


Well, I have just come across this benchmark, which shows Unicorn as the best-performing one. Apparently Puma falls short even with multiple workers.

https://gist.github.com/pbyrne/5218411


Interesting - that goes directly against what they claim on their own page. It's tested on a four-core machine. I'm suspecting the 16 default threads were the problem, since Unicorn ran with only 4 processes, not 16.

http://puma.io/


1 puma worker vs 4 unicorn workers isn't a fair test on a multi core machine under MRI.


Can anyone who's used Puma on Heroku comment on how it compares to Unicorn?


I switched from Unicorn to Puma for my relatively low-load Heroku-based Rails app and have seen nothing but improvements across the board. Less memory usage, lower response times, overall better performance. It's working very well.

Note: I'm an amateur and I know I'm not nearly optimized across many areas -- the app could be better, I'm not using JRuby (where multi-threaded shines, or so I read), so this is from a pretty novice perspective. But from here, Puma has been great.


What did you use for profiling? Their newrelic add-on?


Much better memory usage and (for us at least) better concurrency. We could only run 4 unicorn workers on a single dyno. But with Puma we run 16 threads with ease.


Have you tried running 3 or even 4 Puma workers (each with 8-16 threads) on a dyno? That way you can get more concurrency on CPU-bound requests in addition to IO-bound concurrency (assuming MRI).
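For reference, that combination would look something like this in config/puma.rb (the numbers are illustrative; `workers` and `threads` are the real Puma config DSL):

```ruby
# config/puma.rb -- cluster mode: forked workers give CPU parallelism
# under MRI, while each worker's thread pool handles io-bound requests.
workers 3          # forked worker processes
threads 8, 16      # min, max threads per worker
preload_app!       # load the app before forking to share memory via CoW
```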


I tried that in my attempt at benchmarking a standardized app on heroku, and, yep, it works great:

https://github.com/jrochkind/fake_work_app/blob/master/READM...


Thanks, this is all very relevant at the moment. Trying to optimize page load times for a Heroku Rails app. Web server is one of the bottlenecks.


Puma works great on Heroku. Rubinius, on the other hand, doesn't. It was incredibly slow, the app took more than a minute to boot up which caused Heroku to think it crashed. However, you can still use Puma with MRI 1.9.3 or something else and it'll still work great.


I attempted to do some generic benchmarking of just this question and found Puma way outperformed Unicorn under heavy load if your request processing is largely io-bound.

It didn't get much attention when I posted it to HN a couple of months ago:

https://github.com/jrochkind/fake_work_app/blob/master/READM...


Your application is not fair at all. It looks like your sleep mimics I/O boundedness, but in reality it does nothing but set an arbitrary request length. In a real world application, I/O requests contend for resources with each other. Sleep calls don't contend with each other at all.

This means that application servers can just keep stacking concurrent requests with perfect performance. If there was actually I/O being done that took 250ms, then you would find that it was the real bottleneck, and it would be the dominant factor in your benchmark, with the differences between app servers all but disappearing.


That benchmark is fair if you consider the context: I/O-bound workloads. On those kinds of workloads, your app spends a large amount of time waiting for I/O. While waiting, it does not use any CPU. This is very well simulated using sleep calls.


It is not clear to me that my application fails at realistically simulating I/O (I agree with FooBarWidget below) (and I'm not sure where the word 'fair' comes from or what it means here, that's your word not mine), but I tried to provide as much code and information about exactly what I tested so you could decide for yourself, so fair enough.

(I agree that with real I/O, there wouldn't be _exactly_ (eg) 50ms of waiting in every request; it would differ from request to request, and might slow down under heavier load. That's true, but I don't think it matters for what I was trying to test -- I was not trying to test how well a given rdbms can handle load, for instance; I was trying to standardize that, to test the app's performance. My simulation assumes an (eg) 50ms _average_ iowait time, by just assigning every request 50ms of wait. Or instead of 50ms, whatever average iowait you wanted to posit and test, and I tested a few. It is indeed a simulation, but I think it captures the significant things for what we wanted to measure).

However! I would very much welcome seeing the results from a test done by you (or anyone else), with an application that simulates (or actually does) I/O in a way that you believe is more realistic and leads to more instructive results. It certainly would be interesting to see if it led to different results than my tests, or not. I suspect it would not (because I think my app is realistically simulating the parts that matter, naturally! but I've been wrong before!)

Feel free to fork my code to do so, if my code is a useful starting point for you. Please do share your results!

Of course, you could also run tests with some actual real world app as well, instead of an app simulating a real world load in a standardized way like I used, and see what those results look like. I'd certainly welcome seeing that too! There's certainly plenty of different sorts of tests that could be done. I'd love it if people started doing some of them and sharing their results. I tried to contribute my best effort to that.


Hmm, it seems you're right after all. It seemed odd to me that the difference for Puma would be so large at high concurrency, but when I look at it now I see how the test does correctly mimic I/O for the purpose of the benchmark.

My apologies if I came across too strong.


I'm running Puma on my RaspberryPi for a pet Rails app and it's incredibly fast. Would recommend.


Is it just me being blind, or does it not support https?


It supports https through Nginx. HTTPS terminates at Nginx.


I'm pretty sure it supports https


Puma is not for me, but Ragel seems very interesting.



