
I keep seeing almost this exact comment from different people when phoenix is mentioned. It is quite a learning curve I might try again one day.

What makes this better than say a traditional ruby/rails or django app with maybe some htmx to save doing the JS side of things?



You get concurrent request processing spread across all of your CPU/vCPU cores out of the box. The fallback controller pattern is incredible for boilerplate error handling. Worst-case latency is very stable. The query builder/data mapper, Ecto, is IMO far better than ActiveRecord, being more explicit and preventing N+1s out of the box. EEx is built on compile-time linked lists rather than run-time string interpolation like the options in Rails and Django.


My opinions on this as a rails dev

> You get a concurrent request processing spread across all of your cpu/vcpu cores out of the box

That's neat, but this doesn't matter until you reach serious scale; scaling a rails app horizontally by adding more server instances works for a long time

> The fallback controller pattern for error handling is incredible for boilerplate error handling

Sounds just like controller inheritance in rails

> Worst case latency is very stable

So is rails'. Worst-case latency is generally caused by slow SQL queries or having to render complex documents (which can easily be offloaded to the background)

> The query builder/data mapper, Ecto, is IMO far better than ActiveRecord, being more explicit and preventing N+1s out of the box

I don't have a problem with ActiveRecord, and while N+1s are easy to create, there are a ton of tools to help prevent these in rails. It can be a hindrance for junior devs or devs without rails experience, though

Once things get complex, you're gonna be writing SQL directly anyway

> Eex is built on compile time linked lists

Cool, but that sounds irrelevant for 99.9% of cases; string interpolation isn't what causes rails apps to be slow


I'd say those points deserve a deeper look. Take concurrency for example. It is not only about scaling, it can actually affect every step from development to production:

1. Development is faster if your compiler (or code loader), tasks, and everything else is using all cores by default.

2. You get concurrent testing out-of-the-box that can multiplex on both CPU and IO resources (important given how frequently folks complain about slow suites).

3. The ability to leverage concurrency in production often means less operational complexity. For example, you say you can offload complex document rendering to a background tool. In Elixir this isn't necessarily a concern, because there are no worries about "blocking the main thread". Compare Phoenix Channels with Action Cable: in Action Cable you must avoid blocking the channel, so incoming messages are pushed to background workers, which then pick them up and broadcast. This adds indirection and operational complexity. In Phoenix you just do the work from the channel. Even in a small app, that is fewer pieces and less to keep in your head.
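To make point 3 concrete, here is a minimal sketch of doing heavy work directly inside a Phoenix channel (the module, topic, and `MyApp.Reports` function are invented for illustration; `join/3`, `handle_in/3`, and `push/3` are the standard Phoenix.Channel callbacks):

```elixir
defmodule MyAppWeb.ReportChannel do
  use Phoenix.Channel

  def join("reports:" <> _id, _params, socket), do: {:ok, socket}

  # The slow render happens right here, in the channel's own process.
  # No worker queue is needed: the BEAM preemptively schedules
  # lightweight processes, so other channels and HTTP requests keep
  # making progress while this one churns.
  def handle_in("generate", %{"report_id" => id}, socket) do
    html = MyApp.Reports.render_slow_report(id)  # hypothetical, may take minutes
    push(socket, "report_ready", %{html: html})
    {:noreply, socket}
  end
end
```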

At the end of the day, you may still think the bullet points from the previous reply are not sufficient, but I think they are worth digging a bit deeper (although I'm obviously biased). :)


1. makes sense, our app is limited locally because docker on mac is not fast

2. Our test suite is pretty good! It's limited by the longest test runs (basically Selenium tests that are slow because browser interactions are slow), and CircleCI allows pretty easy parallel testing

3. I think the same reason we offload long-running rails processes still applies. The only thing a long-running rails request blocks is further requests to the particular thread handling it. Usually some other request ends up getting queued to that thread, which is the issue. So it's a load balancing issue, as well as a UX issue (you don't want an HTTP request to take 3 minutes loading a long report). Unless Elixir can spin up new threads indefinitely, this is still a load balancing issue: determining which server incoming requests are routed to

We don't use action cable so I can't comment there


2. You should be running multiple Selenium instances (or equivalent) in parallel even on your machine (unless you run out of memory or CPUs).

3. Exactly. This is not a problem in Elixir. If it takes 3 minutes to render a request, all other incoming requests will progress and multiplex accordingly across multiple CPUs and IO threads. This also has a direct impact on the latency point brought up earlier.


Let me quickly address these to the best of my ability, knowing Jose's answers are probably better :)

> That's neat, but this doesn't matter until you reach serious scale, as scaling a rails app horizontally by throwing more server instances works for a long time

You can do that, but it's cheaper to get more out of each CPU, and Elixir/BEAM gives you that for free with a similarly flexible dynamic language.

> Sounds just like controller inheritance in rails

Not exactly: it works on the basis of pattern matching, and the fallback functions are included in the Plug (think Rack) pipeline. This makes it faster, and you don't have the problem of inherited methods stepping on each other. You also get to match on really specific shapes and cases to handle granular errors without much effort or cognitive overhead, and you don't need to catch exceptions the way people often do in Rails controller error handling with rescue_from.
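A sketch of what that looks like, close to what the Phoenix generators produce (module names and `MyApp.Accounts.fetch_user/1` are invented; `action_fallback` is the real Phoenix.Controller macro):

```elixir
defmodule MyAppWeb.UserController do
  use MyAppWeb, :controller

  # If show/2 returns anything other than a %Plug.Conn{},
  # the fallback controller's call/2 clauses are tried instead.
  action_fallback MyAppWeb.FallbackController

  def show(conn, %{"id" => id}) do
    with {:ok, user} <- MyApp.Accounts.fetch_user(id) do
      render(conn, :show, user: user)
    end
    # On {:error, ...} the `with` falls through and the tuple
    # is handed to the fallback controller as-is.
  end
end

defmodule MyAppWeb.FallbackController do
  use MyAppWeb, :controller

  # Each error shape gets its own pattern-matched clause;
  # no inheritance, no rescue_from.
  def call(conn, {:error, :not_found}) do
    conn
    |> put_status(:not_found)
    |> json(%{error: "not found"})
  end

  def call(conn, {:error, %Ecto.Changeset{} = changeset}) do
    conn
    |> put_status(:unprocessable_entity)
    |> put_view(json: MyAppWeb.ChangesetJSON)  # hypothetical error view
    |> render(:error, changeset: changeset)
  end
end
```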

> So is rails, worst case latency is generally caused by slow SQL requests or having to render complex documents (which can be offloaded to the background easily)

Your Elixir application will often be doing things like background work and managing a key-value store. You can do all of this and saturate the CPU without latency exploding: the scheduler in the BEAM will de-schedule long-running processes and put them at the back of the run queue. Again, you get this for free.
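For a flavor of that "background work and key-value store inside the same runtime" point, a minimal sketch (the supervisor name and `MyApp.Reports` call are assumptions; `Task.Supervisor` and `Agent` are standard library):

```elixir
# Fire-and-forget background work stays inside the same BEAM node:
# the task runs in its own lightweight process, and the preemptive
# scheduler keeps it from starving request-handling processes.
Task.Supervisor.start_child(MyApp.TaskSupervisor, fn ->
  MyApp.Reports.build_and_email(report_id: 42)  # hypothetical slow job
end)

# An in-memory key-value store is just another process, e.g. an Agent:
{:ok, _pid} = Agent.start_link(fn -> %{} end, name: MyApp.Cache)
Agent.update(MyApp.Cache, &Map.put(&1, :answer, 42))
Agent.get(MyApp.Cache, &Map.get(&1, :answer))
```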

> I don't have a problem with ActiveRecord, and while N+1s are easy to create, there are a ton of tools to help prevent these in rails. Can be a hinderance for junior devs or devs without rails experience though

That's all well and good, but it's still a nice feature in Ecto. Ecto also hews closer to SQL, and you can compose reusable pieces of queries in a way that is far more manageable than anything ActiveRecord scopes offer. We (where I work) write anything short of complex CTEs in Ecto's DSL, a lot of stuff I'd never try to do with ActiveRecord. It's just a lot closer to SQL and gets some nice compile-time assurances.
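A hedged sketch of that query composition (the `MyApp.User` schema and its fields are invented; `where/3`, `order_by/3`, and `preload/2` are the real Ecto.Query macros):

```elixir
defmodule MyApp.UserQueries do
  import Ecto.Query

  # Each function takes a query and returns a query,
  # so the pieces compose with |> in any combination.
  def active(query), do: where(query, [u], u.active == true)
  def newest_first(query), do: order_by(query, [u], desc: u.inserted_at)

  # Associations must be preloaded explicitly; this is how
  # Ecto prevents accidental N+1 queries by design.
  def with_posts(query), do: preload(query, :posts)
end

MyApp.User
|> MyApp.UserQueries.active()
|> MyApp.UserQueries.newest_first()
|> MyApp.UserQueries.with_posts()
|> MyApp.Repo.all()
```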

> Cool but sounds irrelevant for 99.9% of cases, string interpolation isn't what causes rails apps to be slow

Rendering collections of nested partials in Rails has always been slow and memory-hungry. This isn't an issue with EEx, and templates render faster locally too.


BEAM (the Erlang VM).

This video explains it better than I can: https://www.youtube.com/watch?v=JvBT4XBdoUE


I knew which video this was before I clicked on it. It's a brilliant demo of the power of the BEAM and how it can be leveraged for web applications.



