Phoenix - Elixir web framework

Why use Elixir rather than Erlang itself for programming web applications on the Erlang VM? This question comes up often in Erlang circles. The short answer is that you could not create something akin to Phoenix or Ecto in pure Erlang. Elixir can be described as 80% Erlang and 20% additional features that drastically improve the usability of the language, especially for writing web applications. Jose Valim created Elixir to combine the performance of the Erlang VM with the productivity of Ruby; his aim was to enable a Rails-like framework on the concurrent and fault-tolerant Erlang VM.

The Erlang team deliberately kept the language primitives to a minimum to keep the language simple, since they were programming for telephone-switch hardware; indeed, it is claimed that more than half of the world's telephone traffic passes through Erlang-powered switches. Its creators did not want to introduce features popular in other languages unless those features helped solve problems in the telecom domain. It is claimed that even Joe Armstrong, the chief creator of Erlang, once said that Erlang is not suited for web development.

The core syntax of Elixir is actually very small, and everything else is built by gluing this small syntax together with macros. Almost anything that is ad hoc in other languages is a macro in Elixir: if/else, case, defmodule, def, and so on. Macros are the most important feature of Elixir; they made possible all the DSL machinery in Phoenix and Ecto. Features like polymorphism, productivity tools and the macro facilities discussed earlier go a long way in simplifying programming, especially web programming, though Elixir appears more verbose when compared to the terse, concise syntax of Erlang, which reads much like ordinary mathematical notation.
At the cost of some verbosity, Elixir gives Erlang users interfaces/polymorphism, tools that enhance productivity and, above all, Lisp-like macros. Macros let the authors of the language keep the core minimal and add other functionality on top of it, and they let library writers extend the language to suit their needs.
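To make the macro point concrete, here is a minimal sketch of how a control-flow construct can be defined in user code as a macro, just as Elixir's own if/case/defmodule are macros built on a tiny core. The module and function names are illustrative, not from any library:

```elixir
defmodule MyMacros do
  # my_unless/2 is defined here in ordinary user code; at compile time
  # it expands into an `if` expression via quote/unquote.
  defmacro my_unless(condition, do: block) do
    quote do
      if unquote(condition), do: nil, else: unquote(block)
    end
  end
end

defmodule Demo do
  require MyMacros

  # Returns :small unless x is greater than 10.
  def check(x), do: MyMacros.my_unless(x > 10, do: :small)
end

Demo.check(3)   # => :small
Demo.check(42)  # => nil
```

This compile-time expansion is the mechanism behind the routing and pipeline DSLs we will meet in Phoenix below.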

Why Phoenix?  

Like Jose Valim, Chris McCord, the creator of Phoenix, also came from the Rails world. The history behind the genesis of Phoenix runs as follows: McCord was asked by his employer to build a chat (soft real-time) application using EventMachine, a concurrency library in Ruby that, like Node.js, uses an event-loop model. His experience was that threads would sometimes crash without any notification. So he decided to build a channel/WebSocket implementation in Elixir to handle the challenges of the real-time web, since Elixir offers syntax and tools like Ruby's but runs on the fault-tolerant and distributed Erlang VM. He built it as part of a framework in the well-tested MVC mould, on top of the Plug middleware we saw earlier, and he also drew concepts such as pipelines and endpoints from other frameworks.

McCord explained the intention behind Phoenix thus: "I wanted a Web framework that could take on the modern Web's real-time requirements of many connected devices. I found Elixir and realized it would allow a framework to be both highly productive and have world-class performance. Thus Phoenix was born."

Phoenix has the official support of the Elixir creator and of a considerable Elixir community. Valim became a committer to Phoenix and kept his own Dynamo MVC framework only for experimenting with ideas. In his presentations and talks Jose Valim promotes Phoenix a lot: "The Erlang VM is one of the few runtimes widely deployed in production that was designed for running network services. The Erlang VM provides the foundation that allows Phoenix to be extremely performant, while holding 2 million open connections on a single machine. Elixir adds productive and expressive tooling to this robust runtime."

Comparing Phoenix to Rails and the other frameworks he had used, Chris McCord said: "Web frameworks that I had used before gave me the productivity I wanted, but I had to sacrifice performance... Most languages and frameworks I had experience in scaled very poorly when handling long-running, persistent connections. With Phoenix, we can handle millions of active connections on a single server in a language that is such a joy to use." He also explained that not only server-push applications, which require persistent (WebSocket) connections, but also standard HTML5 or JSON API applications benefit from Phoenix's microsecond response times, which translate directly into a better end-user experience.

Requirements

  1. Elixir
  2. Node.js, version >= 5.0.0 (the reason will be explained later)
  3. Hex, the Elixir package manager: mix local.hex
  4. The Phoenix archive, needed to run the phoenix mix tasks: download the latest phoenix_new archive, save it to the filesystem, and run mix archive.install /path/to/local/phoenix_new.ez
  5. PostgreSQL, which may be installed with "postgres" as username and password; the PostgreSQL "bin" folder, which contains its command-line client psql, should be added to the system path. (It will not be used in this sample.)
  6. Plug, Cowboy, and Ecto: these are Elixir or Erlang projects that are part of Phoenix applications by default. We won't need to do anything special to install them; if we let Mix install our dependencies as we create our new application, they will be taken care of for us.

Sample and steps: Create a folder for your Elixir apps and change to it at the command prompt.

Execute    mix phoenix.new hello

  • mix is a build tool that provides tasks for creating, compiling, running and testing Elixir projects. It also manages your application's dependencies. It resembles lein in Clojure; incidentally, the first version of mix was delivered by the same people who created Leiningen, the Clojure build tool we saw earlier.
  • phoenix.new is the mix task that creates a new Phoenix application. (You can see the list of all mix tasks with mix help.)
  • hello is the name of our application.

 
The files and folders will be generated. If you type "y" when asked to install dependencies, they will be downloaded. You will also be asked to install coffee-script with the -g option; you may not require that now.
You will be asked to execute

  mix ecto.create

This is also not necessary for this sample: Ecto is the tool for interacting with a database, and in this sample we are not using one.
You can have a look at the directory structure created.

As this is a web application most of the code relevant for this sample is in web folder. We will see them one by one in the course of this article.
Execute
mix phoenix.server

You will see the info line:
"Running Hello.Endpoint with Cowboy using http on port 4000".

Go to your Browser and navigate to
localhost:4000

You will see Figure-1 shown below:

Figure-1

Some of the terminologies used:

  1. Endpoint: the entry point of a request. The endpoint defines a plug (function) pipeline through which requests are sent; a pipeline, as we know, takes the result of one function and passes it as the first argument of the next. Hello.Endpoint (lib/hello/endpoint.ex) is our pipeline/assembly line, and in it we define our plugs (the workers).
  2. Plug.Conn: it contains all the information we need to know about the request, such as the headers and the request body. A function plug accepts a conn and an optional set of options for the plug to use, and returns a conn. In other words, each plug does something with the Plug.Conn, as specified by its opts, and returns a modified Plug.Conn.
  3. Router: in endpoint.ex we can see plug Hello.Router, which refers to the web/router.ex file. The router is a routing DSL that takes the pain out of compiling a dispatch list in the raw, Erlang-term-based format that Cowboy expects. In Hello.Router we specify which routes we serve. Around line 16 of this file you can see that the scope "/" goes through the :browser pipeline, which is defined near line 4; this pipeline is another list of plugs that process your Plug.Conn.
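The plug contract described above is simple enough to imitate without any dependency. In the conceptual sketch below, a plain map stands in for %Plug.Conn{}; the module and field names are illustrative, not from the Plug library:

```elixir
defmodule PlugSketch do
  # A "function plug" is any function that takes the conn plus
  # options and returns a (possibly modified) conn.
  def put_locale(conn, opts) do
    locale = conn.params["locale"] || Keyword.get(opts, :default, "en")
    # Record the decision in assigns, as real plugs commonly do.
    put_in(conn, [:assigns, :locale], locale)
  end
end

conn = %{params: %{}, assigns: %{}}
conn = PlugSketch.put_locale(conn, default: "en")
conn.assigns.locale  # => "en"
```

In a real router pipeline this would be wired up with a line such as plug :put_locale, default: "en".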

Request-flow
See Figure-2 below.

As mentioned above, there are no separate HTTP request and HTTP response objects in Phoenix. The Plug.Conn struct contains the various fields that web applications need for both requests and responses. The request fields hold information about the inbound request; they are parsed by the adapter for the web server you are using (Cowboy is the default web server in Phoenix). Initially a conn comes in almost blank and is filled out progressively by the different plugs in the pipeline. For example, the endpoint may parse parameters, and the application developer will set fields, primarily in assigns. Functions that render set the response fields such as the status, change the state, and so on. Plug.Conn also defines many functions that directly manipulate those fields, which makes abstracting more complex operations, such as managing cookies or sending files, straightforward.
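This "progressively filled" flow is just Elixir's pipe operator at work: each stage is a conn-to-conn transformation. The sketch below is illustrative only, with a plain map standing in for the conn and the stages standing in for plugs and render:

```elixir
# A bare conn at the start of the pipeline.
conn = %{path: "/hello/ganesh", assigns: %{}, status: nil, resp_body: nil}

response =
  conn
  |> Map.update!(:assigns, &Map.put(&1, :name, "ganesh"))  # a plug fills assigns
  |> Map.put(:resp_body, "Hello Ganesh!")                  # render sets the body
  |> Map.put(:status, 200)                                 # the response is one more transformation

response.status  # => 200
```

No stage mutates anything; each returns a new value, which is why the response can be described as nothing more than the final transformation of the connection.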

Some coding conventions: in the router we say that the "/" root path will be handled by PageController.index, and the index action is asked to render the index.html page. Phoenix will look for a file called web/views/page_view.ex that defines a module named after the application, here Hello.PageView. In page_view.ex there is the line use Hello.Web, :view,

which means: add the code from the view function in the Hello.Web module (web/web.ex, around line 42). In the Hello.Web.view function, notice that we define where to find templates such as index.html:
use Phoenix.View, root: "web/templates"

It should be found in web/templates/page. The file is actually named index.html.eex because it embeds Elixir code to be executed within the HTML. If you open index.html.eex you won't see the header or footer markup; that's because index.html.eex is only the main content. The header markup lives in web/templates/layout/app.html.eex.

The code in app.html.eex that provides a place-holder for the insertion of the main content is given below:

<main role="main">
  <%= render @view_module, @view_template, assigns %>
</main>

Controller

The code generated for the controller is given below:

web/controllers/page_controller.ex

defmodule Hello.PageController do
  use Hello.Web, :controller   # 1)

  def index(conn, _params) do  # 2)
    render conn, "index.html"  # 3)
  end
end

  1) The controller function defined in the Hello.Web module (in web.ex) is used. You can go through that file and see that it takes care of the necessary imports.
  2) index takes two arguments, conn and params. The "_" prefix on _params indicates that the parameter is unused in this example.
  3) The template name is passed to render; control goes through the view before the page is rendered.

Phoenix views have two main jobs. First, they render templates (including layouts); the core rendering function is defined in Phoenix itself, in the Phoenix.View module. Second, views provide functions that take raw data and make it easier for templates to consume, similar to decorators or the facade pattern. As mentioned above, Phoenix assumes a strong naming convention from controllers to views to the templates they render: the PageController requires a PageView to render templates in the web/templates/page directory. Templates are files into which we pass data to form complete HTTP responses; for a web application these responses would typically be full HTML documents.

The majority of the code in template files is often markup, but there will also be sections of Elixir code for Phoenix to compile and evaluate. The fact that Phoenix templates are pre-compiled makes them extremely fast. EEx, the default template system in Phoenix, is quite similar to ERB in Ruby; it is actually part of Elixir itself, and Phoenix uses EEx templates even when generating the files of a new application. As we learned earlier, templates live by default in the web/templates directory, organized into directories named after a view, and each directory has its own view module to render the templates in it.
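Since EEx ships with Elixir itself, the template mechanics can be tried outside Phoenix. Note that Phoenix additionally pre-compiles templates into functions, whereas EEx.eval_string interprets the template at runtime:

```elixir
# A tiny EEx template: <%= ... %> substitutes the result of the
# enclosed Elixir expression; @name reads from the assigns.
template = "<h2>Welcome to <%= @name %>!</h2>"

EEx.eval_string(template, assigns: [name: "Phoenix"])
# => "<h2>Welcome to Phoenix!</h2>"
```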

The generated view file is given below:

web/views/page_view.ex

defmodule Hello.PageView do
  use Hello.Web, :view   # 1)
end

1) The view function in the Hello.Web module is used; it takes care of the necessary imports.
The code in the template file which displays the welcome message is given below:

Extract from index.html.eex

<h2><%= gettext "Welcome to %{name}", name: "Phoenix!" %></h2>

The layout file app.html.eex is also involved

<main role="main">
  <%= render @view_module, @view_template, assigns %>
</main>

The above code is the place-holder for the main content (in this case index.html.eex). To add new functionality we have to do the following:

  • Add a new route to router.ex.
  • Add the appropriate controller, view and templates. As we saw, a route matches an incoming request to an action in a controller, and every controller needs a view to render a template.

Add the following route to router.ex:

get "/hello/:name", HelloController, :world

You can add the following code to HelloController for the world action/function:

web/controllers/hello_controller.ex

defmodule Hello.HelloController do
  use Hello.Web, :controller

  def world(conn, %{"name" => name}) do   # 1)
    render conn, "world.html", name: name
  end
end

  1) Besides the conn, world receives the request parameters as a map. The %{"name" => name} pattern extracts the "name" value from the params, which is then passed on to the view through render.
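The controller head is ordinary pattern matching on the params map, not a separate options mechanism. For a route like get "/hello/:name", a request to /hello/ganesh produces a params map with string keys, which can be destructured directly (the map literal below is illustrative):

```elixir
# Phoenix puts URL segments into params under string keys.
params = %{"name" => "ganesh"}

# The same match that appears in the function head of world/2:
%{"name" => name} = params

name  # => "ganesh"
```

If the pattern does not match (for instance, the key is absent), the function clause simply does not apply, which is how Phoenix controllers dispatch on the shape of the request.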

web/views/hello_view.ex

defmodule Hello.HelloView do
  use Hello.Web, :view   # 1)
end

  1) The view function defined in web.ex is used.

web/templates/hello/world.html.eex

<h1>Hello <%= String.capitalize @name %>!</h1>


The <%= %> brackets surround the code we want to substitute into the rendered page. @name will have the value of the :name option that we passed to render.

Start the server and navigate in the browser to
localhost:4000/hello/ganesh
and you will see the greeting page (Figure-3).

Conclusion

One cannot claim that the ideas used in Phoenix are new. Rather, the success of Phoenix lies in how its creator amalgamated the best ideas from many places. Its metaprogramming capabilities remind us of Lisp/Clojure and of the domain-specific languages (DSLs) of Ruby. The method of composing services as a series of functional transformations is reminiscent of Clojure's Ring. Phoenix achieves its throughput and reliability by climbing onto the shoulders of Erlang and Cowboy. Similarly, channels and the reactive-friendly APIs combine the best features of some of the best JavaScript frameworks and of Vert.x, the Java library we saw earlier.

It's the combination of so many ideas from so many places that has worked so well. In Phoenix, web applications are just big functions, and Phoenix encourages breaking big functions down into smaller ones; it then provides a place to register each smaller function explicitly, in a way that's easy to understand and replace. All of these functions are tied together by the Plug library, which can be considered a specification for building applications that connect to the web. Each plug consumes and produces a common data structure called Plug.Conn, and each plug transforms the conn in some small way until a response is eventually sent back to the user. Responses are just transformations on the connection: one might be tempted to think of a request as a plug function call and of a response as its return value, but that is not the case; a response is just one more action on the connection. In future articles we will see how a CRUD web application can be created with Phoenix and Ecto, the database wrapper in Elixir, and how Phoenix's WebSocket implementation, channels, can be used to create a soft real-time chat application.







