Web application with a single entry point

Posted on 01.01.2015 by Kim N. Lesmer.

Whether you are developing web applications or web APIs, you will without a doubt run into The Do's and Don'ts of the Web Sheep Packs.

I was reading a bit on the net about so-called RESTful applications and I stumbled upon an article called Writing modern day PHP applications. The author of the article states that he sometimes comes across third-party code that he believes exhibits features that "were customary in the 90's".

One such feature is applications not having a single point of entry. The author states:

Back in the days where the rewrite module was an uncommon luxury creating multiple .php files in the webroot was the standard.

Like so many others, the author has completely misunderstood the point of the architecture he is working with!

I don't know if it's because of the way things work today, where people have become used to simply "point and click, or throw it out when something isn't working", or if it's a lack of thoroughness in studying the underlying technology, but something is horribly wrong with how most things are being developed - especially web applications.

It's like something becomes modern because some popular person says it is the right thing to do, or because someone manages to seduce the masses and everyone follows suit.

Sheep jumping off a cliff

Back "in the days", getting things to work was often difficult. It took a lot of work and a lot of patience. But there was a benefit: it provided a thorough understanding of the underlying technology and how to get the best results using it.

I cannot say it any better than Rasmus Lerdorf (the inventor of PHP):

Just make sure you avoid the temptation of creating a single monolithic controller. A web application by its very nature is a series of small discrete requests. If you send all of your requests through a single controller on a single machine you have just defeated this very important architecture. Discreteness gives you scalability and modularity. You can break large problems up into a series of very small and modular solutions and you can deploy these across as many servers as you like.

What Rasmus Lerdorf is describing is in reality based upon the Unix philosophy of "Doing One Thing and Doing It Well".
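
To make that concrete, here is a minimal sketch of what discrete entry points can look like in PHP: two small scripts, each handling exactly one kind of request. The file names and the find_products() helper are made up for illustration; the point is only that each script is a complete, self-contained handler that could be moved to another server on its own.

    <?php
    // status.php - one discrete entry point: report service health, nothing else.
    header('Content-Type: application/json');
    echo json_encode(['status' => 'ok', 'time' => time()]);

    <?php
    // search.php - another discrete entry point: answer a search request, nothing else.
    require_once __DIR__ . '/lib/products.php'; // hypothetical shared helper providing find_products()

    $term = isset($_GET['q']) ? trim($_GET['q']) : '';
    header('Content-Type: application/json');
    echo json_encode(['query' => $term, 'results' => find_products($term)]);

Deleting, replacing or moving one of these scripts doesn't touch the other - that is the discreteness and modularity Lerdorf is talking about.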

Even if you're working on a web application that's never going to need to scale up to multiple servers, you're only adding layers of complexity where the application can break (which is something we see every day), and simulating yet another single entry point makes the application noticeably slower.

That is one reason why most frameworks are notoriously inefficient and horrible. The more you abstract away from the core, the less efficient it becomes.

Some people state that it is all about being DRY (Don't Repeat Yourself): that if you have multiple PHP files (or whatever programming language you use for web development) handling requests, you will have duplicated code.

Having multiple files handling requests has nothing to do with code duplication. It all depends on how the application is designed and how the separate structures are bundled together - and no application is completely free from some code duplication anyway, even if only minor.

Other people talk about encouraging the separation of "business logic" and "layout logic", unified access to the objects for database queries, service calls, and so on. But such arguments have nothing to do with a single entry point in the web application whatsoever, and they really exist as the result of a lack of experience with scalability. Separation of logic can be handled in numerous ways without using a single entry point.
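
To show that this doesn't require a front controller, here is a rough sketch (all file names and functions below are hypothetical) of an ordinary entry script that stays DRY through a shared include and keeps business logic and layout apart through a plain template file.

    <?php
    // products.php - an ordinary entry point, no front controller in sight.

    require_once __DIR__ . '/inc/bootstrap.php'; // shared setup: config, $db connection, session - written once
    require_once __DIR__ . '/lib/products.php';  // business logic: get_products() and friends

    // Business logic: fetch the data.
    $category = isset($_GET['category']) ? $_GET['category'] : null;
    $products = get_products($db, $category);

    // Layout logic: hand the data to a template and nothing more.
    $title = 'Products';
    require __DIR__ . '/templates/products.tpl.php';

Every other entry point requires the same bootstrap and the same libraries, so the shared code exists exactly once; both the DRY argument and the separation argument are answered without funnelling every request through a single file.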

Another author, in an article called "Why does PHP suck?", states:

PHP’s 'drop ‘n run' concept also has a lot of shortcomings. Sure, it’s nice to just drop a script in a folder on the web server and have it run. That is, until you realize that now you have an infinite number of entry points into your application, even though you just need one. If you take a look at the big boys like WordPress, you’ll see that all requests are being redirected to a single entry point and then dispatched further - which is how it should be.

Such a statement is an example of a complete lack of knowledge of the underlying technology, which is a real problem today! Most developers don't understand the technology they are working with; they just do what is "most hip".

A single web server, even one running multiple web applications, has in reality got only one single entry point! And looking at how the so-called "big boys like WordPress" are doing things definitely doesn't help. He states that "all requests are being redirected", which is not true. Requests are being rewritten, which is something completely different!
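
For readers who haven't configured this themselves, the two really are different mechanisms. With Apache, a redirect (mod_alias) sends a 3xx response back to the browser, which then makes a second HTTP request, while a rewrite (mod_rewrite) maps the URL to another file internally, within the same request. A rough sketch of each - the rules actually shipped with WordPress are a bit longer, but the mechanism is the same:

    # A redirect: the client receives a 302 with a Location header
    # and issues a brand new request for the new URL.
    Redirect 302 /old-page.php /new-page.php

    # A rewrite: any URL that isn't an existing file is handed to index.php
    # internally - the browser never notices, but index.php now has to parse
    # the original URL itself and dispatch accordingly.
    RewriteEngine On
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule . index.php [L]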

Rewriting requests only to have the web application interpret those requests, using yet another layer of complexity, seriously affects the efficiency of that application even on modern hardware, unless of course you're running a simple website with no real traffic. It's just plain stupid to add all those layers of complexity.

Religiously following the "do's and don'ts" of the different "web sheep packs" will without a doubt make you embark on a wild goose chase, trying to perfect something that has become inherently imperfect - or actually just plain and simply wrong.

Always think outside of the box! Do what works and Keep It Simple, Stupid!