Yesterday I came across this excellent article by Pamela Fox – http://blog.pamelafox.org/2013/05/frontend-architectures-server-side-html.html. In it she goes through how her company uses all three stated architectures, discusses why, and how they are used by the end users. It struck me as fascinating because I feel like we are going through this exact same struggle in the XPages community – what is the best architecture to create applications for our users? I also feel fortunate to have experienced all three (to an extent) in the past 18 months, and I can empathize with and understand Pamela’s perspective.
I am writing this blog article to first highlight the original article, but also to put an XPages spin on it and discuss some of the points raised therein. I have been meaning to write on this subject for a while, and Pamela’s article said it better than I could because she has real examples to discuss whereas mine were all hypothetical.
Please read the article before continuing, otherwise the context would be rather lost 🙂
Server-side HTML (“Web 1.0”)
“This architecture suffers the most in terms of usability – it’s very hard for users to do many interactions in a small amount of time – but it does have definite benefits of easy linkability, shareability, and searchability.”
XPages out of the box is a technology built on this paradigm. The tools provided make it quick and easy to create functional applications based entirely on a constant request-response back and forth with the server. The bulk of the application logic lives on the server, and that is very much akin to traditional Lotus Domino development. XPages does at least move Lotus Domino in the right direction, in that we now have partial refresh and the programmatic ability to execute server-side logic without having to completely reload the page.
JS widgets
This is something I really want to get into more. If you look at Backbone.js or other JavaScript libraries which model data, I think you will see that they have a lot of architectural similarity to Lotus Domino. Flat data is managed in views (Notes documents), models (Notes forms) and collections (Notes views), except that the bulk of the logic is performed client-side. Backbone is based on the premise that REST is the communication medium used to get data and update it on the back end. We have REST services in the ExtLib and, in R9, Data Services out of the box.
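To make the analogy concrete, here is a minimal sketch of the pattern in plain JavaScript (no Backbone dependency, so the shape is visible on its own). The URL paths mirror the Domino Data Service convention but are hypothetical here, as is the pluggable `transport` function:

```javascript
// A Backbone-style model/collection pair mapped onto Domino concepts.
// transport(method, url, body) stands in for XMLHttpRequest/fetch so the
// sketch stays self-contained and testable.

class DominoDocument {            // ~ a Notes document (a "model")
  constructor(attrs, transport) {
    this.attributes = Object.assign({}, attrs);
    this.transport = transport;
  }
  get(key) { return this.attributes[key]; }
  set(key, value) { this.attributes[key] = value; }
  save() {                        // PUT an existing doc, POST a new one
    const unid = this.get('@unid');
    const method = unid ? 'PUT' : 'POST';
    const url = unid
      ? '/app.nsf/api/data/documents/unid/' + unid
      : '/app.nsf/api/data/documents';
    return this.transport(method, url, this.attributes);
  }
}

class DominoView {                // ~ a Notes view (a "collection")
  constructor(viewName, transport) {
    this.url = '/app.nsf/api/data/collections/name/' + viewName;
    this.transport = transport;
    this.models = [];
  }
  fetch() {                       // GET the view entries as JSON
    return this.transport('GET', this.url).then((rows) => {
      this.models = rows.map((r) => new DominoDocument(r, this.transport));
      return this.models;
    });
  }
}
```

In a real application the transport would call the ExtLib REST controls or the R9 Data Services endpoints; the point is that the client owns the models, and the server is reduced to a data source.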
In many senses this is more work than creating a traditional XPages/Notes Domino application because the developer has to code everything by hand – there are no (IBM XPages provided) tools to help develop this architecture.
But the payoff to the end user is significant – the data transfer is as minimal as possible and the transactions with the server are fast – ultimately leading to a faster user interface.
As the application complexity scales, so does the amount of work (and code) necessary to make this happen. This also raises a question about maintainability across an enterprise.
Single-page web apps
With the introduction of the ExtLib Dynamic Content control, single-page web applications are a real possibility within XPages. The user never has to leave the “XPage” to be able to interact with their entire application. With the controlled use of partial refresh the user can move from View Panel, to Document and back again without having to reload the entire page…ever.
This leads to a very consistent, smart user interface.
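As a rough sketch of how that looks in XSP markup (control and attribute names as I remember them from the ExtLib – treat this as illustrative, not copy-paste ready):

```xml
<xe:dynamicContent id="content" defaultFacet="viewPanel">
	<xp:this.facets>
		<!-- the "view" screen -->
		<xp:panel xp:key="viewPanel">
			<!-- view panel goes here -->
		</xp:panel>
		<!-- the "document" screen -->
		<xp:panel xp:key="documentPanel">
			<!-- document form goes here -->
		</xp:panel>
	</xp:this.facets>
</xe:dynamicContent>
```

Switching facets is then a small piece of SSJS in a partial-refresh event – something along the lines of `getComponent("content").show("documentPanel")` – so the browser only ever swaps out that one region of the page.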
The difference between a corporate web application and a “web-site” becomes more apparent, though, as the size and complexity of the application increase.
Twitter is a single-page application – but the overall front-end user interface features are relatively limited. Maybe 5–10 different screens, and most of those are read-only – the amount of “update” is limited to posting tweets and profile updates.
In a corporate application you could have a significant number of “forms” which require update and an equally heavy number of data views. The complexity for keeping this all on one page does not scale well.
Another consideration is the number of script libraries, etc., you have to load when the user first accesses the application. This is more significant if you have to support older IE browsers or XPiNC, which cannot handle dynamic script injection via AJAX. If you have 100 “pages” and only one of them uses a specific plugin (an org chart, for example), you have to load that script library regardless of whether or not the user is ever going to access that page. That’s kinda nasty.
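One way around the load-everything-up-front problem is to lazy-load a library the first time a “page” actually needs it. This is a minimal, framework-neutral sketch – `loadScript` is a hypothetical parameter standing in for whatever injection mechanism (AMD require, a script tag, etc.) your target browsers support:

```javascript
// makeLazyLoader returns a require-like function: each "page" asks for
// the libraries it needs, and a given URL is only fetched once no matter
// how many pages request it.

function makeLazyLoader(loadScript) {
  const cache = {};               // url -> Promise, so each library loads once
  return function require(url) {
    if (!cache[url]) {
      cache[url] = loadScript(url);
    }
    return cache[url];
  };
}
```

With this in place, only the org chart page ever calls `require('orgchart.js')`, and the other 99 “pages” never pay for it. Older IE and XPiNC would still need a `loadScript` implementation they can actually handle, which is exactly the constraint noted above.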
So what is best for XPages?
I believe that all development decisions should be grounded in the user experience, not in the ease of development. For too long Lotus Notes has succeeded, and then failed, by allowing people to build functional applications quickly (cheaply) and then having other people mock them for “working” but looking like crap and/or being too “hard” to use.
That said, there is a definite bottom-line balance between the $500,000 application overhaul to code everything in the browser and make an amazing application, and the $100,000 application which works great, is maintainable by the in-house development team and is 80% awesome for the end user. The builders of amazing Internet websites do not necessarily have the same restrictions and decision makers as “corporate developers”.
So I think the answer is a balance – if you have a large application broken into 5 functional areas, then why not have a widget or single page like application for each functional area?
The point is that there are many factors which go into designing the application architecture from the ground up – and we *ALL* need to understand *ALL* the options so that we can make the right decision for the customer and the end users.
Please – discuss 🙂
Mark,
I have tried all three methodologies in the past and we went with JS widgets. We have something similar to Backbone, but it utilizes a secure JSON structure built on top of the Domino security. It uses XPages only as a web container. I believe the JS widgets methodology provides the best balance.
Did you ever face performance issues on the client machines when using the JS way? My gut says: “leave the responsibility on the server, not on the client”, so the server-side way is more client-independent and enables supporting options in a central way. Just my 2 cents.
Much less so than “in the old days” but performance is still an issue in XPiNC and IE<9. In some cases it is DOM volume which is really the issue and not so much JavaScript processing time.
Good point though.
Not really. But you have to carefully plan out the DOM. If you are using Dojo, do not declaratively instantiate the widgets – it is faster to programmatically instantiate them. Also, if possible I try not to use Dojo Dijits; they are heavy, and that is one of the reasons I went and started creating my own widgets using Bootstrap. They almost look better, and more of the heavy lifting is done by CSS. Also, use asynchronous callbacks as much as possible to reduce the lag effect. Another thing that I do is to have the widget already part of the DOM rather than use the template methodology used with Dijits. As a result, I do not have to deal with the issues that partial refreshes seem to create for some people.
Excellent article and summary!
The point is all about usability vs. server load. Nowadays we do not struggle with poor server power, poor network performance and the like at all – we struggle with crappy user interfaces. The trick is to keep the balance between “cool” interfaces the user WANTS to use and the load we shift to our servers. The term “server” now refers more to virtualized machines, so performance is still a point on our lists. BUT with today’s technology we are able to consolidate the two parts using a frugal solution. Using REST/Ajax, frameworks, and platforms where your programs are cached, etc., these former issues and problems seem to disappear. As you pointed out, XPages gives you the ability to benefit from all these achievements of recent years. It combines a kind of RAD with stable server-side application-building technologies like JSF. Lucky people we are 🙂
Actually, I believe the overwhelming dearth of options has arguably been more detrimental than beneficial to the web development part of the platform.
Connecting to every technology on the back end has immense power in a corporate environment – but poor usability negates the awesome power in an instant.
Sure, but this is our part – to connect the back-end power and possibilities to a front end the user can handle. I just wanted to point out the pitfalls of using all the front-end stuff, because it may lead you into other pitfalls with the client programs (browser compatibility, older programs).
Great post Marky! We are planning a UI refresh and an architectural review to support it, and these are the exact factors we have been mulling over. We primarily rely on server-side HTML in our application but, if possible, I’d like to try the JS widget route for the sole purpose of improving application response. Not really sure whether what we do can be done that way, but it will be fun looking into it.
Russ, my suggestion is to imagine that there are three parts: the server data, the client, and something that generates the JS widgets for display in the client – all independent of each other.
This is a subject I have been wrestling with for a few months.
I am on a Domino web project that has been in production for a couple of years now – a heavily used medical app, where doctors, nurses etc. log in for 8hr+ sessions. I am the minority in a .NET team, so they all come from a WPF MVVM standpoint. The Domino app serves up / receives all data (JSON) via a REST-ish API and it is an MVVM sort of architecture, except we don’t use any of the frameworks like knockout.js, ember.js, angular.js, etc. – using one of these would have simplified a whole lot of things client-side. The web pages have some basic scaffolding, but the web client takes care of all UI building. We use Dojo heavily but, as someone above mentioned, we have backed a lot of it out due to memory/perf issues… I realise some of those issues may have been self-induced, but that’s for another thread!
Pros: server load is reduced – it just gives us data or accepts data, with only very basic HTML generation. Other applications use the data from Domino, so building other apps to integrate is relatively straightforward as they can use the same API, and they can use whole stacks of JS already written (again, had we used ember.js etc., this would have been made loads easier). It’s easy to integrate all the web functionality users have come to expect when visiting other public sites.
Cons: as well as the points made in other posts, our JS code is gigantic. Debugging JS code that has callbacks, in callbacks, in callbacks (due to the nature of the app, many processes have to get kicked off async) can be a real headache. Cross-browser hasn’t been too much of an issue. BUT… and this is the bit I still don’t know the answer to: how to prevent duplication of code server-side and client-side. We can make the app run faster by having logic worked out client-side and not going to the server, but we have to run that logic server-side on submission of the data, so now we have to keep two code bases in sync. It’s a real pain.
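For what it’s worth, one commonly suggested answer to that last pain point is to keep the rules themselves in a single dependency-free JavaScript module, loaded by the browser and – where the back end can evaluate JavaScript (Rhino, Node, etc.) – by the server as well, so there is only one copy to maintain. A hedged sketch, with hypothetical field names:

```javascript
// A single validation module: pure functions, no framework or DOM
// dependencies, so the same file can run in the browser before submit
// and in a server-side JavaScript runtime on receipt of the data.

var patientRules = {
  validate: function (doc) {
    var errors = [];
    if (!doc.patientId || !/^[A-Z]\d{6}$/.test(doc.patientId)) {
      errors.push('patientId must be a letter followed by six digits');
    }
    if (!doc.admitted) {
      errors.push('admitted date is required');
    }
    return errors;               // an empty array means the document is valid
  }
};

// In a CommonJS environment (server side) export the same object;
// in the browser it simply remains a global.
if (typeof module !== 'undefined') {
  module.exports = patientRules;
}
```

Whether a given Domino back end can realistically evaluate the same file depends on the setup, so treat this as a direction rather than a drop-in fix – but it is the usual way out of keeping two rule sets in sync by hand.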