
Web Services Architecture – When to Use SOAP vs REST

SOAP (Simple Object Access Protocol) and REST (Representational State Transfer) are popular with developers working on system integration projects. Software architects design the application from various perspectives and decide, for various reasons, which approach to take when exposing a new API to third-party applications. As a software architect, it is good practice to involve your development team lead in the system architecture process.
This article, based on my experience, discusses when to use SOAP or REST web services to expose your API to third-party clients.

Web Services Demystified

Web services are part of Service-Oriented Architecture. They are used as the model for process decomposition and assembly. I have been involved in discussions where there was some confusion between web services and web APIs.
The W3C defines a Web Service generally as:

 

A software system designed to support interoperable machine-to-machine interaction over a network.

 

A Web API, also known as a Server-Side Web API, is a programmatic interface to a defined request-response message system, typically expressed in JSON or XML, which is exposed via the web – most commonly by means of an HTTP-based web server. (extracted from Wikipedia)

Based on the above definitions, one can infer when SOAP should be used instead of REST and vice versa, but it is not as simple as it looks. We can agree that web services are not the same as web APIs. Accessing an image over the web is not calling a web service but retrieving a web resource using its Uniform Resource Identifier. HTTP has a well-defined, standard approach to serving resources to clients and does not require a web service to fulfil the request.

 

Why Use REST over SOAP

Developers are passionate people. Let’s briefly analyze some of the reasons they mention when considering REST over SOAP:

 

REST is easier than SOAP

I’m not sure what developers refer to when they argue that REST is easier than SOAP. In my experience, depending on the requirements, developing REST services can quickly become just as complex as any other SOA project. What is your service abstracting from the client? What level of security is required? Is your service a long-running asynchronous process? These and many other requirements will increase the level of complexity. Testability: apparently it is easier to test RESTful web services than their SOAP counterparts. This is only partially true; for simple REST services, developers only have to point their browser at the service endpoint and a result is returned in the response. But what happens once you need to add HTTP headers, pass tokens or validate parameters? This is still testable, but chances are you will need a browser plugin to test those features. If a plugin is required, then the ease of testing is much the same as using SoapUI to test SOAP-based services.
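To make the point concrete, here is a minimal sketch of the kind of check that quickly outgrows the browser address bar, written with the HTTP client built into Java 11. The endpoint, header names and token value are placeholders invented for the illustration, not part of any real API discussed here.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestEndpointCheck {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/orders/42"))   // placeholder endpoint
                .header("Accept", "application/json")
                .header("Authorization", "Bearer <token-goes-here>")    // placeholder token
                .header("X-Correlation-Id", "smoke-test-001")           // placeholder custom header
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The checks you would otherwise hide behind a browser plugin or SoapUI
        System.out.println("Status: " + response.statusCode());
        System.out.println("Body  : " + response.body());
    }
}

Once a request needs this much set-up, running it from a unit test, SoapUI or a browser plugin amounts to much the same effort, which is the point being made above.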

 

RESTful Web Services serve JSON, which is faster to parse than XML

This so-called “benefit” relates to consuming web services in a browser. RESTful web services can just as well serve XML, or any other MIME type you desire. This article is not focused on JSON vs XML, and I will not be writing a separate article on the topic. JSON relates to JavaScript, and as JavaScript is so close to the web (providing interaction alongside HTML and CSS), most developers automatically assume that it is also tied to interacting with RESTful web services. If you didn’t know before, I’m sure you can guess that RESTful web services are language agnostic.
Regarding the speed of processing XML markup as opposed to JSON, a performance test conducted by David Lee, Lead Engineer at MarkLogic Inc, found this to be a myth.

 

REST is built for the Web

Well, this is true according to Roy Fielding’s dissertation; after all, he is credited with the creation of the REST architectural style. REST, unlike SOAP, uses the web’s underlying technology for transport and communication between clients and servers. The architectural style is optimized for the modern web architecture. The web has outgrown its initial requirements, as can be seen through HTML5 and the standardization of WebSockets. The web has become a platform in its own right, maybe a WebOS. Yet some applications, from financial systems to e-commerce, will still require server-side state.

 

Caching

When using REST over HTTP, it will utilize the features available in HTTP such as caching, security in terms of TLS, and authentication. Architects know that dynamic resources should not be cached. Let’s discuss this with an example: we have a RESTful web service that serves stock quotes when provided with a stock ticker. Stock quotes change every millisecond; if we make a request for BARC (Barclays Bank), the quote we received a minute ago may well be different two minutes later. This shows that we cannot always use the caching features implemented in the protocol. HTTP caching can be useful for client requests of static content, but if the caching features of HTTP are not enough for your requirements, then you should also evaluate SOAP, as you will be building your own cache either way rather than relying on the protocol.
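As an illustration, the sketch below uses JAX-RS annotations (assuming a JAX-RS runtime such as Jersey is available) to mark the volatile quote resource as non-cacheable while letting a slower-moving companion resource opt into HTTP caching. The paths and lookup methods are invented for the example.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.CacheControl;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Hypothetical quote service used only to illustrate per-resource cache directives.
@Path("/quotes")
public class StockQuoteResource {

    @GET
    @Path("/{ticker}")
    @Produces(MediaType.APPLICATION_JSON)
    public Response latestQuote(@PathParam("ticker") String ticker) {
        CacheControl noCache = new CacheControl();
        noCache.setNoStore(true);   // quotes change every millisecond: never cache
        noCache.setNoCache(true);
        return Response.ok(lookupQuote(ticker)).cacheControl(noCache).build();
    }

    @GET
    @Path("/{ticker}/profile")
    @Produces(MediaType.APPLICATION_JSON)
    public Response companyProfile(@PathParam("ticker") String ticker) {
        CacheControl oneHour = new CacheControl();
        oneHour.setMaxAge(3600);    // slow-moving content can safely use HTTP caching
        return Response.ok(lookupProfile(ticker)).cacheControl(oneHour).build();
    }

    // Stand-in lookups; a real service would query a market-data backend
    private String lookupQuote(String ticker)   { return "{\"ticker\":\"" + ticker + "\",\"price\":123.45}"; }
    private String lookupProfile(String ticker) { return "{\"ticker\":\"" + ticker + "\",\"name\":\"Barclays\"}"; }
}

If per-resource directives like these still cannot express your caching requirements, you are back to building an application-level cache, at which point the HTTP caching argument for REST loses much of its weight.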

 

HTTP Verb Binding

HTTP verb binding is supposedly a feature worth discussing when comparing REST and SOAP. Most public-facing APIs referred to as RESTful are really REST-like and do not implement all the HTTP verbs in the manner they are supposed to. For example, when creating new resources, most developers use POST instead of PUT, and even deleting resources is often done through a POST request instead of DELETE.
SOAP also defines a binding to the HTTP protocol; when bound to HTTP, all SOAP requests are sent as POST requests.
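For contrast, here is a hedged JAX-RS sketch of what using the verbs as intended can look like: PUT creates or replaces a resource at a client-known URI, DELETE removes it, and POST is reserved for creation where the server assigns the identifier. The resource name and in-memory store are invented for the example.

import java.net.URI;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

// Hypothetical "customer" resource, used only to contrast verb usage.
@Path("/customers")
public class CustomerResource {

    private static final Map<String, String> STORE = new ConcurrentHashMap<>();

    // POST to the collection when the server assigns the identifier
    @POST
    public Response create(String body) {
        String id = String.valueOf(STORE.size() + 1);
        STORE.put(id, body);
        return Response.created(URI.create("/customers/" + id)).build(); // 201 + Location header
    }

    // PUT to a known URI when the client controls the identifier (create or replace)
    @PUT
    @Path("/{id}")
    public Response createOrReplace(@PathParam("id") String id, String body) {
        STORE.put(id, body);
        return Response.noContent().build();
    }

    @GET
    @Path("/{id}")
    public Response read(@PathParam("id") String id) {
        String body = STORE.get(id);
        return body == null ? Response.status(404).build() : Response.ok(body).build();
    }

    // DELETE, rather than tunnelling the removal through POST
    @DELETE
    @Path("/{id}")
    public Response delete(@PathParam("id") String id) {
        STORE.remove(id);
        return Response.noContent().build();
    }
}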

 

Security

Security is rarely mentioned when discussing the benefits of REST over SOAP. Two simple security measures are provided at the HTTP protocol layer: basic authentication and communication encryption through TLS. SOAP security is well standardized through WS-Security. HTTP on its own is not secure, as seen in the news all the time, so web services relying on the protocol need to implement their own rigorous security. Security goes beyond simple authentication and confidentiality; it also includes authorization and integrity. When it comes to ease of implementation, I believe that SOAP is at the forefront.
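As a small illustration of those two HTTP-layer measures, the sketch below sends Basic credentials over TLS using Java 11’s HTTP client; the URL and credentials are placeholders. Basic authentication merely encodes the credentials, it does not encrypt them, which is why TLS is essential, and it still says nothing about authorization or message integrity.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthClient {

    public static void main(String[] args) throws Exception {
        // Placeholder credentials; Basic auth only Base64-encodes them, it does not encrypt
        String credentials = Base64.getEncoder()
                .encodeToString("alice:secret".getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/accounts")) // https, so TLS protects the header
                .header("Authorization", "Basic " + credentials)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode());
    }
}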

 

Conclusion

This was meant to be a short blog post, but it seems we got too passionate about the subject.
I accept that there are many other factors to consider when choosing SOAP vs REST, but I will oversimplify it here. For machine-to-machine communication, such as business processing with BPEL, or where transaction security and integrity matter, I suggest using SOAP. SOAP binding to HTTP is possible, and XML parsing is not noticeably slower than JSON in the browser. For building a public-facing API, REST is not the undisputed champion. Consider the actual application requirements and evaluate the benefits. The argument that REST is protocol agnostic and works on anything that has a URI is beside the point. According to its creator, REST was conceived for the evolution of the web. Most so-called RESTful web services available on the internet are really just REST-like, as they do not follow the principles of the architectural style. One good thing about working with REST is that applications do not need a service contract a la SOAP (WSDL). WADL was never standardized, and I do not believe developers would implement it; I remember looking for Twitter’s WADL when trying to integrate with it.
I will leave you to draw your own conclusions. There is only so much I can write in a blog post. Feel free to leave comments to keep the discussion going.


Liferay and AngularJS Made Simple: Connecting AngularJS to a Backend with REST and JSON

Introduction

Liferay is the leading open source enterprise portal. One may ask what an enterprise portal is, and the question is very valid as it has been asked on every single Liferay project I have worked on. This blog post is not about defining what an enterprise portal is, but it wouldn’t be a crime to provide a brief definition:

An enterprise portal is a web application which provides services required by an enterprise, such as user management, authentication and authorisation services, the ability to connect to third party applications, and a single point of access to multiple applications, hence the “portal”.

The above is my own definition, and it could be extended to encompass web content management, content management systems (CMS) and single sign-on (SSO). This post is about Liferay and the use of its web content management system (WCMS) to create single page applications using AngularJS. The motivation for creating portlets using AngularJS instead of Java is as follows:

  • Portlet development using Java is very expensive
  • Not many Java developers have portlet experience
  • Java portlet development requires heavy-duty tools such as build tools, an IDE and a JVM
  • Portlet developers need to be familiar with the Portlet API, lifecycle and framework

We will focus on Liferay’s built-in RESTful web services API, but remember that you can create your own custom web services using the Liferay Service Builder SDK.

Liferay RESTful API and Security

The Liferay RESTful and SOAP APIs implement the same security as the core library:

  • APIs can be secured so that only authenticated users can access them (AUTHENTICATION)
  • APIs can be secured so that only users with the right roles can execute certain API calls (AUTHORIZATION)

When creating your own custom API, Liferay Service Builder will create the necessary permissions for the web services API.
For a list of the APIs available in Liferay, point your browser to the following:

http://<your-server-address>:<your-server-port>/api/jsonws

Liferay provides a means of testing the service calls when the above URL is loaded. Most service calls will require authentication or a secure token to be passed along with the request. This level of security is required in an enterprise environment. It is possible to stop Liferay from checking for the secure token by setting the following in portal-ext.properties:

auth.token.check.enabled=false

Software development should promote code reuse; by separating the business logic from the portlet code, developers can share the business logic with third party applications.

Why Use AngularJS to Create Web Applications (Not Portlets)?

This is not a tutorial on AngularJS. Developers should take the same approach to developing Liferay web applications as they would to developing any AngularJS application.
AngularJS is a popular JavaScript framework that brings Object Oriented Development (OOD) and Model View Controller (MVC) patterns to the JavaScript community. Java developers are already accustomed to the methodology through the use of Spring MVC and JSF for front end development. Developers familiar with Google Web Toolkit (GWT) should find themselves in familiar territory. Now to answer the question of why use AngularJS to create web applications on Liferay.
AngularJS is JavaScript and can therefore be executed in the browser without recompilation or redeployment. The Liferay Web Content Management System (WCMS) provides an HTML editor and content versioning. The Liferay JSONWS API runs on the same server and can be accessed through JavaScript written in the WCMS. AngularJS modules can be written in a third party editor such as Notepad++ and uploaded to the Liferay Content Management System (CMS). The Liferay CMS provides a link to the latest version of the file, which can be referenced in the HTML/JavaScript code. By creating the web services in Java through Liferay Service Builder, the Java developer can focus on the business logic, including testing. The front end developer can use his skills in HTML and JavaScript to develop the user interfaces and any necessary interactions with the backend through the RESTful services. There is a clear separation of work and accountability. The learning curve for the Java developers creating the services will be minimal. To preview the live code, the front end developer only has to save the content (WCMS) and refresh the page to see the latest changes.
Here is a quick example:

 

 <div ng-app="" ng-controller="companiesController">
   <ul>
     <!-- Render one list item per asset entry returned by the JSONWS call -->
     <li ng-repeat="x in data">{{'title: ' + x.title + ', group Id: ' + x.groupId}}</li>
   </ul>
 </div>
 <script>
   // Controller declared as a global function (supported by AngularJS 1.2.x)
   function companiesController($scope, $http) {
     // Call Liferay's JSON web services API; adjust company-id and p_auth for your portal
     $http.post("http://localhost:8080/api/jsonws/assetentry/get-company-entries/company-id/10157/start/0/end/5?p_auth=cbSXanJ2")
       .success(function(response) { $scope.data = response; });
   }
   companiesController.$inject = ['$scope', '$http'];
 </script>
 <script src="http://ajax.googleapis.com/ajax/libs/angularjs/1.2.15/angular.min.js"></script>

You can copy the above into a new web content article and see the result displayed on the page. Make sure to change the system-specific values in the URL (the company id 10157 and the p_auth token) to match your own portal:

  • Company id: for ease of testing, you can retrieve this value from the control panel
  • p_auth: this value can be looked up programmatically, as it will change every time the user logs into the portal

Traditionally, the simple code above would require a JavaServer Pages or JSF application, which is slow for rapid prototyping. In software testing, portlet controllers are among the most complex components to test. By creating a clear separation between components, testers and automated tools can test each component individually. The sample code pulls information from the Liferay REST web services and displays a list of registered companies on the page.

Conclusion

Liferay has a rich set of features which allows developers to create enterprise components and applications. When working with RESTful services, the Liferay Web Content Management editor can act as an Integrated Development Environment in the browser. AngularJS is approaching maturity and is very popular with web developers. Liferay 7 (the next release as of writing) will introduce single page portlets, but this is already possible today with AngularJS and the RESTful web services API. Needless to say, you can use any web browser to create content in the Liferay CMS and debug your code in real time using tools such as Firebug.


Big Data Architecture Best Practices

The marketing departments of software vendors have done a good job making Big Data go mainstream, whatever that means. The promise is that we can achieve anything if we make use of Big Data: business insight and beating our competition into submission. Yet there is no well-publicised successful Big Data implementation. The question is: why not? Clearly this is a silver bullet in which businesses have invested billions of dollars with no return on investment! Who is to blame? After all, businesses do not have to publicise their internal processes or projects. I have a different view: the cause lies with the IT department. Most Big Data projects are driven by the technologists, not the business, and there is a great lack of understanding when it comes to aligning the architecture with the business vision for the future.

The Preliminary Phase

Big Data projects are no different from any other IT projects. All projects spur out of business needs and requirements. This is not The Matrix; we cannot answer questions which have not been asked yet. Before any work begins, or any discussion about which technology to use, all stakeholders need to have an understanding of:

  • The organisational context
  • The key drivers and elements of the organisation
  • The requirements for architecture work
  • The architecture principles
  • The framework to be used
  • The relationships between management frameworks
  • The enterprise architecture maturity

In the majority of cases, Big Data projects involve knowing the current business technology landscape, in terms of current and future applications and services:

  • Strategies and business plans
  • Business principles, goals, and drivers
  • Major frameworks currently implemented in the business
  • Governance and legal frameworks
  • IT strategy
  • Pre-existing Architecture Framework, Organisational Model, and Architecture repository

The Big Data Continuum

Big Data projects are not, and should never be, executed in isolation. The simple fact that Big Data needs to feed from other systems means there should be a channel of communication open across teams. In order to have a successful architecture, I came up with five simple layers/stacks for a Big Data implementation. To the more technically inclined architect, these will seem obvious:

  • Data sources
  • Big Data ETL
  • Data Services API
  • Application
  • User Interface Services
(Figure: Big Data Protocol Stack)

Data Sources

Current and future applications will produce more and more data, which will need to be processed in order to gain any competitive advantage from it. Data comes in all sorts, but we can categorise it into two types:

  1. Structured data – usually stored in a predefined format, for example using known and proven database techniques. Not all structured data is stored in a database; many businesses use flat files such as Microsoft Excel or tab-delimited files for storing data
  2. Unstructured data – businesses generate a great amount of unstructured data such as emails, instant messaging, video conferencing, internet content and flat files such as documents and images; the list is endless. We call this data “unstructured” because it does not follow a format that would make it easy for a user to query its content.

I have spent a large part of my career working on Enterprise Search technology, before the term “Big Data” was even coined. Understanding where the data comes from, and in what shape, is valuable to a successful implementation of a Big Data ETL project. Before a single line of programming code is written, architects will have to try to normalise the data to a common format.

Big Data ETL

This is the part that excites technologists, especially development teams. There are so many blogs and articles published every day about Big Data tools that they create confusion among non-technical people. Everybody is excited about processing petabytes of data using the coolest kid on the block: Hadoop and its ecosystem. Before we get carried away, we first need to put a baseline in place:

  • Real-time processing
  • Batch processing
(Figure: Big Data – Data Consolidation)

The purpose of Extract Transform Load projects, whether Hadoop is used or not, is to consolidate the data into a single-view Master Data Management (MDM) system for querying on demand. Hadoop and its ecosystem deal with the ETL aspect of Big Data, not the querying part. The tools used will depend heavily on the processing needs of the project, either real-time or batch; Hadoop, for instance, is a batch processing framework for large volumes of data. Once the data has been processed, the MDM system can be stored in a data repository, either NoSQL based or an RDBMS; this will depend only on the querying requirements.
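To ground the batch side of the discussion, here is a hedged sketch of a Hadoop MapReduce job that consolidates raw customer records by key before they are loaded into the MDM repository. The input layout, class names and the naive merge rule are assumptions made for the illustration, not a prescription.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical batch consolidation job: groups raw CSV records by customer id
// so the reduced output can be loaded into the MDM repository.
public class CustomerConsolidationJob {

    public static class ExtractMapper extends Mapper<Object, Text, Text, Text> {
        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            // Assumed input layout: customerId,source,payload
            String[] fields = value.toString().split(",", 3);
            if (fields.length == 3) {
                context.write(new Text(fields[0]), new Text(fields[1] + "|" + fields[2]));
            }
        }
    }

    public static class ConsolidateReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text customerId, Iterable<Text> records, Context context)
                throws IOException, InterruptedException {
            // Naive "golden record": concatenate every source record for the customer
            StringBuilder merged = new StringBuilder();
            for (Text record : records) {
                if (merged.length() > 0) merged.append(';');
                merged.append(record);
            }
            context.write(customerId, new Text(merged.toString()));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "customer-consolidation");
        job.setJarByClass(CustomerConsolidationJob.class);
        job.setMapperClass(ExtractMapper.class);
        job.setReducerClass(ConsolidateReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

A real consolidation job would apply proper survivorship rules in the reducer rather than simply concatenating the source records.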

Data Services API

As most of the limelight goes to the ETL tools, a very important area is usually overlooked until later, almost as an afterthought. The MDM will need to be stored in a repository so that the information can be retrieved when needed. In a true Service Oriented Architecture spirit, the data repository should expose interfaces to external third party applications for data retrieval and manipulation. In the past, MDM systems were mostly built on an RDBMS, and retrieval and manipulation were carried out through the Structured Query Language. This does not have to change, but architects should be aware of other forms of database, such as NoSQL stores. The following questions should be asked when choosing a database solution:

  • Is there a standard query language?
  • How do we connect to the database: DB drivers or available web services?
  • Will the database scale when the data grows?
  • What security mechanisms are in place for protecting some or all of the data?

Other questions specific to the project should also be included in the checklist.
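As a sketch of the service-oriented idea above, the repository can sit behind a small data-service interface so that callers never depend on the storage technology; a JDBC-backed implementation is shown below, and a NoSQL-backed one could be substituted without touching the callers. The interface, table and column names are hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Optional;

// Hypothetical data-service contract: applications and web services depend on
// this interface rather than on the underlying store.
interface CustomerDataService {
    Optional<String> findGoldenRecord(String customerId);
}

// One possible backing implementation using plain JDBC against an RDBMS.
class JdbcCustomerDataService implements CustomerDataService {

    private final String jdbcUrl; // e.g. "jdbc:postgresql://host:5432/mdm" (placeholder)

    JdbcCustomerDataService(String jdbcUrl) {
        this.jdbcUrl = jdbcUrl;
    }

    @Override
    public Optional<String> findGoldenRecord(String customerId) {
        String sql = "SELECT golden_record FROM customer_mdm WHERE customer_id = ?";
        try (Connection connection = DriverManager.getConnection(jdbcUrl);
             PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setString(1, customerId);
            try (ResultSet rs = statement.executeQuery()) {
                return rs.next() ? Optional.of(rs.getString(1)) : Optional.empty();
            }
        } catch (SQLException e) {
            throw new IllegalStateException("MDM lookup failed for " + customerId, e);
        }
    }
}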

Business Applications

So far, we have extracted the data, transformed it and loaded it into a Master Data Management system. The normalised data is now exposed through web services (or DB drivers) to be used by third party applications. Business applications are the reason to undertake a Big Data project in the first place. Some will argue that we should hire Data Scientists (?). According to many blogs, the Data Scientist’s role is to understand the data, explore the data, prototype (new answers to unknown questions) and evaluate their findings. This is interesting, as it reminds me of the motion picture The Matrix, where the Architect knew the answers to the questions before Neo had even asked them and decided which ones were relevant or not. That is not how businesses are run. It would be extremely valuable if the data scientist could subconsciously suggest (as in Inception) a new way of doing something, but most of the time the questions will come from the business, to be answered by the Data Scientist or whoever knows the data. The business applications will be the answer to those questions.

User Interface Services

User interfaces are the make or break of the project: a badly designed UI will affect adoption regardless of the data behind it, while an intuitive design will increase adoption, and maybe users will even start questioning the quality of the data. Users will access the data in different ways: mobile, TV and web, for example. Users will usually focus on a certain aspect of the data and will therefore require the data to be presented in a customised way. Other users will want the data to be available through their current dashboards and to match their existing look and feel. As always, security will also be a concern. Enterprise portals have been around for a long time and are usually used for data integration projects. Standards such as Web Services for Remote Portlets (WSRP) also make it possible for user interfaces to be served through web service calls.

Conclusion

This article shows the importance of architecting a Big Data project before embarking on it. The project needs to be in line with the business vision, with a good understanding of the current and future technology landscape. The data needs to bring value to the business, and therefore the business needs to be involved from the outset. Understanding how the data will be used is key to success, and taking a service oriented architecture approach will ensure that the data can serve many business needs.