Learn how to implement Infinite Scroll Pagination in AngularJS

I bet you have seen, and are pretty used to, the old-fashioned pagination that looks something like this:

The pagination style above suffers from a significant UX issue: the user has to guess which page holds the result he is looking for. He clicks and clicks in the hope of finding the right page, gets frustrated, and finally leaves your site.

Also, you are asking for too many inputs from the user to traverse between pages: Previous, Next, page numbers. It turns out, though, that this is something users have really gotten used to.

For a developer, too, such a pagination scheme is very easy to implement, as it loads only limited data.

How will you react if I improve the user's experience by collapsing multiple pages into a single page? The user no longer needs to click between pages, because all the records are shown on one page now, and I guess that makes him happy!

At the same time, you don't need to return the entire data set at once, which means you can still send the same chunks of data records as before. The bonus is that you never need to resend data you have already sent: if records 11-20 have already been sent and the user wants to see them again, the request will not reach the server; the client handles it internally.

Let’s do the magic

Let's first discuss our design approach for showing paginated records on the same page. Please refer to the following diagram:

Design Approach

In the first column of the diagram above, the browser displays only a few records, i.e. the first page, just as in the older pagination approach. We observe the user's scroll events, and when we detect that the user has reached the bottom of the record list we initiate an AJAX request for the next page of records; we do not replace the current page's records but append to the current record list. The user now experiences scrolling down a single page, while you keep sending the data in chunks via AJAX requests.

I am going to implement this design in AngularJS, but you can implement it in your framework of choice as well. I will take the example of an application showing a list of Courses.

Back-end Service Contract

You need to define service methods which can provide the data accordingly. We need 2 methods, as follows (a minimal client-side service sketch follows the list):

  1. getCourseCount() returns the total number of courses in the system, so that the application can stop loading more once it reaches the count limit.
    URL: /v1/courses?count
  2. getCourses(offset: Number, limit: Number) returns the list of records starting at offset, up to limit records.
    URL: /v1/courses/offset/:offset/limit/:limit
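
Here is a minimal sketch of a matching client-side service, wired to the two URLs above. It assumes the count endpoint returns the number in the response body, and it exposes the callback-style count()/list() calls used by the controller later in this post; adapt it to your own back end.

angular.module('app.services.course.courseService', [])
.factory('CourseService', ['$http', function ($http) {
  return {
    courses: {
      // getCourseCount(): fetch the total number of courses
      count: function (params, success) {
        $http.get('/v1/courses?count').success(function (data) {
          success(data); // assumes the server returns the count in the body
        });
      },
      // getCourses(offset, limit): fetch one chunk of records
      list: function (params, success) {
        $http.get('/v1/courses/offset/' + params.offset + '/limit/' + params.limit)
          .success(function (data) {
            success(data); // array of course records for this chunk
          });
      }
    }
  };
}]);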

Directive to Load Records on Scroll Down

The application now needs a directive which you can reuse for all pagination throughout the application. This directive should be generic and should take a callback expression as a parameter, so that each page can supply its own logic for loading the next chunk of records.


angular.module('app.directives.pvScroll', [])
.directive('pvScrolled', function() {
  return function(scope, elm, attr) {
    var raw = elm[0];

    elm.bind('scroll', function() {
      if (raw.scrollTop + raw.offsetHeight >= raw.scrollHeight) {
        scope.$apply(attr.pvScrolled);
      }
    });
  };
});

The directive above binds a 'scroll' listener to the current element, and as soon as scrolling reaches the bottom of that element it invokes the expression that was passed in when the directive was applied.

Template to Render List

Use the directive coded above in your view template so that it binds the 'scroll' event to your container. The container could be anything, either the full browser window or just a fraction of the page, e.g. a div showing the list of records; in that case you need to give the container a fixed height, so that whenever scrolling reaches its end the event fires and more records are loaded. Apply the following CSS to your container.


.courses-list{
  position   : relative;
  width      : 100%;
  height     : 500px;
  overflow-y : auto;   /* the container must be user-scrollable for 'scroll' events to fire */
}

Assign the CSS class defined above to the container and bind the pv-scrolled directive to it as follows:


<div>
  <div>
    <span>
      Showing {{resultList.length}} out of {{total}} records
    </span>
  </div>

  <div>
    <div class="courses-list" pv-scrolled="loadMoreCourses()">
      <div ng-class-odd="'course-box'" ng-class-even="'course-box-bgcolor'" ng-repeat="item in resultList">
        <div>
          <!-- render each course row here -->
        </div>
      </div>
      <!-- This div is shown whenever an AJAX request to fetch more records is active -->
      <div ng-show="loadingResult" class="span9 loading-inline alert alert-info">
      </div>
      <!-- If the server has no more records to load, this div tells the user it is the end of the records -->
      <div ng-show="!loadingResult && (resultList.length == total)" class="span9 pv-message">
        <legend>
          No more results to display.
        </legend>
      </div>
      <!-- If no records were found at all, this div is shown -->
      <div ng-show="!loadingResult && resultList.length == 0" class="span9 pv-message">
        <legend>
          No record found!!!
        </legend>
      </div>

    </div>
  </div>
</div>

Controller to Load Records on Scroll Down

In the controller we need to configure the route, initialize the pagination state, and define the methods that load records:


angular.module('app.search.courses', [
  'ui.state',
  'app.services.course.courseService'
])
.config(['$stateProvider', function (stateProvider) {
  stateProvider
    .state('courses', {
      url: '/courses',
      views: {
        'headerView@': {
          templateUrl: 'templates/header.tpl.html'
        },
        '': {
          templateUrl: 'courses/templates/courses.tpl.html',
          controller: 'CourseCtrl'
        },
        'footerView@': {
          templateUrl: 'templates/footer.tpl.html'
        }
      }
    });
}])
.controller('CourseCtrl', ['$scope', 'CourseService',
  function (scope, CourseService) {
    scope.pagination = {
      noOfPages: 1,
      currentPage: 0,
      pageSize: 10
    };
    scope.resultList = [];

    scope.loadMoreCourses = function () {
      if (scope.loadingResult) {
        return;
      }
      if (scope.pagination.currentPage >= scope.pagination.noOfPages) {
        return;
      }
      scope.pagination.currentPage = scope.pagination.currentPage + 1;
      scope.offset = (scope.pagination.currentPage - 1) * scope.pagination.pageSize;
      scope.limit = scope.pagination.pageSize;
      scope.loadingResult = true;
      CourseService.courses.list({offset: scope.offset, limit: scope.limit}, function (records) {
        // append the new chunk instead of replacing the list,
        // and clear the flag only once the response has arrived
        scope.resultList = scope.resultList.concat(records);
        scope.loadingResult = false;
      });
    };

    scope.initializeResultList = function () {
      CourseService.courses.count({}, function (count) {
        scope.total = count;
        scope.pagination.noOfPages = Math.ceil(count / scope.pagination.pageSize);
        scope.loadMoreCourses();
      });
    };

    scope.initializeResultList();
  }]);

In the controller, we have declared scope.pagination, which captures pageSize (the limit, or chunk size, we load on every request), noOfPages (the total number of requests required to traverse the whole list) and currentPage (the last accessed page, so that when a new load request comes in we can calculate which chunk to load next).

There are 2 methods defined as well. loadMoreCourses() is called every time scrolling reaches the bottom of the container, and when the page first loads. It checks whether a load-more AJAX request is already in flight and, if so, does not initiate another one.

The other is initializeResultList(), which is called once when the controller loads; it sets the total number of records and loads the first chunk.

In the HTML we have declared 3 different divs besides the result list:

  1. Loading div: shown whenever an AJAX request to load more records is in progress.
  2. No more records to load: shown when the user has reached the end of all the records available in the system, i.e. the application will not load any more.
  3. No record found: shown if there are no records to display at all.

You can see a similar implementation to the one discussed above in the screenshot below, from www.proversity.com.

Proversity - All Courses

Conclusion

By now you should be able to use this technique in any of your AngularJS applications, or implement the design in your specific framework. If you have any suggestions or questions regarding this post, feel free to leave a comment; I would love to exchange ideas with you.

by Barkat Dhillon

Tapping Big Data for Increased Agility and Competitiveness in Business Operations

Stupendous growth of data volumes in recent years has been behind the rise of the phenomenon of big data, which has presented new challenges as well as new opportunities for business enterprises. Organizations can take advantage of the big data environment with analytic solutions to derive information about key business entities such as customers, products, suppliers, manufacturing processes and more.

Enterprise executives see exciting opportunities in big data. However, the excitement is tempered by the continuing challenge of finding a solution that meets critical and frequently changing business requirements; a solution that does so delivers increased agility and competitiveness in business operations.

Data warehouse technology facilitates the creation of a database to capture, integrate, aggregate and store data from a range of sources. The data can later be queried, reported on and analyzed for decision-making purposes.

A significant upfront development effort is required to set up the infrastructure for repeatable use of the data. The steps include creating:

  • a business model of the data
  • a logical data definition, called the schema
  • a physical database design
  • an extract/transform/load (ETL) process
  • business views of the data for data delivery

SQL queries can be used on the data warehouse to retrieve information. As SQL is a declarative language, the user need not specify where to find the data, how to integrate it or how to manipulate it. This dramatically reduces the cost of creating an analytic application.

Data visualization turns complex data sets into simple diagrams that can make unseen patterns and connections easy to understand. It helps navigate the information glut and enables executives to see what they really want.

Enterprises have now recognized the value of representing research data in an innovative way. Data experts use visualization techniques to create charts that can be immensely helpful to marketing managers. Users are more likely to engage with data that has been turned into visual content.

By working with a company that is part of the Amazon AWS Partner Network, you can be sure of the quality of their data services. They can also be trusted to develop cloud applications. Zero in on a company that has ample technical resources to provide better services to clients.

Free Webinar: Introduction to Cloud Computing

Free Webinar for Cloud Computing Beginners

We are going to do a free webinar on the 15th of April.

In this hour-long webinar we walk through the whys, whats and hows of cloud computing. We also look at the business drivers behind cloud computing and discuss the benefits, common use cases, and the financial and business upside. We also give an elementary demonstration of how to get started with Amazon Web Services.

Interested? Click here to register.

Amazon Activate | Startups Rejoice!

At BluePi we do many projects for startups. These vary from building green-field applications and cloud migrations to performance re-engineering. So when we find a cloud provider focusing on startups, we rejoice, and so do our clients!

While cloud providers like Microsoft and VMware continue to grow on the back of some big wins in the government sector, Amazon continues to be the preferred destination for startups. Well, the fact that Dropbox, Pinterest and Instagram are some of the hot startups that started on AWS says something, no?

If you are a startup and are not aware of Activate you are missing out on an amazing infrastructure package. Don’t believe me?

As a startup you get support, credits and training among other things to get started with your amazon journey.

Now here’s the bonus.

AWS Activate now sponsors virtual office hours with AWS Solutions Architects! These experts are available for consultation about security, architecture, and performance. Being an AWS partner, we have used their help on many occasions and can vouch for the value and quality of their inputs.

So don’t be scared of embracing the cloud – there are enough resources for you to get started. Here’s a video introduction of Activate!

Have questions? Use the Contact Us page to reach out to us.

Source: AWS Blog

Automate Configuration Management with Chef

In the previous blog we saw how to automate deployment of Play Framework applications using Chef; in this piece let's focus on automating configuration management for Play Framework applications using Chef.

There are a few steps involved in making configuration updates automatic; we'll discuss these steps one by one.

1. Externalize your application.conf
Every Play project has a configuration file under the conf folder named application.conf. In development mode we put the configuration for database connections, SMTP settings etc. there, which works well locally, but the same configuration would not work in a dev, staging or production environment. So the first step is to move application.conf out of the code and deploy it as a separate artifact. In the previous post we saw how to run and deploy Play applications in a production-like environment.

2. Set up a Databag for your environment-specific values.
Chef offers a great tool, called a databag, for keeping values specific to each environment separate. A databag is nothing but a file stored on the Chef server in JSON format. The structure of a sample databag for an application named myApp is as follows:
{
  "id": "myApp",
  "values": {
    "db_host_name": "<db_host_name_here>",
    "db_name": "app_db",
    ....
  }
}
Let's try and understand the anatomy of the databag. Create a databag named after the environment; in our case we create a databag for the "dev" environment, so the databag is named "dev" with the id "myApp", and all the values are nested within the "values" field.
In a Chef recipe you would access the databag as follows:

env_name=node.chef_environment
databag = data_bag_item(env_name, "myApp")

The above lines give me the databag object as a whole; to retrieve the value of the db host I would do:

dbHostName = databag['values']['db_host_name']

Now we have a databag created for the dev environment; similarly, you can create databags for your other environments. One possible knife workflow is sketched below.
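
As an illustration, assuming the JSON above is saved as data_bags/dev/myApp.json in your chef-repo (the path is a placeholder), the databag could be created and uploaded with knife roughly like this:

# create the (empty) databag named after the environment on the Chef server
knife data bag create dev
# upload the item with id "myApp" from the local JSON file
knife data bag from file dev data_bags/dev/myApp.json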

3. Templatize your application.conf
Chef has support for creating templates that keep placeholders to be filled with the relevant values at runtime. Let's create a template for application.conf now. The template file is always an erb file, so going by that nomenclature we name our file application.conf.erb.
You must already be familiar with what the application.conf of a Play application looks like. Let's see what an application.conf.erb looks like:

#Application Configuration File
application.secret="<%= @applicationSecretKey %>"
db.default.profile.driver=scala.slick.driver.MySQLDriver
db.default.url="jdbc:mysql://<%= @dbHostName %>:3306/<%= @dbName %>"
.....

The things enclosed within <%= %> are the placeholders you want filled at runtime. You should only leave placeholders for values which change with the environment; e.g. the db hostname is going to be different for production than for dev, but if you are using MySQL for both environments you can simply hardcode the driver name (it is completely your choice which values you want placeholders for).

You must be wondering where these values are going to be populated from. Let's see the Chef code which fills these placeholders with values from the databag. In the recipe under your cookbook, write a template block as follows:

template "#{config_dir}/application.conf" do
source "application.conf.erb"
variables({
:applicationSecretKey => databag['values']['db_host_name'],
:dbHostName => databag['values']['db_host_name'],
:dbName => databag['values']['db_name'],
....
...
})
end

When the above code is executed, Chef looks for a file named application.conf.erb in your cookbook's templates/default or templates/<your_os_name> folder and creates a file named application.conf at the location specified as #{config_dir}/; so if you have declared config_dir as /opt/config, it will create a new application.conf under /opt/config/.

The placeholders in the erb file are replaced with the values from the corresponding values in the databag.

4. Externalize your application.conf.erb
The step above works perfectly the first time you deploy, because you simply create a copy of your application.conf, put placeholders in place of the hardcoded values, and place it in your cookbook. In subsequent deployments, however, you have to update the application.conf.erb in the cookbook whenever additions, deletions or modifications are made to application.conf during development.

This approach is clumsy because the developer working on the application code is always required to make changes in 2 different projects: the application codebase and the deployment codebase.
To make this more convenient for programmers, simply create the erb file in your application codebase and publish it as an artifact which can be downloaded at runtime by the Chef recipe. Let's see how that could be achieved. Say you publish application.conf.erb as an artifact on TeamCity, Jenkins or any other CI tool that you use.

Add a snippet for downloading the erb file from a url in your recipe as follows

remote_file "#{download_location}/application.conf.erb" do
source "#{databag['values']['jenkins_url']}/application.conf.erb"
mode "0644"
action :create
end

The snippet above downloads application.conf.erb to the defined download_location from the URL you have defined in your databag (the download URL can differ per environment). Now we need to make a small change to the template block we wrote above, so that it uses the template file (.erb) from download_location instead of picking it up from within the cookbook:

template "#{config_dir}/application.conf" do
source "#{download_location}/application.conf.erb"
local true
variables({
:applicationSecretKey => databag['values']['db_host_name'],
...
})
end

Adding the line that says "local true" tells the template block to use an erb file found at the given location on your node.

Now your configuration will automatically be updated with every build once you run this recipe as part of your deployment: the remote_file block downloads the erb file from your CI URL to a specific location, and the template block picks up the erb file from that location and uses it to create a new application.conf.

Simply run chef-client on your node and the property file will automatically be updated with any modifications:

sudo chef-client

This way we have freed the programmers from touching the Chef code to update the erb file; instead, we add a step to the development process which mandates that the programmer make an entry in application.conf.erb for every change made to the application's configuration.
You can unit test your recipes using chef-solo (a sample invocation follows), and once done simply upload your cookbook to the Chef server.
The approach is not specific to Play Framework configurations; you can apply the same recipe to configuration files used by any framework.
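
For reference, a typical local chef-solo test run looks roughly like this; the solo.rb and node.json paths are placeholders for your own solo configuration and run list:

# solo.rb points at your local cookbooks; node.json holds the run_list to test
sudo chef-solo -c solo.rb -j node.json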

Automating Play Applications Deployment using Chef

Extreme Programming goes hand in hand with the Scrum practices that we follow in modern software development. The promises made by implementing Scrum can only be fulfilled if we follow the development practices that complement it. Extreme Programming (XP) practices are a must if we want a work environment which is dynamic and responsive. Continuous Integration remains the practice most widely adopted in the world of software development; even teams that don't believe in Agile do believe in an automated system for delivering code to dev, staging or production environments. CI helps build an environment where the customers or users of an application can access the latest developments and provide prompt feedback.

CI becomes an overhead, or somebody's full-time job, if it is not automated. Luckily, with the advent of Chef and Puppet it has become much simpler to automate all the steps in the build and release cycle without knowing too much about Linux and shell programming. That means we don't need Linux system administrators to develop a build/release system; we can write code in Ruby to achieve all of that without really going into the details of shell scripting.

In this post let's build a recipe to deploy Play applications with Chef.
1. Retrieving Artifacts
Play offers a dist mode for your applications to be deployed in production. There are 2 ways we can retrieve the deployable artifacts, listed as follows:
1. Check out your code from GitHub on the node (where you deploy your application) and run "play dist" at the application root. This creates a zip file with all your library dependencies and a start script to run the application. So if you specified the app name as play_app and version 1.0.0-SNAPSHOT in the Build.scala file, the artifact created would be play_app-1.0.0-SNAPSHOT.zip, which contains a lib folder holding all the jar dependencies and a start script which runs your Play application in production mode.

2. In this approach we move the dist step from approach 1 to a CI server, which could be Jenkins, TeamCity, Bamboo or any CI tool of your choice, and in our cookbook we simply download the dist artifact at deployment time.

This approach is better suited because you don't want to load your production machine with things it is not supposed to do; let the compilation of your deployable artifact happen on the CI server, and whenever you intend to deploy the latest version, just download it from there.

Now let's start writing some code that does that for us. We define a remote_file block in our recipe which downloads the artifact from the CI server, or any URL where you have published your artifact:

remote_file "#{installation_dir}/#{appName}.zip" do
source "#{dist_url}"
mode "0644"
action :create
end

Here the values of installation_dir and dist_url can either be defined as attributes in default.rb under the attributes folder of your cookbook, or picked up from a databag (a minimal attributes sketch follows). Say "/usr/src" is the installation directory and appName is "play_app"; the snippet above downloads the zip file from dist_url and places it at /usr/src/play_app.zip.
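
For illustration, a hedged attributes/default.rb might look like this; all attribute names and the URL are assumptions to be adapted to your cookbook:

# attributes/default.rb -- illustrative defaults only
default[:play_app][:installation_dir] = "/usr/src"
default[:play_app][:app_name]         = "play_app"
# URL where your CI server publishes the dist artifact (placeholder)
default[:play_app][:dist_url]         = "http://ci.example.com/artifacts/play_app-1.0.0-SNAPSHOT.zip"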

The next step is to unzip the archive and assign the correct permissions to the start script:

bash "unzip-#{appName}" do
cwd "/#{installation_dir}"
code < <-EOH
rm -rf #{installation_dir}/#{appName}
unzip #{installation_dir}/#{appName}.zip
chmod +x #{installation_dir}/#{appName}/start
rm #{installation_dir}/#{appName}.zip
EOH
end

The above code snippet runs a bash script at installation_dir (/usr/src in this case). The first line removes any existing directory holding your deployable artifacts; the next steps unzip the file and assign execute permission to the start script; finally it removes the downloaded zip file.

Now we are ready to launch the Play application using the "start" script, but before that we will do a few more things: put a configuration file in place, create a logger file, and build a service which starts the application automatically in case the node reboots.

First we create the conf file to be used by our application. We need to templatize the application configuration; to achieve this, put an application.conf.erb file under the templates/default folder in your cookbook. This template file looks like:

#Application Configuration File

application.secret="<%= @applicationSecretKey %>"
db.default.url="jdbc:mysql://<%= @dbHostName %>:3306/<%= @dbName %>"
....

All keys enclosed within <%= %> are going to be replaced when the recipe is run by Chef, with the help of the following code in the recipe; config_dir is the directory where you want to keep your configuration file:

template "#{config_dir}/application.conf" do
source "application.conf.erb"
variables({
:applicationSecretKey => "#{node[:play_app][:application_secret_key]}",
:applicationLanguage => "#{node[:play_app][:dbHostName]}",
.....
....
})
end 

You can add as many variables as you intend to have replaced with values at runtime in your configuration file (use databags for storing environment-specific values).

The next step is to create a logger file. We create a template file named logger.xml.erb at the same location as application.conf.erb, and add a template block which fills in the placeholders with the real values at runtime:

template "#{config_dir}/logger.xml" do
source "logger.xml.erb"
variables({
....
:maxHistory => "#{node[:play_app][:max_logging_history]}",
:playloggLevel => "#{node[:play_app][:play_log_level]}",
:applicationLogLevel => "#{node[:play_app][:app_log_level]}"
.....
})
end

Anything that you want configurable, like the logging level or the location of the log file, can go into the variables part; the recommended way to retrieve the values is a databag, since you will want different values for separate environments.

Finally, we create a service file to be kept under /etc/init.d on the Linux distribution, so that we need not start the application manually on machine reboots or application deployment. I am not going to include the details of the service script here; you can look at the code on GitHub for that. Let's talk about the code that creates this script and supplies the various options:

template "/etc/init.d/#{appName}" do
source "initd.erb"
owner "root"
group "root"
mode "0744"
variables({
:name => "#{appName}",
:path => "#{installation_dir}/#{appName}",
:pidFilePath => "#{node[:play_app][:pid_file_path]}",
:options => "-Dconfig.file=#{config_dir}/application.conf -Dpidfile.path=#{node[:play_app][:pid_file_path]} -Dlogger.file=#{config_dir}/logger.xml #{node[:play_app][:vm_options]}",
:command => "start"
})
end

Let's go through this code line by line. The first line says: create a file named play_app under /etc/init.d from the template initd.erb (kept under templates/default or templates/<your_linux_distro>). The next lines say root is the owner of this file and the permissions are 0744, which means only the owner of the file has execution rights on this script.
Now we come to the variables part; the role of each variable in the service is:

name -> name of the application to be started
path -> directory location where the start script could be found (location where we unzipped the dist file /usr/src/play_app)
pidFilePath -> the location of the pid file, this file contains the current process id of the play application
options -> options such as where the conf file is, the logging configuration, and any VM parameters (like maximum heap size); you can also specify the port using the -Dhttp.port option.
command -> the script to be launched by the service while starting the application, in our case it is the “start” file kept under /usr/src/play_app .

Finally, we enable this service by writing a service block in our recipe:

service "#{appName}" do
supports :stop => true, :start => true, :restart => true
action [ :enable, :restart ]
end

Once done, Chef restarts the service automatically to reflect the changes and updates the pid file with the current process id of this service.
Let's now briefly discuss how this could be run for the first deployment and thereafter. If you are using Amazon EC2 infrastructure, simply use the "knife ec2 server create" command with the "deploy-play" recipe in the run list and specify the environment with the -E parameter; this sets up the machine with the required technology stack and installs your application code there.

For subsequent deployments, you need not do more than run "sudo chef-client" from the command line. The chef-client run remembers which recipes were run on this node the last time and which environment the node belongs to; it synchronizes the cookbooks for any changes made and installs the latest artifacts from your repository.
Tip: If you see an empty run list when you execute "sudo chef-client", go to the /etc/chef folder and take a look at the first-boot.json file. It contains the run_list that was used to create this node; if it has all the right values, execute the following line:

sudo chef-client -j /etc/chef/first-boot.json -E <node-environment>

If it doesn't contain what you are looking for, just edit first-boot.json, add/remove recipes or roles in the run_list, and run the above command. This run updates the node on the Chef server with the latest run_list, and from then on you just need to run "sudo chef-client".

The code for the cookbook is available on GitHub at https://github.com/BluePi/PlayChef.git, and the cookbook can be downloaded from http://community.opscode.com/cookbooks/deploy-play for use in your project. In the next post on Chef we'll see how to automate the task of updating configuration files with each deployment.

Easy concurrent programming with Akka

Concurrent, parallel or distributed applications always take a lot of effort to create. While doing multithreading you must have noticed how many hurdles it involves: you have to implement the Runnable interface, synchronize methods, handle deadlocks and lots of other things. And then there is a toolkit like Akka, which gives you an easy and convenient way of doing the same. The Actor Model of Akka raises the abstraction level and provides a better platform to build correct, concurrent, and scalable applications. Now what is the actor model? We will come to that slowly, but let's first have a look at the hurdles of doing concurrent programming conventionally.
Let's take the example of banking: money is withdrawn from, deposited to and transferred between accounts. Below is how we would code that sequentially.

class Account {
  private var balance = 0

  def deposit(amount: Int): Unit = {
    if (amount > 0) balance = balance + amount
  }

  def withdraw(amount: Int): Int = {
    if (0 < amount && amount <= balance) {
      balance = balance - amount
      balance
    } else {
      throw new Error("insufficient funds")
    }
  }
}

Now suppose the balance in the account is 10000 and two people (two different threads) try to withdraw two different amounts, 5000 and 3000. How will the program respond? Each thread first reads the balance of the account. Since both try to withdraw at the same time, both read the balance as 10000, but after the withdrawals the balance written by one thread will be 5000 and by the other 7000; whichever thread finishes second determines the resulting balance. Clearly one of the withdrawals is silently lost, so either the account holder or the bank ends up with the wrong amount of money. How do we solve this?


The answer is that we use synchronization to perform the task. Below is the code with synchronization added:

class Account {
  private var balance = 0

  def deposit(amount: Int): Unit = this.synchronized {
    if (amount > 0) balance = balance + amount
  }

  def withdraw(amount: Int): Int = this.synchronized {
    if (0 < amount && amount <= balance) {
      balance = balance - amount
      balance
    } else throw new Error("Insufficient funds")
  }
}

Threads take the lock at the object level, so when we do this we won't hit the previous problem: while one thread is inside the synchronized withdraw method, the other cannot get access to it.

But what about the case where we try to transfer money? Below is the code for that; we will then see what sort of problem can arise even after this.

def transfer(from: Account, to: Account, amount: Int): Unit = {
  from.synchronized {
    to.synchronized {
      from.withdraw(amount)
      to.deposit(amount)
    }
  }
}

To transfer money we need to withdraw it from one account and deposit it into another. Since another thread might access either account between the withdraw and the deposit, both the 'from' and 'to' objects need to be locked: we first take the lock on 'from', then on 'to', do our withdraw and deposit, and the transfer is done. But if we look closely there is a potential deadlock. Suppose one thread tries to transfer an amount from accountA to accountB, and at the same time another thread transfers from accountB to accountA. The first thread takes the lock on accountA but cannot get the lock on accountB, because the second thread has already taken it; now both threads wait for each other to release their locks, and we end in deadlock. In Java, to mitigate such problems, we would use a thread pool via the Executor framework, which is based on the producer-consumer design pattern: application threads produce tasks and worker threads consume, i.e. execute, those tasks. But this suffers from the limitations of producer-consumer too: if the production rate is substantially higher than the consumption rate, you may run out of memory because of queued tasks (if your queue is unbounded). And it still goes through blocking synchronization, which invites deadlock, is bad for CPU utilization, and couples the sender and receiver in synchronous communication.

Now let's take a look at Akka. Akka's actors give you an asynchronous, non-blocking and highly performant event-driven programming model, so we can avoid deadlock, utilize the CPU better, and avoid coupling between sender and receiver. We will go through the code for the banking problem, but let's first go through the basics of actors, actor systems, message passing between sender and receiver, and other features of Akka. Akka provides scalable real-time transaction processing: a unified runtime and programming model for scaling up (concurrency), scaling out (remoting) and fault tolerance. The core of Akka, akka-actor, is very small and easily dropped into an existing project to gain asynchronicity and lockless concurrency without hassle.
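
For instance, with sbt the dependency can be added with a single line; this sketch assumes the Akka 2.1 line referenced by the documentation links below:

libraryDependencies += "com.typesafe.akka" %% "akka-actor" % "2.1.0"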

Actor classes are implemented by extending the Actor class and implementing the receive method. The receive method should define a series of case statements (it has the type PartialFunction[Any, Unit]) that declare which messages your actor can handle, using standard Scala pattern matching, along with the implementation of how the messages should be processed.
Here is an example:

import akka.actor.Actor
import akka.actor.Props
import akka.event.Logging

class MyActor extends Actor {
  val log = Logging(context.system, this)
  def receive = {
    case "test" ⇒ log.info("received test")
    case _      ⇒ log.info("received unknown message")
  }
}

To learn more about Akka, please see http://doc.akka.io/docs/akka/2.1.0/scala/actors.html.
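
A minimal usage sketch for the actor above (names are illustrative; Akka 2.1-era API):

import akka.actor.{ActorSystem, Props}

object Demo extends App {
  val system = ActorSystem("demo")
  val myActor = system.actorOf(Props[MyActor], "myActor")
  myActor ! "test" // logs "received test"
  myActor ! 42     // logs "received unknown message"
  system.shutdown()
}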

Actors are objects which encapsulate state and behavior; they communicate exclusively by exchanging messages which are placed into the recipient's mailbox. As in an economic organization, actors naturally form hierarchies: an actor which oversees a certain function in the program might want to split its task into smaller, more manageable pieces, and for this purpose it starts child actors which it supervises. For more information on actor systems, visit http://doc.akka.io/docs/akka/2.1.0/general/actor-systems.html

Actors communicate through messages; the only way to get at an actor's state or behaviour is via messages. Messages can be sent to known addresses via ActorRef. An actor knows its own address, which helps it send messages to other actors and tell them where to reply (it can likewise send other ActorRefs within a message). An actor can create a new actor within itself, but that doesn't mean it can call the new actor's methods directly; it has to go through message passing as well. While creating a new actor it gets its ActorRef, and through that it sends messages to get work done. Actors are like human brains, working independently; the only way they communicate is via messages, just as human brains cannot read other minds but can certainly ask what's going on in them. They need no global synchronization for their steps and can run concurrently. Each actor processes its messages sequentially, so it gets the benefits of synchronization while avoiding the disadvantages of blocking, because messages are simply enqueued.

Now let's solve the banking problem with Akka actors. Here we create an object called Amount containing the case classes used for sending messages; they carry the values needed for processing. In the example below, Deposit carries an amount which changes the state of the actor when it is processed.

object Amount {
  case class Deposit(amount: BigInt)
  case class Withdraw(amount: BigInt)
  case class Transfer(from: ActorRef, to: ActorRef, amount: BigInt)
}

Next we have the Account class, which is an actor (it extends Actor) and holds the main logic for depositing and withdrawing money.

class Account extends Actor {
  import Amount._
  var balance = BigInt(0) // the balance of this particular account

  def receive = {
    case Deposit(amount) =>    // when a Deposit message is received,
      balance += amount        // this portion of the code runs
      sender ! "Done"
    case Withdraw(amount) =>   // when a Withdraw message is received,
      if (amount <= balance) { // first check that the amount is withdrawable
        balance -= amount
        sender ! "Done"
      } else sender ! "Failed"
    case _ => sender ! "Failed"
  }
}

An actor has the reference of the sender from which it got the message, and can use that reference to send a message back; the sender actor, of course, needs a receive handler for the reply as well. Here we send a success message after depositing or withdrawing (sender ! "Done") and a failure message if the requested operation could not be performed (sender ! "Failed").

Now let's come to the money transfer part. We have created a class called TransferAmount for it; I have added inline comments so that each part is easy to follow.

class TransferAmount extends Actor { // creating TransferAmount as an actor
  import Amount._                    // import Amount so the message classes can be used easily

  // When a message is sent to an actor it first goes to the receive method,
  // which executes the matching case.
  def receive = {
    case Transfer(from, to, amount) =>
      from ! Withdraw(amount)
      context.become(waitForWithdraw(to, amount, from))
  }

  /****
   The block above runs for the Transfer message, which carries the amount to be
   transferred, the account it is withdrawn from and the account it is deposited
   to. from and to are ActorRefs, so we can message the two Account actors through
   them. from ! Withdraw(amount) asks the first Account actor to withdraw the
   amount; this actor then changes its state via context.become and waits for the
   withdrawal to complete.
  ****/

  def waitForWithdraw(to: ActorRef, amount: BigInt, from: ActorRef): Receive = {
    case "Done" =>
      to ! Deposit(amount)
      context.become(waitForDeposit(from))
    case "Failed" =>
      from ! "Failed"
      context.stop(self)
  }

  /****
   waitForWithdraw takes three parameters: the two ActorRefs between which the
   transfer happens, and the amount to be transferred. If the Account actor
   replies "Done", we message the other Account to deposit the amount and wait
   for the deposit operation to finish.
  ****/

  def waitForDeposit(from: ActorRef): Receive = {
    case "Done" =>
      from ! "Done"
      context.stop(self)
  }

  /****
   When the account replies "Done" for the deposit, waitForDeposit receives it
   and sends a "Done" message back.
  ****/
}

We can execute or test the above operations through the Main class below:

class Main extends Actor {
  import Amount._
  val accountA = context.actorOf(Props[Account], "accountA")
  val accountB = context.actorOf(Props[Account], "accountB")

  /***
   Here we have created the two accounts between which the transfer will happen.
  ***/

  accountA ! Deposit(1000) // deposit an amount of 1000 in the first account

  def receive = {
    case "Done" => transfer(500)
  }

  // When this actor receives a "Done" message it calls the transfer method.

  def transfer(amount: BigInt): Unit = {
    // create the TransferAmount actor; transfer is the ActorRef
    // through which messages are sent to it
    val transfer = context.actorOf(Props[TransferAmount], "transfer")
    // send the amount, plus the accounts the amount moves between, as arguments
    transfer ! Transfer(accountA, accountB, amount)
    context.become {
      case "Done" =>
        sender ! "success"
        context.stop(self)
    }
  }
}

Unlike with multithreading, it is very easy to write unit test cases for actors (for solutions and opinions about unit testing multithreaded code, have a look at http://stackoverflow.com/questions/12159/how-should-i-unit-test-threaded-code). Below is a simple test:

implicit val system = ActorSystem("TestSys")          // create an ActorSystem and keep it implicit
val transAmnt = system.actorOf(Props[TransferAmount]) // create the actor under test
val p = TestProbe()          // TestProbe is a class from the Akka TestKit
p.send(transAmnt, "Done")    // send the message to TransferAmount to do the operation
p.expectMsg("success")       // expect the "success" message sent back by the actor
system.shutdown()            // finally, shut down the actor system

There are lots of features in Akka which we can use to make our applications concurrent and fault tolerant; I have covered only the very basics here. The unit test above is a simple example; with the Akka TestKit we can test actors much more thoroughly.


Zencoder Cloud Encoding

1. Internet has evolved

The Internet has changed! Look at the first website ever created and then look at YouTube. That's some difference over the last 13 years, no? The difference is in content and delivery: from a bunch of links, sites nowadays have images, forms and, yes, videos!
So that begs the question: how does a site like YouTube work? How do videos get stored and then delivered to multiple devices?

2. Videos?

Users view videos online on different devices. Mobile device usage is increasing, and the device market is growing more fragmented and complex. For mobile devices, high-resolution, heavy video files are inefficient and unnecessary; the quality of the videos varies from HD video for desktop to lightweight video for mobile devices.

The source video file can be of various types (.mov, .flv, .mp4, .ogg, .wmv), and not all types are supported by all devices. This fragmentation means you need to decide which devices you want to cater to, and then choose the output video formats. There are desktop operating systems like Windows, Linux and Mac, and then there are mobile operating systems like BlackBerry, Windows Phone, Symbian, Android and of course iOS.


But wait, that's not all. You not only need different output formats and video player support, you also need images!
Every video needs thumbnails which can be used as banner images, typically one of the frames of the video.

The videos may also need to be branded by adding the company logo as a watermark on each video, which means the watermark is either added while recording the original video or added when the video enters your content repository.

3. Enter Encoding!

Video encoding is your magic wand and does all of the above and more. Encoding is the process of converting digital video files from one format to another. All of the videos we watch on our computers, tablets and mobile phones must go through an encoding process to convert the original "source" video to be viewable on these devices. Why? Because different devices and browsers support different video formats. This process can also be called "transcoding" or "video conversion". Here, we will discuss video encoding through Zencoder. Let me first give a brief idea of Zencoder's features.

3.1- Input File Support

Zencoder's platform successfully encodes 99.9% of input formats:

  • Support nearly every format and codec (See list)
  • Auto rotation for rotated content (including iPhone video)
  • Manual rotation to either forced horizontal or forced vertical
  • Correction for A/V sync issues
  • High quality deinterlacing
  • Improved support for format edge cases (like Quicktime edit Lists)

3.2- Output File Support

It creates high quality video outputs for every major device.

3.2.1- Formats

MP4, 3GP, OGG, WMV, FLV, WEBM, TS, MP3, Apple HTTP Live Streaming (HLS), Microsoft Smooth Streaming (MSS). All Formats

3.2.2- Codecs

H.264 (Baseline, Main, High), AAC (AAC-LC, HE-AAC, HE-AAC v2), MP3, MPEG4, Theora, VP6, VP8, WMV, Vorbis, WMA. All Codecs

3.2.3- Additional Output Features

Device Profiles

Take advantage of presets that target a specific device or a set of devices

Multiple Outputs

Create multiple versions of a video from a single input file

Captions

Provide closed captioning for the hearing impaired audience

Transmuxing

Change the container format of H.264 files while keeping underlying streams intact (used for HTTP Live Streaming).

Advanced Thumbnail Options

  • Generate multiple thumbnails per output
  • Choose thumbnail count, interval, or specific times to get the exact frame distribution
  • Crop, size and pad thumbnail images

4- Technical Implementation

In this section we will learn how to encode a video into different output formats, generate thumbnails, and apply watermarks to the encoded videos.

4.1- Create a Job

To create any Zencoder job, you need an API key and an input, and you send a POST request to https://app.zencoder.com/api/v2/jobs. The Zencoder-Api-Key is sent as a header with every request, and the request can carry optional output settings as well (which we will discuss later). Zencoder currently supports downloading files using HTTP/HTTPS, S3, Cloud Files, FTP/FTPS, SFTP, and Aspera.

4.1.1- Input Request Body

{
  "input": "s3://bluepi-original.s3.amazonaws.com/sample.mov"
}

You can test this with cURL.

curl --header "Zencoder-Api-Key: 93h630j1dsyshjef620qlkavnmzui3" \
     --data '{"input":"s3://bluepi-original.s3.amazonaws.com/sample.mov"}' \
     https://app.zencoder.com/api/v2/jobs

Note for Windows users: due to a limitation in cURL on Windows, you'll need to escape double quotes (\") and wrap the --data content in double quotes instead of single quotes.

This request will create an encoding job for the account and attempt to download and transcode the file at s3://bluepi-original.s3.amazonaws.com/sample.mov to the default output destination.

Note: you need to set your S3 credentials in the Zencoder account so that it can download source videos and upload output files such as thumbnails and videos.

4.1.2- Response

When you create a new encoding job through the API, the Zencoder server immediately responds with details about the job and the output files being created. You should store the job and output IDs to track them through the encoding process.

The data will be returned in the JSON format by default. For an XML response, pass format=xml in the querystring, like https://app.zencoder.com/api/v2/jobs?format=xml.

The above encoding job example would return the following, with a 201 Created status code.

{
  "id": "1234",
  "outputs": [{
    "id": "4321"
  }]
}

Note: This does not mean that the job will succeed, only that the API request was valid. The job may still fail because the input file does not exist, the output location is invalid, the file itself is not a valid video or audio file, or other reasons.

4.2- Zencoder Request Body Structure

In this section, we will discuss the structure of an encoding job request.

4.2.1- Input

This is the source video we want to encode; we saw a simple example of it above.

4.2.2- Notification

When an encoding job completes, you can get that information in several ways:

  • You can check the Zencoder Dashboard for the job status.
  • You can check the job status via a Job show API request (see the sketch after this list).
  • Zencoder can send an HTTP POST request to your application with the details.
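
For the second option, here is a hedged sketch of a Job show request, reusing the job ID ("1234") from the create response earlier and the sample API key from above:

curl --header "Zencoder-Api-Key: 93h630j1dsyshjef620qlkavnmzui3" \
     https://app.zencoder.com/api/v2/jobs/1234.json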

A notification can have the following parameters:

Setting Description
notifications Be notified when a job or output is complete.
url Where you want Zencoder to send the POST request in your application.
format The format and content type you want for the notification.
headers Any headers your application needs, added under notifications in the request body.
event The event that triggers a notification. Used for Instant Play.

"notifications": [{
"url": "https://bluepi.in/encode/zencn",
"format": "json",
"headers": {
"videoId": 3
}
}]

4.2.3- Output

Here you provide the settings for the encoding output. We will discuss how to define requests that generate thumbnails and encoded videos.

4.2.3.1- Thumbnail Output

Zencoder traverses the input video and generates thumbnail images as per the configuration provided in the output section: thumbnails can be taken at a fixed time interval, or you can specify the exact times at which to capture a screenshot of the video. Let's first discuss a few parameters for a thumbnail request.

Setting Default Description
thumbnails none Capture thumbnails for a given video.
label none A label to identify each set of thumbnail groups.
format png The format of the thumbnail image.
number none A number of thumbnails, evenly-spaced.
start_at_first_frame false Start generating the thumbnails starting at the first frame.
interval none Take thumbnails at an even interval, in seconds.
interval_in_frames none Take thumbnails at an even interval, in frames.
times none An array of times, in seconds, at which to grab a thumbnail.
aspect_mode preserve How to handle a thumbnail width/height that differs from the aspect ratio of the input file.
size none Thumbnail resolution as WxH.
width none The maximum width of the thumbnail (in pixels).
height none The maximum height of the thumbnail (in pixels).
base_url none A base S3, Cloud Files, GCS, FTP, FTPS, or SFTP directory URL where we’ll place the thumbnails, without a filename.
prefix frame Prefix for thumbnail filenames.
filename frame Interpolated thumbnail filename.
public false Make the output publicly readable on S3.

Let's create a simple output configuration where we generate 2 different sizes of thumbnails: one at the actual size of the video, and the other at a specific size.


{
  "input": "s3://bluepi-original.s3.amazonaws.com/sample.mov",
  "notifications": [{
    "url": "https://bluepi.in/encode/zencn",
    "format": "json",
    "headers": {
      "videoId": 3
    }
  }],
  "outputs": [{
    "thumbnails": [{
      "base_url": "s3://bluepi-images.s3.amazonaws.com/videos/",
      "label": "regular",
      "number": 10,
      "prefix": "frame",
      "public": "true"
    }, {
      "base_url": "s3://bluepi-images.s3.amazonaws.com/videos/",
      "label": "small",
      "number": 10,
      "size": "300x200",
      "prefix": "thumb",
      "public": "true"
    }]
  }]
}

4.2.3.2- Video Encoding Output

In this section, we define the output configuration that encodes the input video. You can define multiple output files and Zencoder will generate video files accordingly; we generally need multiple files for different devices, varying in aspect ratio, quality and bandwidth. Here I will define the configuration as per my requirements, but you can change it to suit yours. Let's first discuss the different parameters we can tweak to generate an output file.

Setting Default Description
type standard The type of file to output.
label none An optional label for this output.
url none A S3, Cloud Files, GCS, FTP, FTPS, SFTP, Aspera, HTTP, or RTMP URL where we’ll put the transcoded file.
base_url none A base S3, Cloud Files, GCS, FTP, FTPS, SFTP, or Aspera directory URL where we’ll put the transcoded file, without a filename.
filename none The filename of a finished file.
format The output format to use
size none The resolution of the output video (WxH, in pixels).
width none The maximum width of the output video (in pixels).
height none The maximum height of the output video (in pixels).
quality 3 Autoselect the best video bitrate to match a target visual quality.
video_bitrate none A target video bitrate in kbps. Not necessary if you select a quality setting, unless you want to target a specific bitrate.
audio_quality 3 Autoselect the best audio bitrate to match a target sound quality.
audio_bitrate none A target audio bitrate in kbps. Not necessary if you select an audio_quality setting, unless you want to target a specific bitrate.
audio_sample_rate none The audio sample rate, in Hz.
max_audio_sample_rate none The max audio sample rate, in Hz.

A sample output configuration follows:


{
  "label": "high",
  "url": "s3://bluepi-encoded.s3.amazonaws.com/sample-high.mp4",
  "audio_bitrate": 160,
  "audio_sample_rate": 48000,
  "height": 720,
  "width": 1280,
  "max_frame_rate": 30,
  "video_bitrate": 5000,
  "h264_level": "3.1",
  "format": "mp4",
  "decoder_bitrate_cap": 3000,
  "decoder_buffer_size": 8000,
  "h264_reference_frames": 1,
  "h264_profile": "main",
  "forced_keyframe_rate": "0.1",
  "decimate": 2,
  "public": false
}

While encoding, you can add your branding to the output videos: just provide a simple configuration as part of every output item and Zencoder will place the watermark on the encoded video automatically. The watermark parameters are as follows:

Setting Default Description
watermarks none Add one or more watermarks to an output video.
url none The URL of a remote image file to use as a watermark.
x -10 Where to place a watermark, on the x axis.
y -10 Where to place a watermark, on the y axis.
width Scale to height, or original image width. The scaled width of a watermark.
height Scale to width, or original image height. The scaled height of a watermark.
origin content Which part of the output to base the watermark position on.
opacity 1.0 Make the watermark transparent.

Here is a sample:

"watermarks": [{
"x": "0%",
"y": "10%",
"url": "s3://bluepi-images.s3.amazonaws.com/videos/logo-watermark-desktop.png"
}],

Here is a full request which creates multiple videos with their respective watermarks:


{
"input": "s3://bluepi-original.s3.amazonaws.com/sample.mov",
"notifications": [{
"url": "https://bluepi.in/encode/zencn",
"format": "json",
"headers": {
"videoId": 3
}
}],
"outputs": [{
"access_control": [{
"permission": "FULL_CONTROL",
"grantee": "admin@bluepi.in"
}],
"label": "high",
"url": "s3://bluepi-encoded.s3.amazonaws.com/sample-high.mp4",
"audio_bitrate": 160,
"audio_sample_rate": 48000,
"height": 720,
"width": 1280,
"max_frame_rate": 30,
"video_bitrate": 5000,
"h264_level": "3.1",
"format": "mp4",
"decoder_bitrate_cap": 3000,
"decoder_buffer_size": 8000,
"h264_reference_frames": 1,
"h264_profile": "main",
"forced_keyframe_rate": "0.1",
"decimate": 2,
"watermarks": [{
"x": "95%",
"y": "-92%",
"url": "s3://bluepi-images.s3.amazonaws.com/videos/logo-watermark-desktop.png"
}],
"public": false
}, {
"access_control": [{
"permission": "FULL_CONTROL",
"grantee": "admin@bluepi.in"
}],
"label": "main",
"url": "s3://bluepi-encoded.s3.amazonaws.com/sample-main.mp4",
"audio_bitrate": 128,
"audio_sample_rate": 44100,
"height": 480,
"width": 640,
"max_frame_rate": 30,
"video_bitrate": 2000,
"h264_level": "3",
"format": "mp4",
"decoder_bitrate_cap": 2250,
"decoder_buffer_size": 6000,
"h264_reference_frames": 1,
"h264_profile": "main",
"forced_keyframe_rate": "0.1",
"decimate": 2,
"watermarks": [{
"x": "95%",
"y": "-92%",
"url": "s3://bluepi-images.s3.amazonaws.com/videos/logo-watermark-desktop.png"
}],
"public": false
}, {
"access_control": [{
"permission": "FULL_CONTROL",
"grantee": "admin@bluepi.in"
}],
"label": "3gp",
"url": "s3://bluepi-encoded.s3.amazonaws.com/sample-3gp.mp4",
"audio_bitrate": 24,
"audio_sample_rate": 16000,
"height": 240,
"width": 320,
"max_frame_rate": 15,
"video_bitrate": 192,
"h264_level": null,
"format": "mp4",
"decoder_bitrate_cap": 300,
"decoder_buffer_size": 1200,
"h264_reference_frames": 1,
"h264_profile": "main",
"forced_keyframe_rate": "0.1",
"decimate": 2,
"watermarks": [{
"x": "93%",
"y": "-90%",
"url": "s3://bluepi-images.s3.amazonaws.com/videos/logo-watermark-mobile.png"
}],
"public": false
}]
}
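
Submitting this job is a single authenticated POST to Zencoder's job-creation endpoint. As a minimal sketch, here is what that could look like in C#; the API key placeholder and the local file holding the JSON above are assumptions for illustration:

using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class ZencoderJobSubmitter
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Zencoder authenticates every request via an API key header.
        client.DefaultRequestHeaders.Add("Zencoder-Api-Key", "YOUR_API_KEY");

        // The full JSON request shown above, saved to a local file (an assumption).
        var json = File.ReadAllText("encode-job.json");
        var content = new StringContent(json, Encoding.UTF8, "application/json");

        // POSTing to the v2 jobs endpoint creates the encoding job.
        var response = await client.PostAsync("https://app.zencoder.com/api/v2/jobs", content);
        Console.WriteLine((int)response.StatusCode + ": " + await response.Content.ReadAsStringAsync());
    }
}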

You can find Zencoder's detailed configuration options in its Encoding Settings documentation.

5- Conclusion

In this blog we saw how important video content has become, and how the demand for different formats and qualities of video across devices keeps growing. Zencoder is one of the fastest video transcoding APIs on the market.

In the discussion above, we learned how to prepare the simple requests needed to generate different outputs, whether thumbnails or branded videos.

by Barkat Dhillon

Class Inheritance Sucks, Alternative Exists!

I’ve been professionally developing OO applications for a little over a decade now, and I’m yet to come across a seasoned developer who loves inheritance. Every time the topic comes up, almost all my programmer buddies have a horror tale to share, or are so overwhelmed by the sad memories it brings back that all they can muster is a wry smile before changing the subject to something more fruitful and less controversial. So, if inheritance is so bad, why don’t programmers just banish it once and for all? Why don’t we stop using it completely? Is there an alternative that can save us from the dreadful days of debugging through an endless sea of hierarchical classes in a medium-to-large scale project?

Well, lo and behold! There is indeed an alternative. We’ll talk about it in a bit.

If It Is So Bad, Why Do We Still Use It?

Let’s dig a little deeper and look for an answer as to why programmers take to inheritance so naturally, and why all of us OO programmers stuck with inheritance through a major part of our learning years. Let’s list the reasons, no matter how wacky they may sound!

  1. Real World vs OO World: Inheritance is one of the basic tenets of the real world, and any OO language would be incomplete if it did not provide a way to represent the real world’s hierarchical classifications in code. This is a good reason, and an argument in favor of inheritance that you can never win. However, one thing I would like to emphasize here is that objects in the real world can often be classified in multiple ways depending on the context. For example, dogs can be classified as canine creatures in one context, domestic animals in another, and living beings in yet another (see the sketch after this list).
  2. Code Reusability: Inheritance, in the real world as well as in Object Oriented principles, implies inheriting properties and behavioral characteristics from the parent. In its implementation, however, it inherits code! This undeniable fact is often misinterpreted by young and enthusiastic programmers as a promotion of code reusability; in other words, inheritance is good because it helps minimize code duplication. The sad part is that this misinterpretation is rarely corrected, and most of us keep repeating the same mistake over and over again: using code reusability as the motivation behind inheritance.
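
To make the classification point from item 1 concrete, here is a minimal sketch; the interface names are mine, purely for illustration:

// -----------------------------------------------

interface ICanine { }
interface IDomesticAnimal { }
interface ILivingBeing { }

// The same Dog participates in several context-dependent
// classifications at once, with no single hierarchy forced on it.
class Dog : ICanine, IDomesticAnimal, ILivingBeing { }

// -----------------------------------------------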

Code Examples Speak Louder Than Words

I can go on babbling about the weaknesses of inheritance, but nothing drives the point home like a good code example. Let’s start with a simple parent class Mammal and 2 child classes that inherit from it: Human and Dog.


// -----------------------------------------------

interface IAmMammal
{
    void Walk();
    void Breathe();
}

// -----------------------------------------------

abstract class Mammal : IAmMammal
{
    public abstract void Walk();
    
    public void Breathe()
    {
        Console.WriteLine("Breathing...");
    }
}

// -----------------------------------------------

class Human : Mammal 
{
    public override void Walk() 
    {
        Console.WriteLine("Walking on 2 legs...");
    }
}

// -----------------------------------------------

class Dog : Mammal
{
    public override void Walk() 
    {
        Console.WriteLine("Walking on 4 legs...");
    }
}

// -----------------------------------------------

Now, what’s wrong with the above implementation, one may ask. Well, nothing much, I say! However, once you start scaling this pattern to larger problems, it can easily become a nightmare. Let’s see how. Say we would like to extend the Human class further, to represent Man and Woman.


// -----------------------------------------------

class Man : Human
{
}

class Woman : Human
{
}

// -----------------------------------------------

I’m pretty sure we’re not sweating yet. No harm done, so let’s keep going. As we all know, we humans love to talk! Not so sure about dogs though, so we’d like to add a Talk() method only to the Human class and leave the Dog class alone for now.

Hierarchical Implementation

// -----------------------------------------------

interface IAmMammal
{
    void Walk();
    void Breathe();
}

interface ICanTalk
{
    void Talk();
}

// -----------------------------------------------

abstract class Mammal : IAmMammal
{
    public abstract void Walk();
    
    public void Breathe()
    {
        Console.WriteLine("Breathing...");
    }
}

// -----------------------------------------------

class Dog : Mammal
{
    public override void Walk() 
    {
        Console.WriteLine("Walking on 4 legs...");
    }
}

// -----------------------------------------------

class Human : Mammal, ICanTalk
{
    public override void Walk() 
    {
        Console.WriteLine("Walking on 2 legs...");
    }

    // virtual, so that Man and Woman can override it below
    public virtual void Talk()
    {
        Console.WriteLine("blah-blah");
    }
}

// -----------------------------------------------

class Man : Human
{
    public override void Talk()
    {
        Console.WriteLine("Howdy!");
    }
}

class Woman : Human
{
    public override void Talk()
    {
        Console.WriteLine("Hello!");
    }
}

// -----------------------------------------------

As we can see, we’ve barely started and we’re already dealing with a mess here: a hierarchy of five classes, two interfaces, and method implementations at different levels. We already know deep inside our hearts that this is not going to end well. This pattern is going to be hard to manage, hard to debug and hard to test. Not sure about you, but I’m sure feeling the heat!
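
To see why, consider a hypothetical new requirement that is not part of the original example: a Parrot that can talk. In this hierarchy there is no good home for the talking behavior, so it has to be re-implemented:

// -----------------------------------------------

class Parrot : Mammal, ICanTalk
{
    public override void Walk()
    {
        Console.WriteLine("Hopping on 2 legs...");
    }

    // Talk() must be duplicated here, because the only other
    // implementation lives inside Human, which Parrot cannot
    // (and should not) inherit from.
    public void Talk()
    {
        Console.WriteLine("Polly wants a cracker!");
    }
}

// -----------------------------------------------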

Alternate Implementation

So, let’s see if we could’ve done better. Here’s an alternative implementation involving no hierarchies whatsoever.

I Can Walk


// -----------------------------------------------

interface ICanWalk
{
    void Walk();
}

class WalkOn2Legs : ICanWalk
{
    public void Walk()
    {
        Console.WriteLine("Walking on 2 legs...");
    }
}

class WalkOn4Legs : ICanWalk
{
    public void Walk()
    {
        Console.WriteLine("Walking on 4 legs...");
    }
}

// -----------------------------------------------

I Can Breathe


// -----------------------------------------------

interface ICanBreathe
{
    void Breathe();
}

class BreatheUsingLungs : ICanBreathe
{
    public void Breathe()
    {
        Console.WriteLine("Breathing using my lungs...");
    }
}

// -----------------------------------------------

I Can Talk


// -----------------------------------------------

interface ICanTalk
{
    void Talk();
}

class ManTalk : ICanTalk
{
    public void Talk()
    {
        Console.WriteLine("'Sup Bro!");
    }
}

class WomanTalk : ICanTalk
{
    public void Talk()
    {
        Console.WriteLine("Hello!");
    }
}

// -----------------------------------------------

I Am Alive


// -----------------------------------------------

interface IAmMammal : ICanWalk, ICanBreathe
{
}

class Mammal : IAmMammal
{
    ICanWalk _walkingStyle;
    ICanBreathe _breathingStyle;

    public Mammal(ICanWalk walkingStyle, 
                  ICanBreathe breathingStyle)
    {
        _walkingStyle = walkingStyle;
        _breathingStyle = breathingStyle;
    }

    public void Breathe()
    {
        _breathingStyle.Breathe();
    }

    public void Walk()
    {
        _walkingStyle.Walk();
    }
}

// -----------------------------------------------

I Am Human


// -----------------------------------------------

interface IAmHuman : IAmMammal, ICanTalk
{
}

class Human : IAmHuman
{
    IAmMammal _mammal;
    ICanTalk _talkingStyle;

    public Human(IAmMammal mammal, ICanTalk talkingStyle)
    {
        _mammal = mammal;
        _talkingStyle = talkingStyle;
    }

    public void Talk()
    {
        _talkingStyle.Talk();
    }

    public void Walk()
    {
        _mammal.Walk();
    }

    public void Breathe()
    {
        _mammal.Breathe();
    }
}

// -----------------------------------------------

I Am Dog


// -----------------------------------------------

interface IAmDog : IAmMammal
{
}

// -----------------------------------------------

Now let’s construct the required objects using the new implementation.


// -----------------------------------------------

var dog = new Mammal(new WalkOn4Legs(),
                     new BreatheUsingLungs());

var human = new Mammal(new WalkOn2Legs(),
                       new BreatheUsingLungs());

var man = new Human(human, new ManTalk());
var woman = new Human(human, new WomanTalk());

// -----------------------------------------------

What the F***?

So, now you might be asking one or all of the following questions:

  1. What the hell just happened?
  2. Did he just make a simple hierarchical implementation unnecessarily complicated?
  3. What do I gain out of this zero-hierarchy mess of classes and interfaces?

Let Me Explain

The alternate implementation above uses what we call the strategy pattern, whereby we inject a particular strategy, or behavior, while constructing an object, and that behavior in turn defines how the object behaves. For example, we know that a dog is a mammal that walks on 4 legs and uses its lungs to breathe. Similarly, a human is a mammal that walks on 2 legs and uses its lungs to breathe. This is exactly what the following 2 lines represent:


// -----------------------------------------------

var dog = new Mammal(new WalkOn4Legs(),
                     new BreatheUsingLungs());

var human = new Mammal(new WalkOn2Legs(),
                       new BreatheUsingLungs());

// -----------------------------------------------

Along similar lines, we know that both men and women are 2-legged mammals with different talking styles. Here’s how we construct the appropriate objects:


// -----------------------------------------------

var man = new Human(human, new ManTalk());
var woman = new Human(human, new WomanTalk());

// -----------------------------------------------

But Why?

The beauty of the alternate solution lies in the fact that it keeps all the individual behaviors (like walking, talking, breathing) totally de-coupled from each other as well as from the classes that use them (like Mammal, Human). This allows all these individual classes to be fully functional standalone components that can be tested, used (or re-used), and extended independently. Personally, I love the fact that I can completely unit-test all those components with very simple unit-tests using a good mocking framework. I’ll talk about that in a subsequent blog post.
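
As a small taste of what that post might cover, here is a minimal sketch of such a test. Moq and NUnit are my assumptions here; any decent mocking framework and test runner would do:

// -----------------------------------------------

using Moq;
using NUnit.Framework;

[TestFixture]
public class MammalTests
{
    [Test]
    public void Walk_DelegatesToInjectedWalkingStyle()
    {
        // Arrange: mock both behaviors; no concrete walking or
        // breathing implementation is needed to test Mammal itself.
        var walkingStyle = new Mock<ICanWalk>();
        var breathingStyle = new Mock<ICanBreathe>();
        var mammal = new Mammal(walkingStyle.Object, breathingStyle.Object);

        // Act
        mammal.Walk();

        // Assert: Mammal did nothing but delegate.
        walkingStyle.Verify(w => w.Walk(), Times.Once());
        breathingStyle.Verify(b => b.Breathe(), Times.Never());
    }
}

// -----------------------------------------------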

Comparing Various Web Video Players

Web sites have grown from a few static html pages with embedded images into interactive live portals. Along with this transition, videos have become an integral part of websites, as evidenced by the success and popularity of video sharing sites like YouTube, Dailymotion, Vimeo and others. As more and more such sites compete for users, embedded videos are becoming ever more popular. One simple way to include videos on your site is to embed YouTube video links. However, if you are more adventurous and need more control, you may want to host the videos on your site itself. Having built a complete video streaming platform, we are going to post a series of blogs on the challenges of storing, streaming and playing videos. In this post I compare various video players for their differences, strengths and weaknesses.

I have picked a few of the popular web video players of today and will evaluate them on some basic parameters.

VideoJS: http://www.videojs.com/

The best part of using it is that VideoJS is Free!

Setup
Setting up VideoJS is fairly simple and, as their site states, a 5-minute task. Download the js (video.js) and css files and include them in the <head> tag of your html, or reference them directly from the CDN (Content Delivery Network).

    <link href="http://vjs.zencdn.net/4.2/video-js.css" rel="stylesheet">
    <script src="http://vjs.zencdn.net/4.2/video.js"></script>

VideoJS uses the HTML5 video tag to embed a video and provides some attributes which can be added in video tag.

    <video id="example_video_1" class="video-js vjs-default-skin" controls preload="auto"
           width="640" height="264" poster="http://video-js.zencoder.com/oceans-clip.png"
           data-setup='{"example_option":true}'>
        <source src="http://video-js.zencoder.com/oceans-clip.mp4" type='video/mp4' />
        <source src="http://video-js.zencoder.com/oceans-clip.webm" type='video/webm' />
        <source src="http://video-js.zencoder.com/oceans-clip.ogv" type='video/ogg' />
    </video>

Skinning Model
The video skin is the look and feel of the video player. VideoJS provides a default skin, and custom skins can be created using CSS.

Playlist
VideoJS does not provide an in-built feature to create playlists, but there are many independent plugins available that can be used with VideoJS to create playlists, like https://github.com/cfx/videojs-playlist and https://github.com/tim-peterson/videojs-playlist.

Playback
Playback technology refers to the technology/plugin the browser uses to play the video, which can be HTML5, Flash, or some other technology.

 <video data-setup='{"techOrder": ["html5", "flash", "other supported tech"]}' ...

As per the above tag, the browser will try to use html5 first; if html5 is not available, the video will fall back to flash and then to the other supported technologies.

Analytics
VideoJS does not provide an in-built analytics tool or integration with Google Analytics, but this can be achieved with your own JS hooked into the player's events.

Subtitles/Captions
VideoJS supports subtitles in the WebVTT format. Adding a .vtt file in a track tag attaches the subtitle text file.

 <video id="example_video_1" class="video-js vjs-default-skin" ...>
     <track kind="captions"
            src="http://example.com/path/to/captions.vtt"
            srclang="en"
            label="English"
            default>
 </video>

Support
Since VideoJS is open source and free, it provides support only through forums.

Sublime Video: http://sublimevideo.net/

Sublime is probably the best looking player of the lot. It recently became free after becoming part of Dailymotion.

Setup
To use Sublime Video, one needs to register at the Sublime Video site. Once registered, Sublime provides a unique code for accessing the Sublime js, something like:
 

 <script type="text/javascript" src="//cdn.sublimevideo.net/js/xxxxxx.js"></script>

Sublime too uses the video tag to embed a video in the page:
 

 <video id="sublimeVideoId"
        poster="https://cdn.sublimevideo.net/vpa/ms_800.jpg"
        width="640"
        height="360"
        title="Midnight Sun"
        data-uid="a240e92d"
        preload="none">
     <source src="https://cdn.sublimevideo.net/vpa/ms_360p.mp4" />
     <source src="https://cdn.sublimevideo.net/vpa/ms_360p.webm" />
 </video>

Subtitles
Sublime supports both the WebVTT (.vtt) and SubRip (.srt) file formats for subtitles:
   

 <video id="sublimeVideoId" ...>
     <track src='/subs/video-en.srt' srclang='en'>
     <track src='/subs/video-fr.vtt' srclang='fr'>
 </video>
                        

SD/HD Switching
Sublime provides an in-built feature to switch the video quality between SD and HD.

Keyboard Control
Sublime gives in-built keyboard controls for the player to adjust the volume, play/pause the video etc.

Custom Logo
The user can display a custom logo on the video player instead of the Sublime logo, but this feature is only available in paid versions. Adding the data-logo-link-url attribute in the video tag lets the user give the url of the logo.

Analytics
Sublime provides in-built integration with Google Analytics. Videos can then be categorised by various events: start/end of the video, sharing of the video, etc. In order to track a video, a unique id should be given to each video using the data-uid attribute.

Social Sharing
Sublime videos can be shared across different social networking sites using an in-built feature. The data-sharing-url attribute takes the url of the video (to access the video independently).

Cue Zones
An in-built feature of Sublime lets the user divide a video into multiple cue zones, meaning the user can perform certain actions when playback enters/exits a cue zone. It is especially useful for inserting advertisements, quizzes, etc. in the middle of a video.

Playlist
In Sublime, a playlist can be created by accessing the SublimeVideo object in js and adding videos using the SublimeVideo.playlists() function.

Streaming Protocols
SublimeVideo only supports videos delivered through HTTP.

WordPress
It provides a plugin to integrate SublimeVideo with a WordPress site.

Support
Sublime provides product support through its own forums and support group.  

JW Player: http://www.jwplayer.com/

Setup
Registering at the JW Player site generates a token, and using that token the JW Player js can be included in the html head tag.
 

 <script src="http://jwpsrv.com/library/YOUR_JW_PLAYER_ACCOUNT_TOKEN.js"></script>
                        

Unlike VideoJS and SublimeVideo, which need the html5 video tag to embed the player, JW Player can be embedded in a plain div using the jwplayer js object.

 <div id="myElement">Loading the player...</div>

 <script type="text/javascript">
     jwplayer("myElement").setup({
         file: "/uploads/myVideo.mp4",
         image: "/uploads/myPoster.jpg"
     });
 </script>

Skinning Model
JW Player provides 8 different flavors of skins, but they come only with the licensed version. Users can also create custom skins.

Custom Logo

Licensed versions of JW Player provide the feature to display a custom logo on the video instead of the JW Player logo.
Playlists

An in-built playlist feature lets the user load multiple videos in a single player instance. Playlists can contain one or more items, and each item can have its own title and subtitle files attached. Apart from creating a playlist directly in the jwplayer instance, one can also be loaded from an RSS feed.
Subtitles

JW Player supports the WebVTT, SRT and DFXP file formats for subtitles. It also provides inline styling of the captions to change the fonts, colors etc.
Analytics

Along with Google Analytics integration, JW Player also provides its own analytics service. All the data is available on the Analytics tab of the user’s JW Player account. It is available in all versions, including the free one.

Google Analytics can be included while setting up the player, as below:

 jwplayer("myElement").setup({
     file: "/uploads/example.mp4",
     image: "/uploads/example.jpg",
     ga: {
         idstring: "title",
         trackingobject: "pageTracker"
     }
 });

Streaming Protocols
JW Player supports streaming through both HTTP and RTMP, which essentially means it provides a mechanism to use either html5 or flash as the primary option and fall back to the other if the primary is not supported by the browser.

Social Sharing
JW Player provides an option to share videos on different social networking sites like Facebook, Twitter etc., but this feature is available only in paid editions.

Advertising
Advertisements are only supported in the Enterprise edition. It supports VAST/VPAID and Google IMA.

WordPress
JW Player provides a WordPress plugin to embed videos through WordPress directly.

Support
JW Player provides product support through forums and a support group, though the support group is only available for licensed versions.

Flowplayer: http://www.flowplayer.org/

Setup
To start with, Flowplayer provides a js and a css file, which can be included in the html head. Since Flowplayer uses jQuery for its javascript API, jQuery should also be included.

 <link rel="stylesheet" href="http://releases.flowplayer.org/5.4.6/skin/minimalist.css">
 <script src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"></script>
 <script src="http://releases.flowplayer.org/5.4.6/flowplayer.min.js"></script>

Flowplayer uses the html5 video tag to display the player:

 <video>
     <source type="video/webm" src="http://mydomain.com/path/to/intro.webm">
     <source type="video/mp4" src="http://mydomain.com/path/to/intro.mp4">
 </video>

Flowplayer provides a global configuration js object, flowplayer.conf, to set the path of the Flowplayer libraries, the url of a custom logo, etc.
Skinning Model
Flowplayer provides a good range of skins for the player. It ships three stylesheets, namely minimalist, functional and playful. Along with that, the user can override the default skins using custom css.

Custom Logo
Flowplayer supports a custom logo, which is available only in the commercial version.

Analytics
Flowplayer provides in-built integration with Google Analytics. A unique analytics id should be associated with each video to track and view usage data.

Playlist
Flowplayer has in-built support for creating playlists. If analytics is enabled, each video in a playlist is tracked separately.

                             
 <a href="http://mydomain.com/video1.mp4"></a>
 <a href="http://mydomain.com/video2.mp4"></a>


                

Cuepoints
Like SublimeVideo, Flowplayer also provides cuepoints, letting the user perform custom actions at pre-defined points.

Subtitles
Flowplayer supports the WebVTT (.vtt) file format for subtitles. Unlike JW Player, Flowplayer does not provide any UI element to control subtitles, which means that once a subtitle file is added in a track tag, there is no way to disable it.

Streaming Protocols
Flowplayer supports streaming through both RTMP and HTTP.

Social Sharing
Flowplayer provides a separate plugin to share videos on different social networking sites like Facebook, Twitter etc.

Advertisements
Flowplayer supports Google AdSense to display advertisements in the video. The AdSense plugin for Flowplayer must be loaded to use this feature.

 <link rel="stylesheet" href="//releases.flowplayer.org/asf/1.0.0/asf.min.css" media="screen">
 <script src="http://s0.2mdn.net/instream/html5/ima3.js"></script>
 <script src="http://releases.flowplayer.org/asf/1.0.0/asf.min.js"></script>

Support
Flowplayer provides product support through forums and support tickets. Ticketing is only available in the commercial version.

Conclusion

If videos are not a major part of your site, or you just want to see how embedded videos work, you can start with VideoJS, which is free and provides all the essential functionality. But to use the advanced features, it is better to settle for a commercial version of SublimeVideo, JW Player or Flowplayer.

Of these three, JW Player also has a vast user base, being used by ESPN, AT&T, Stanford University and many more.

References

  1. Flowplayer
  2. Sublime Video
  3. VideoJS
  4. JW Player