Introduction to Accumulators: Apache Spark


What's the Problem: Functions like map() and filter() can use variables defined outside them in the driver program, but each task running on the cluster gets a new copy of each variable, and updates to these copies are not propagated back to the driver.

The Solution: Spark provides two types of shared variables:

1.    Accumulators
2.    Broadcast variables

Here we are only interested in Accumulators. If you want to read about Broadcast variables, you can refer to this blog.

Accumulators provide a simple syntax for aggregating values from worker nodes back to the driver program. One of the most common uses of accumulators is counting events, which can help in the debugging process.

Example: to understand accumulators better, let's take an example of football results.

Div | Date     | HomeTeam      | AwayTeam | FTHG | FTAG | FTR
D1  | 14/08/15 | Bayern Munich | Hamburg  | 5    |      | H
D1  | 15/08/15 | Augsburg      | Hertha   |      | 1    | A
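To make this concrete, here is a minimal sketch that counts malformed rows with an accumulator while filtering the match results. This is a sketch only: it assumes a Spark 2.x spark-sql dependency, and the object name and sample rows are made up for illustration.

```scala
import org.apache.spark.sql.SparkSession

object AccumulatorDemo {
  def run(): (Long, Long) = {
    val spark = SparkSession.builder()
      .appName("accumulator-demo")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // Illustrative rows in the Div,Date,HomeTeam,AwayTeam,FTHG,FTAG,FTR layout
    val matches = sc.parallelize(Seq(
      "D1,14/08/15,Bayern Munich,Hamburg,5,0,H",
      "D1,15/08/15,Augsburg,Hertha,0,1,A",
      "D1,garbled line"))

    // Each task updates its own local copy; Spark merges the updates back
    // into this driver-side value once an action has run.
    val badRows = sc.longAccumulator("badRows")

    val homeWins = matches.filter { line =>
      val fields = line.split(",")
      if (fields.length != 7) { badRows.add(1); false }
      else fields(6) == "H"
    }.count() // the action forces evaluation, populating the accumulator

    spark.stop()
    (homeWins, badRows.value)
  }

  def main(args: Array[String]): Unit = println(run())
}
```

Note that Spark only guarantees exactly-once accumulator updates inside actions; updates made in transformations such as filter can be re-applied when a task is retried, which is why accumulators are best treated as debugging counters.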

View original post 337 more words


Configure turn server for WebRTC on Amazon EC2


As we all know, WebRTC is used for video communication.
In video communication, data packets are transferred from one place to another, which is how one user is able to see the other user's stream.

But sometimes, when network security measures such as a firewall are in place, the data packets cannot get through and we do not receive the other user's stream properly, i.e., we see a black screen instead of the other user's stream.

To solve this, we use a TURN server.

The TURN Server is a VoIP media traffic NAT traversal server and gateway. It can be used as a general-purpose network traffic TURN server and gateway, too.

Here, I am going to explain the steps of installing and configuring a TURN server on Amazon EC2.

First of all, download these two packages:
libevent-2.0.21-stable.tar.gz
turnserver-

Then run these commands:
1. To install the libevent package
$ tar xvfz libevent-2.0.21-stable.tar.gz
$ cd libevent-2.0.21-stable
$ ./configure
$ make
$ sudo make install

View original post 163 more words


Neo4j with Scala: User Defined Procedure and APOC


In the last blog, Getting Started Neo4j with Scala: An Introduction, which got an overwhelming response from Neo4j and DZone, we discussed how to use Neo4j with Scala. For a recap, see the blog and the code. Now we are going to take one step ahead.

As we know, in relational databases, procedures provide the advantages of better performance, scalability, productivity, ease of use, and security.

In Neo4j, we can use APOC and user-defined procedures, which provide the same advantages that we get in a relational database.

User Defined Procedure

In a relational database, we store user-defined procedures in the database and call them from there whenever needed. In Neo4j we do the same thing: here we create a procedure method with the @Procedure annotation.

A method annotated with @Procedure takes any Cypher type as parameters and returns a Stream of data…
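As a sketch, a read-only procedure that counts the nodes carrying a given label might look like this in Scala. Assumptions: the Neo4j 3.x procedure APIs are on the classpath, and the procedure name, query, and class names are illustrative. Also note that Neo4j discovers result columns and injected contexts through public Java fields, which Scala does not emit for ordinary vals/vars, so in practice the record class is often written in Java.

```scala
import java.util.stream.Stream
import org.neo4j.graphdb.GraphDatabaseService
import org.neo4j.procedure.{Context, Mode, Name, Procedure}

// Output record: Neo4j maps the record's public fields to result columns.
class LabelCount {
  var count: java.lang.Long = 0L
}

class NodeProcedures {
  // Neo4j injects the database handle into @Context-annotated fields.
  @Context
  var db: GraphDatabaseService = _

  // Callable from Cypher as: CALL example.countNodes('Person')
  @Procedure(name = "example.countNodes", mode = Mode.READ)
  def countNodes(@Name("label") label: String): Stream[LabelCount] = {
    val result = db.execute(s"MATCH (n:`$label`) RETURN count(n) AS c")
    val out = new LabelCount
    out.count = result.columnAs[java.lang.Long]("c").next()
    Stream.of(out)
  }
}
```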

View original post 797 more words


Scala-IOT: Introduction to the Internet of Things.


Recently, the word IoT has been gaining a lot of popularity, and we see a lot of news about it: the world is moving towards IoT, it's the next big thing, smart cities are no longer fiction, and so on.

As we are also a part of this world 😉 we started digging into it and exploring this land of new opportunities. So let's start with first things first:

What is IOT?

As Wikipedia says, "The internet of things (IoT) is the network of physical devices, vehicles, buildings and other items—embedded with electronics, software, sensors, actuators, and network connectivity that enable these objects to collect and exchange data."

So basically, the vision is a world where everything is connected to the Internet and can be controlled from anywhere; these things can be anything…

View original post 738 more words


Spark Session: New Entry point in Spark 2.0


Finally, after a long wait, Apache Spark 2.0 was released on Tuesday, 26 July 2016. This release is built upon the feedback gathered from industry over the past two years regarding Spark and its APIs. This means it keeps everything Spark developers loved to use, while what developers did not like has been removed.

Since Spark 2.0 is a major release of Apache Spark, it contains major changes to Spark's APIs and libraries. To understand the changes in Spark 2.0, we will look at them one by one. So, let's start with the Spark Session API.

For a long time, Spark developers were confused between SQLContext and HiveContext, i.e., when to use which. Since HiveContext was richer in features than SQLContext, many developers favored it, but HiveContext required many dependencies to run, so some favored SQLContext.

To end this confusion, the founders of Spark came up…
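The unified entry point they came up with is SparkSession. Here is a minimal sketch of it (assuming a Spark 2.x spark-sql dependency; the object name and sample data are illustrative):

```scala
import org.apache.spark.sql.SparkSession

object SparkSessionDemo {
  // A single builder replaces both SQLContext and HiveContext.
  lazy val spark: SparkSession = SparkSession.builder()
    .appName("spark-session-demo")
    .master("local[*]")      // local mode, for the sketch only
    // .enableHiveSupport() // opt back in to HiveContext-style features
    .getOrCreate()

  def main(args: Array[String]): Unit = {
    import spark.implicits._
    val df = Seq(("Bayern Munich", 5), ("Hertha", 1)).toDF("team", "goals")
    df.show()
    spark.stop()
  }
}
```

enableHiveSupport() pulls in the HiveContext behaviour only when the Hive dependencies are actually on the classpath, which is how Spark 2.0 resolves the SQLContext-versus-HiveContext dilemma.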

View original post 180 more words


Compile & load an external scala file in your application


Some days ago, I got a requirement to load an external Scala file in my Scala application and use it. The file needs to be compiled first, and then it can be used in the application.

The external file can be located anywhere outside your project.

The structure of the external file, i.e., the class name and method signature, is fixed and should be defined in your project.

The arguments and return type of the method can vary according to our requirements.

First, we will define the dependencies in build.sbt:
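The snippet was stripped from this excerpt; a minimal guess at it is the compiler itself as a library dependency (pinning it to scalaVersion.value is an assumption):

```scala
// build.sbt: the runtime toolbox used for compiling external sources
// lives in scala-compiler, with its reflection support in scala-reflect
libraryDependencies ++= Seq(
  "org.scala-lang" % "scala-compiler" % scalaVersion.value,
  "org.scala-lang" % "scala-reflect"  % scalaVersion.value
)
```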

Now, we need to define the code which will compile and load the external file in our application.

filePath is the path of the external file which needs to be loaded.
You will notice the import statement as well :

This import is there because the abstract class ExternalProcessing needs to be import…
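A runnable sketch of the compile-and-load step, using the Scala 2 runtime toolbox: the inlined source string stands in for the contents read from filePath, and the function signature is illustrative.

```scala
import scala.reflect.runtime.currentMirror
import scala.tools.reflect.ToolBox

object CompileExternal {
  // The toolbox parses and compiles Scala source at runtime in this JVM.
  private val toolbox = currentMirror.mkToolBox()

  def compileFunction(source: String): Int => Int =
    toolbox.eval(toolbox.parse(source)).asInstanceOf[Int => Int]

  def main(args: Array[String]): Unit = {
    // In the blog's setup this string would be read from the external file,
    // e.g. scala.io.Source.fromFile(filePath).mkString; it is inlined here
    // so the sketch is self-contained.
    val source = "(x: Int) => x * 2"
    val process = compileFunction(source)
    println(process(21)) // prints 42
  }
}
```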

View original post 43 more words


A basic application to handle multipart form data using akka-http with test cases in Scala


In my previous blogs, I have talked about file upload and its test cases using akka-http.

Now, in this blog I am going to explain the handling of multipart form data in akka-http with the help of a basic application which contains the code and its test cases.

So let's start with the dependencies for the application.
You have to add the following dependencies in your build.sbt:
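The dependency list was stripped from this excerpt; a plausible reconstruction looks like the following (the artifact versions are assumptions, so pick ones matching your Akka version):

```scala
// build.sbt dependencies for the multipart upload service and its tests
libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-http"         % "10.2.10",
  "com.typesafe.akka" %% "akka-stream"       % "2.6.20",
  "com.typesafe.akka" %% "akka-http-testkit" % "10.2.10" % Test,
  "org.scalatest"     %% "scalatest"         % "3.2.15"  % Test
)
```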

Now, we have to create a handler to handle multipart form data :

Whenever a file comes in a request, a new file will be created in the temp directory of your system.
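A minimal sketch of such a handler (assuming akka-http 10.2+; the route, field, and object names are illustrative):

```scala
import java.io.File
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Route
import akka.stream.scaladsl.FileIO

object MultipartUploadApp {
  implicit val system: ActorSystem = ActorSystem("upload-demo")

  // The fileUpload directive extracts the multipart part named "file" and
  // hands us its metadata plus a stream of its bytes.
  val route: Route =
    path("upload") {
      post {
        fileUpload("file") { case (metadata, byteSource) =>
          // Stream each uploaded file into a fresh temp file.
          val dest = File.createTempFile("upload-", "-" + metadata.fileName)
          onSuccess(byteSource.runWith(FileIO.toPath(dest.toPath))) { io =>
            complete(s"wrote ${io.count} bytes to ${dest.getAbsolutePath}")
          }
        }
      }
    }

  def main(args: Array[String]): Unit =
    Http().newServerAt("localhost", 8080).bind(route)
}
```

Because fileUpload hands over the bytes as a stream, large uploads never have to fit in memory.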

Below, you can find the test cases for the handler:
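A self-contained sketch of what such a test case can look like with akka-http-testkit and ScalaTest (the route is redefined inline so the spec stands alone; names are assumptions):

```scala
import akka.http.scaladsl.model.{ContentTypes, HttpEntity, Multipart}
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.testkit.ScalatestRouteTest
import org.scalatest.matchers.should.Matchers
import org.scalatest.wordspec.AnyWordSpec

class UploadRouteSpec extends AnyWordSpec with Matchers with ScalatestRouteTest {

  // Inline copy of a multipart handler so this spec is self-contained:
  // it just sums the uploaded bytes instead of writing a temp file.
  val route =
    path("upload") {
      post {
        fileUpload("file") { case (info, bytes) =>
          onSuccess(bytes.runFold(0L)((n, chunk) => n + chunk.size)) { total =>
            complete(s"received ${info.fileName}: $total bytes")
          }
        }
      }
    }

  "the upload route" should {
    "accept a multipart file part and report its size" in {
      val part = Multipart.FormData.BodyPart.Strict(
        "file",
        HttpEntity(ContentTypes.`text/plain(UTF-8)`, "hello"),
        Map("filename" -> "hello.txt"))

      Post("/upload", Multipart.FormData(part).toEntity()) ~> route ~> check {
        responseAs[String] shouldBe "received hello.txt: 5 bytes"
      }
    }
  }
}
```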

I hope you enjoyed it and that it will be helpful for you.

You can get full code here

Happy Blogging !!!

View original post
