The amazing Akka project was started by Jonas Bonér in 2009 with the aim of bringing the actor model – which has proven to deliver an availability of six nines (99.9999%) and even more – to the JVM. Akka, which is open source and available under the Apache 2 license, offers APIs for both Java and Scala. If you are interested in Akka’s history, take a look at the Akka 5 years anniversary blog post.
Over the years Akka has matured and is now widely used; recently it even won the 2015 JAX Award for Most Innovative Open Source Technology. Since its early days Akka has grown a lot, which can easily be seen by looking at the number of sub-projects under the root project on GitHub.
So why should you consider using Akka? What does it offer? In this blog post we take a look at the most important sub-projects and their features from a bird’s eye perspective in order to give you an overview of Akka’s overall capabilities. We are planning – no promise made – to take some deep-dives in follow-up posts.
The akka-actor module is Akka’s heart and soul; it’s the foundation on top of which all other modules and features are built. Essentially, it provides an implementation of the actor model without any notion of remoting, cluster awareness, persistence, etc.
Interestingly, Jonas Bonér once told me that remoting, which initially was an integral part of Akka actors, would never be factored out into a sub-module – as you can see, some things change. What remained, though, was the design for distribution: in Akka, everything is distributed by default. The network and its peculiarities are not hidden away, but instead embraced.
So what does akka-actor, which defines actors as the fundamental building blocks of your programs, give you? Here are the main features:
- Loose coupling through share-nothing and asynchronous messaging
- Resilience because of compartmentalization and delegation of failure handling
- Elasticity thanks to location transparency
Before we take a closer look at these features, we want to encourage you to read the Reactive Manifesto, which describes typical requirements and traits of “modern” systems, e.g. highly available websites or other mission-critical servers. It’s a quick read, and while it doesn’t enter completely unknown terrain, it defines a coherent vocabulary for talking about some of the things that matter in IT today.
Let’s get back to the features of Akka actors. Basically, in Akka everything is an actor and – according to Carl Hewitt, the inventor of the actor model, “one actor is no actor” – they come in systems. Actors share nothing, i.e. they discard the “shared” from “shared mutable state” – the root of all evil from a concurrency perspective. Actors exclusively communicate via asynchronous messages, which – along with the share-nothing approach – leads to rigorous decoupling and gives the other party the chance to be temporarily unavailable. Contrast that with synchronous method calls as known from mainstream imperative OO programming: until the called object gets back with a return value, the caller is blocked. Ouch!
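To make this concrete, here is a minimal sketch of fire-and-forget messaging with the classic actor API; the `Greeter` actor and the `Greet` message are made up for illustration:

```scala
import akka.actor.{ Actor, ActorSystem, Props }

// A message – immutable, so nothing mutable is shared between actors
case class Greet(name: String)

// An actor reacts to messages one at a time, touching only its own state
class Greeter extends Actor {
  def receive = {
    case Greet(name) => println(s"Hello, $name!")
  }
}

object Main extends App {
  val system  = ActorSystem("example")
  val greeter = system.actorOf(Props[Greeter], "greeter")

  // `!` (tell) is asynchronous: the sender continues immediately,
  // no matter whether the greeter is busy or even unavailable
  greeter ! Greet("World")
}
```

Note that `greeter` is an ActorRef, not the actor object itself – you can never call a method on an actor, only send it a message.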
Another nasty thing that might happen when using synchronous method calls is exceptions. Well, on the one hand you know that something went wrong. But on the other hand it has become your responsibility to take action to fix the problem. To make this more obvious, think of a vending machine which took your money but didn’t deliver the snack you eagerly wanted to eat. What do you do? Maybe kick the machine, but certainly not fix it – that’s someone else’s job. Most probably you’ll survive without the snack or just try to find some other machine that works.
With actors, in the case of failure, you just don’t get an answer to your message – that’s like not getting your snack. But the failure is delegated to some other actor that supervises the faulty one, because in Akka every actor has a parent which supervises all its child actors. It’s the supervisor’s responsibility to decide how to proceed with the faulty actor, e.g. restart or stop it. As a result, communication – sending a message and hoping for a response – is decoupled from failure handling. That means that failure is restricted to the faulty actor and its supervisor and doesn’t spread towards the caller. In other words, failure is compartmentalized, which means that only a part of the system is affected instead of the whole system.
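A sketch of such parental supervision with the classic API – the `Worker` child and the choice of exceptions are illustrative, only `OneForOneStrategy` and the directives come from Akka itself:

```scala
import akka.actor.{ Actor, ActorRef, OneForOneStrategy, Props }
import akka.actor.SupervisorStrategy.{ Restart, Stop }
import scala.concurrent.duration._

class Parent extends Actor {

  // The parent decides what happens when a child fails:
  // restart on recoverable errors, stop otherwise
  override val supervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 3, withinTimeRange = 1.minute) {
      case _: IllegalStateException => Restart
      case _: Exception             => Stop
    }

  val worker: ActorRef = context.actorOf(Props[Worker], "worker")

  def receive = {
    case msg => worker forward msg // callers never see the child's failures
  }
}

class Worker extends Actor {
  def receive = {
    case "boom" => throw new IllegalStateException("simulated failure")
    case msg    => println(s"processing $msg")
  }
}
```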
Last but not least, it’s not important to know the physical location of an actor to talk to it – this is called location transparency. That’s because each actor has a logical address you use to talk to it; it’s physical location is hidden from you, decoupling you from it. Therefore, even if an actor resides on a remote node – which requires using Akka Remoting mentioned below – someone can send messages to the remote actor’s address without being aware of the fact that the actor isn’t part of the local actor system.
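As a sketch of what location transparency looks like in code, compare a local and a remote actor path – the system name, host, port and actor names below are made up, and the remote case assumes akka-remote (covered below) is enabled:

```scala
import akka.actor.{ ActorSelection, ActorSystem }

object LocationTransparency extends App {

  val system = ActorSystem("example")

  // A local logical address …
  val local: ActorSelection =
    system.actorSelection("akka://example/user/greeter")

  // … and a remote one – only the address string differs
  val remote: ActorSelection =
    system.actorSelection("akka.tcp://example@10.0.0.2:2552/user/greeter")

  // Sending works the same way in both cases; the sender's code
  // doesn't care where the actor physically lives
  local ! "hello"
  remote ! "hello"
}
```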
To sum it up, Akka actors enable you – while on a very low level – to write systems which are quite reactive. Of course you need distribution for real resilience and scalability, but Akka actors bring all the foundations needed – the rest is covered by other modules and features.
akka-remote is an extremely important module, because it enables remote communication and real location transparency. But apart from a couple of configuration settings, it stays in the background and essentially works as an enabler.
If you want to enable remoting, you just have to override some default configuration settings:
akka {
  actor {
    // The default is "akka.actor.LocalActorRefProvider"
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    netty.tcp {
      hostname = "127.0.0.1" // that's the default
      port = 9001            // the default is 2552
    }
  }
}
Essentially, you just have to configure the RemoteActorRefProvider. This allows you to have actors deployed on remote actor systems, including remote death watch, failure detection, etc. While this is fantastic, it’s too low level for most cases, because it requires you to know the exact remote addresses of the collaborating actor systems.
This is where Akka Cluster – which is comprised of several modules, e.g. akka-cluster, akka-cluster-tools or akka-cluster-sharding – comes into play. At its core it provides a membership service which allows actor systems to join and/or leave a cluster. Any actor can register as a listener for cluster events, e.g. MemberUp or MemberRemoved, which allows these actors to dynamically gain knowledge about potential remote communication partners. In order to provide a consistent view of the current cluster state, a distributed failure detector monitors the health of the individual member nodes and possibly declares member nodes unreachable, which results in UnreachableMember events.
While you can use the cluster events directly, you most probably encounter them implicitly, because they are the foundation of a couple of higher-level features, e.g.:
- Cluster-aware routers: routees can either be created or looked up on remote member nodes
- Cluster Singleton: only one instance of a particular actor in the cluster
- Cluster Sharding: distribute a potentially large number of actors across the member nodes
- Distributed Data: consistent data replication without central coordination based on CRDTs
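If you do want to react to cluster events yourself, a sketch of a listener actor looks like this – the event classes come from the akka-cluster API, while the listener itself and what it does with the events are illustrative:

```scala
import akka.actor.Actor
import akka.cluster.Cluster
import akka.cluster.ClusterEvent.{
  InitialStateAsEvents, MemberEvent, MemberUp, UnreachableMember
}

class ClusterListener extends Actor {

  val cluster = Cluster(context.system)

  // Subscribe to membership and reachability events;
  // the current cluster state is replayed as events on subscription
  override def preStart(): Unit =
    cluster.subscribe(self, initialStateMode = InitialStateAsEvents,
      classOf[MemberEvent], classOf[UnreachableMember])

  override def postStop(): Unit =
    cluster.unsubscribe(self)

  def receive = {
    case MemberUp(member) =>
      println(s"Member up: ${member.address}")
    case UnreachableMember(member) =>
      println(s"Member unreachable: ${member.address}")
  }
}
```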
There are various reasons for an actor to get restarted, e.g. in reaction to a program failure (an exception) or a hardware or network failure (a remote node becoming unavailable). As actors completely hide away their internal state – if any – the only general way to restore an actor to the same state after a restart is to send it the same messages as before.
Obviously this is a great fit for Event Sourcing, and that’s exactly what Akka Persistence is all about: restoring an actor’s state by applying the concepts of Event Sourcing. Therefore it distinguishes between commands and events. If a persistent actor receives a command, it might create an event, ask Akka Persistence’s journal – there are numerous journal backends, e.g. based on Cassandra or Kafka – to persist it, and once that’s confirmed apply the event to its state. During recovery all the events are replayed, which leads to the same state as before. Of course there’s also support for snapshots to avoid long recovery times for large numbers of events.
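The command/event split can be sketched with a persistent actor like the following – `PersistentActor`, `persist` and the two receive methods are the Akka Persistence API, while the counter, `Add` and `Added` are made up for illustration:

```scala
import akka.persistence.PersistentActor

case class Add(value: Int)   // command: a request to change state
case class Added(value: Int) // event: the fact that state was changed

class Counter extends PersistentActor {

  // Identifies this actor's event stream in the journal
  override def persistenceId = "counter-1"

  var state = 0

  // Commands: persist an event first, apply it only once confirmed
  override def receiveCommand = {
    case Add(value) =>
      persist(Added(value)) { event =>
        state += event.value
      }
  }

  // Recovery: replaying the journaled events rebuilds the same state
  override def receiveRecover = {
    case Added(value) => state += value
  }
}
```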
Akka Streams and Akka HTTP
Akka Streams and Akka HTTP are the new kids on the block. They are still experimental and not yet part of the “official” Akka distribution, meaning that they have their own version number – 1.0 at the time of writing this post. It’s planned to make them proper citizens of Akka 2.4, though, which is supposed to be released in the foreseeable future.
Akka Streams is an implementation of Reactive Streams which has been specified and implemented by a number of parties including Reactor, RxJava, Slick and Vert.x. Reactive Streams is all about asynchronous stream processing with non-blocking back pressure and Akka Streams – obviously – uses actors for the implementation.
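A minimal sketch of what such a stream looks like with the 1.0-era Akka Streams API – the numbers and transformations are made up for illustration:

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Sink, Source }

object StreamsExample extends App {

  implicit val system       = ActorSystem("streams")
  implicit val materializer = ActorMaterializer() // runs the stream on actors

  // Back pressure is handled under the hood: the sink signals demand,
  // and upstream stages only emit as fast as downstream can consume
  Source(1 to 100)
    .map(_ * 2)
    .filter(_ % 3 == 0)
    .runWith(Sink.foreach(println))
}
```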
A perfect use case for Akka Streams is Akka HTTP, which is the evolution of the very successful spray project: an HTTP server accepts a stream of HTTP requests and produces a stream of HTTP responses. Also, the bodies of HTTP entities, which essentially are one or more chunks of data, can be nicely expressed as streams of bytes.
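A sketch of a minimal server with the 1.0-era routing DSL – the route, host and port are illustrative:

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.stream.ActorMaterializer

object HttpExample extends App {

  implicit val system       = ActorSystem("http")
  implicit val materializer = ActorMaterializer() // Akka HTTP runs on streams

  // The spray-style routing DSL, carried over into Akka HTTP
  val route =
    path("hello") {
      get {
        complete("Hello, World!")
      }
    }

  // Under the hood: a stream of requests in, a stream of responses out
  Http().bindAndHandle(route, "127.0.0.1", 8080)
}
```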
We have given an overview of a couple of Akka modules, from the very low-level and essential Akka actors, which “simply” implement the actor model, up to high-level abstractions like Akka Cluster, Akka Persistence and Akka HTTP, which are all built on top of the foundation provided by Akka actors. Therefore each of these modules gives you the benefits of the actor model: loose coupling, resilience and elasticity.
As already mentioned we are planning to write some follow-up posts which cover the individual modules in greater depth. Questions and feedback are highly appreciated.