Scalability is the primary focus of the Crossroads I/O project. The idea is that by using it, your applications become scalable even if you haven't explicitly strived for scalability. Let's illustrate the point with a couple of examples.
Imagine that you are using Crossroads I/O for communication between two threads in a single process:
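Crossroads itself exposes a C API (sockets connected via an "inproc://" endpoint would carry the messages between the threads). As a runnable, standard-library illustration of the same idea — two components exchanging messages inside one process — here is a plain-Python sketch in which a thread-safe queue stands in for the in-process transport:

```python
import threading
import queue

# A queue stands in for the in-process link between the two components.
# (Crossroads itself would use a pair of sockets sharing an "inproc://..."
# endpoint; this is only a standard-library illustration of the pattern.)
channel = queue.Queue()

def component_a(ch):
    # Component A produces work items and sends them to component B.
    for i in range(3):
        ch.put(f"task-{i}")
    ch.put(None)  # sentinel: no more work

def component_b(ch, out):
    # Component B consumes work items until the sentinel arrives.
    while (item := ch.get()) is not None:
        out.append(item.upper())

results = []
a = threading.Thread(target=component_a, args=(channel,))
b = threading.Thread(target=component_b, args=(channel, results))
a.start(); b.start()
a.join(); b.join()
print(results)
```

The business logic lives entirely in the two component functions; the channel connecting them is the only part the later examples will change.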
At some point, the load becomes too high to be handled by a single machine. Because the API treats transports as opaque and gives them identical semantics, you can separate the two threads and move them to different machines without having to change the business logic of your application:
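In Crossroads, this step amounts to swapping the endpoint string (for example, an "inproc://" endpoint for a "tcp://host:port" one) while the send/receive logic stays the same. To illustrate that separation with the standard library alone, the sketch below reuses the same two-component logic but runs it over a real TCP connection on localhost (the host and port here are placeholders, not part of any Crossroads API):

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 0  # port 0: let the OS pick a free port

# Component B now listens on a network transport instead of an in-process
# channel -- the processing logic itself is unchanged.
server = socket.create_server((HOST, PORT))
addr = server.getsockname()
results = []

def component_b():
    conn, _ = server.accept()
    with conn:
        data = conn.recv(1024).decode()
        results.append(data.upper())

t = threading.Thread(target=component_b)
t.start()

# Component A: same business logic, now connecting over the network.
with socket.create_connection(addr) as c:
    c.sendall(b"task-0")

t.join()
server.close()
print(results)
```

Only the "wiring" between the components changed; with Crossroads, the change would be even smaller, since the same socket calls work across transports.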
Now imagine that later on even using two machines isn't enough to handle the load. Component B uses 100% of the CPU time on its dedicated machine and becomes the bottleneck of the whole system.
Crossroads messaging patterns (publish/subscribe, request/reply, etc.) are designed in such a way that you can run an additional instance of component B without changing your application:
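With Crossroads, the pattern itself distributes work among however many peers are connected. As a standard-library sketch of that effect, the example below runs three identical instances of component B against one shared work source; neither component's code changes, only the number of running instances:

```python
import threading
import queue

channel = queue.Queue()
results = queue.Queue()  # thread-safe collection of processed items

def component_b(ch, out):
    # Unchanged worker logic: extra instances simply run the same code.
    while (item := ch.get()) is not None:
        out.put(item.upper())

# Run three identical instances of component B side by side.
workers = [threading.Thread(target=component_b, args=(channel, results))
           for _ in range(3)]
for w in workers:
    w.start()

# Component A is also unchanged: it just sends work into the channel.
for i in range(9):
    channel.put(f"task-{i}")
for _ in workers:
    channel.put(None)  # one sentinel per worker instance
for w in workers:
    w.join()

processed = []
while not results.empty():
    processed.append(results.get())
processed.sort()
print(processed)
```

Adding a fourth or fifth instance is just a change to the `range(3)` above; the same property is what lets a Crossroads topology grow without code changes.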
When scaling further, you can add a third instance, a fourth one, and so on. If needed, you can run 100 or 1,000 instances of component B, still without modifying your code.
When scaling even further, a point may come where the sheer overhead of transporting the data becomes the bottleneck. For example, in the publish/subscribe pattern the data link between component A and the instances of component B may become saturated. This is especially likely when the components are located at different sites, since WAN links are slow and have limited bandwidth.
To solve this problem, we can insert a so-called "device" (a simple application using two Crossroads sockets) into the topology. By not sending duplicate copies of the data over the WAN link, it can reduce bandwidth usage, possibly by orders of magnitude:
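A device of this kind receives each message once over the expensive link and re-distributes it over cheap local links at the remote site. The standard-library sketch below simulates that topology (the queues playing the roles of the WAN and LAN links are illustrative stand-ins, not Crossroads sockets): four updates cross the simulated WAN once each, rather than once per subscriber.

```python
import threading
import queue

# Simulated links: one "WAN" queue from the publisher to the device, and
# one local queue per subscriber at the remote site. In Crossroads the
# device would be a tiny app with two sockets, one facing the WAN and one
# facing the LAN.
wan_link = queue.Queue()
local_links = [queue.Queue() for _ in range(3)]
wan_messages_sent = 0

def publisher():
    global wan_messages_sent
    for i in range(4):
        wan_link.put(f"update-{i}")  # each update crosses the WAN once
        wan_messages_sent += 1
    wan_link.put(None)

def device():
    # Receives once from the WAN, fans out over the cheap local links.
    while (msg := wan_link.get()) is not None:
        for link in local_links:
            link.put(msg)
    for link in local_links:
        link.put(None)

received = [[] for _ in local_links]

def subscriber(idx):
    while (msg := local_links[idx].get()) is not None:
        received[idx].append(msg)

threads = [threading.Thread(target=publisher), threading.Thread(target=device)]
threads += [threading.Thread(target=subscriber, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()

# 4 messages crossed the WAN instead of 4 * 3 = 12.
print(wan_messages_sent, [len(r) for r in received])
```

With 1,000 subscribers behind the device instead of 3, the WAN traffic would stay at one copy per message, which is where the orders-of-magnitude saving comes from.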