Concurnas is a new open source JVM programming language designed for building concurrent and distributed systems. Concurnas is a statically typed language with object-oriented, functional, and reactive programming constructs.
With a concise syntax that hides multithreaded complexity, and native support for GPU computing, vectorization, and data structures like matrices, Concurnas allows for building machine learning applications and high-performance parallel applications. In addition, Concurnas provides interoperability with other JVM languages like Java and Scala. Concurnas supports Oracle JDK and OpenJDK versions 1.8 through the latest GA release, 14.
InfoQ spoke to Jason Tatton, creator of Concurnas and founder of Concurnas Ltd., about the language, its design decisions, and its features.
Jason: During the first phase of my career I worked in investment banking, running teams and building trading models and systems for high-frequency trading. I saw that the engineering problems we were solving on a day-to-day basis were mostly centered around building reliable, scalable, high-performance distributed concurrent systems. I found that the thread-and-lock, shared-mutable-state model of concurrency exposed within most popular programming languages (such as Java and C++) was just too difficult for even exceptionally talented, world-class engineers to get right. I thought to myself, "there has to be a better way of solving these sorts of concurrent problems". So I quit my job in 2017 and set out to solve this problem, and Concurnas, a programming language for making concurrent programming easier, was born.
Jason: There is much room for innovation in the areas of both runtimes/virtual machines (such as LLVM or the JVM) and host languages. Building either a new language or a new virtual machine is a massive undertaking and realistically, especially when working with a small team and a tight deadline, one must choose to focus on one area or the other.
With Concurnas I chose to focus upon the language, which left the decision of which runtime to use. Performance-wise, LLVM and the JVM are similar. In the end the JVM was chosen for two reasons. First, the JVM is the most popular and widely distributed virtual machine on the planet - most enterprises use Java and so have an established use case for the JVM. Second, there is a large body of existing enterprise-scale code written in JVM languages such as Java, Scala and Kotlin which is crying out to be leveraged from a language that can provide a model of concurrency which is easier to understand and use. By implementing Concurnas on the JVM, users are afforded the capability to utilize all of their existing JVM language code. We also gain access to the Java standard library and so do not have to create one from scratch in support of the language.
Jason: As Concurnas is designed primarily as a language to make concurrent programming easier for everyone we started to call it Concur, for "Concur-rent". Later on the "-nas" was tagged on to make "Concurnas" as this sounded nicer.
Jason: Concurnas is designed for solving concurrent, parallel and distributed computing problems. To a large extent the sorts of problems Concurnas is good at solving exist within both the systems and application domains of programming. One thing we are researching now is adopting a more Rust-like model of memory management in order to give users who need to do more low-level memory management the opportunity to do so, whilst having the existing automatic memory management with garbage collection in Concurnas to fall back on if they so wish.
Jason: Although the JVM and the Java language offer tremendous performance, they have not been widely adopted for ML applications, which is unfortunate. I believe this may be because of the verbosity of the Java language. Concurnas solves a lot of these problems, so on that basis I would say that it's an excellent candidate thanks to its model of concurrency and first-class citizen support for GPU computing - things which are tremendously beneficial for ML applications.
Furthermore, we're looking at adding first-class citizen support for automatic differentiation to the language. In this way Concurnas will become the second language, in addition to Apple Swift, to support this feature at the language level. This will be of tremendous benefit for implementing ML algorithms and for users working in finance on derivatives calculations.
Jason: The GPUs present in modern graphics cards can be leveraged for general-purpose, massively parallel computation. It is common for algorithms which leverage GPUs to improve computational speed by up to 100x vs a conventional single-core CPU-based algorithm. Furthermore, per FLOP this computation is provided with a much reduced power consumption and hardware cost vs a CPU-based implementation. Traditionally, users have had to learn C/C++ in order to leverage the GPU for general-purpose computation, which for many presents a large barrier to entry. Concurnas has first-class citizen support for GPU computing built in. Users can write idiomatic, normal-looking Concurnas code and have that code run directly upon the GPU without having to first learn C/C++.
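To give a flavor of what this looks like, the sketch below is modeled on the `gpukernel` style shown in the Concurnas GPU documentation, in which a rank-1 kernel written in ordinary Concurnas code is executed by many GPU threads in parallel; treat the exact signature details as indicative rather than canonical:

```
//each GPU thread computes one element of the output array
gpukernel 1 vectorAdd(n int, in a float[], in b float[], out result float[]) {
    idx = get_global_id(0)  //this thread's index in the global work grid
    if(idx < n) {
        result[idx] = a[idx] + b[idx]
    }
}
```

Note that this is plain, idiomatic Concurnas - no C/C++ and no separate kernel language is required.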
Providing this feature was very satisfying for two reasons. First, GPU computing can make a real difference in reducing the environmental cost of computation. Providing language-level support for this opens up the floodgates for developers to start leveraging GPU hardware and reaping the benefits. Second, on a technical level, the GPU computing component of Concurnas is predominantly written in Concurnas language code itself! This presented some interesting technical challenges concerning bootstrapping the compilation of the Concurnas language compiler, but it was important to do this in keeping with the "eat your own dog food" principle.
Jason: The core concurrency primitive exposed within Concurnas is the isolate. Isolates are lightweight threads which can perform computation concurrently. All code in Concurnas is executed within isolates. Isolates cannot directly share memory with one another; dependent data is copied into isolates at the point of creation, which prevents accidental sharing of state that could otherwise lead to unscalable, non-deterministic program behavior. Controlled communication of state between isolates is achieved via special objects known as refs, support for which is provided by the type system of Concurnas itself.
Whereas with raw threads we are bounded by the restrictions of our JVM in terms of the number we may spawn, with isolates we are bounded only by the amount of memory our machine(s) have access to. In this way isolates scale much better than raw threads. In terms of execution, isolates are multiplexed and run in a cooperative manner as continuations. When an isolate hits a point where it is unable to continue computation because it is waiting for data from another isolate (communicated via a ref), it will yield execution of its underlying raw thread so that the thread may execute a different isolate. Whether we are executing on a single-core or a many-core machine, the fundamental model of concurrent execution remains the same.
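As a rough illustration of this model, the sketch below uses the bang (`!`) operator, which Concurnas provides for running a function call in a new isolate and returning its result as a ref; the syntax follows the public Concurnas documentation, but it is intended as an indicative sketch rather than canonical code:

```
def slowAdd(a int, b int) {
    //imagine an expensive computation here
    a + b
}

result = slowAdd(2, 3)!  //the ! spawns an isolate; result is a ref (int:)
//reading the ref blocks this isolate until the value arrives, yielding
//the underlying raw thread so it can run other isolates in the meantime
sum = result + 1
```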
A side effect of this model is that we are able to use refs in support of the reactive programming paradigm. We can create special isolates (via the `every` and `onchange` keywords, or the relevant compact Concurnas syntax) which react and trigger on changes made to one or many input refs, and optionally return refs themselves - thus creating reactive graphs of calculation for ourselves. This is a natural way of solving concurrent problems.
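The reactive constructs mentioned above can be sketched as follows; the `every` and `onchange` keywords are part of the language, though the precise forms used here should be checked against the Concurnas documentation:

```
a int: = 1  //a trailing colon on the type declares a ref
b int: = 2

//spawn a special isolate which re-evaluates whenever a or b changes,
//itself returning a ref that downstream code can watch in turn
sum = onchange(a, b) {
    a + b
}

a = 10  //assigning to the ref triggers re-evaluation of sum
```

`every` behaves similarly but also evaluates its body once up front, whereas `onchange` fires only when an input ref changes.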
Jason: The build-out of Concurnas started in 2017, with the first production release as open source under the MIT license in December 2019. Concurnas is now production ready, and at Concurnas Ltd. we are able to offer commercial support packages for organizations of all sizes which require them. There is also a growing body of free community resources concerning the language available on the internet.
Jason: Since the inception of Concurnas three years ago, a lot has been achieved. It's very exciting to imagine what we will be able to achieve as the community continues to grow over the next five, ten or thirty years! In the immediate future, as previously mentioned, we are looking at adding automatic differentiation to the language. In addition to this we are looking at improved support for off-heap memory management for working with large data sets, and an improved GPU computing interface. Finally, we are looking at providing developer tooling in the form of support for Jupyter notebooks and IDE integration for VS Code, IntelliJ and Eclipse.
We are very much community focused, open to feedback and committed to satisfying the needs of our customers that are actively making use of the language. To this end we would love to hear from any readers on what they would like to see in the Concurnas programming language; please feel free to get in touch via one of the methods listed on the Concurnas website.