Reducing Environmental Complexity of External DSLs with Projectional Language Workbenches

As part of my Computer Science PhD, I have written a poster to outline the research area that I’m currently exploring and the possible future directions. If you’d like to ask any questions or would like to discuss the research area, please leave a comment or send an email.

Software engineering applies the principles of engineering to software development. It must take a holistic view of environmental complexity to avoid an expensive software bottleneck, whereby the growing demand for ever more complex software outstrips the supply of developers (Banker, 1987). To date, various methods have been developed to alleviate this bottleneck. The standard technique is a divide-and-conquer approach built on higher levels of abstraction, such as libraries and object-oriented frameworks (Van Deursen, Klint and Visser, 2000). A relatively recent alternative is to develop domain-specific languages (DSLs).

DSLs are smaller languages targeted at a particular domain; they are a way of controlling abstraction. Fundamentally, DSLs provide a high-level set of features closely aligned with the problem domain, allowing an easier mapping from the developer's conceptual model to the programming implementation. DSLs fall into two main categories: internal and external. Internal DSLs are built within a general-purpose programming language (GPL), while external DSLs are built independently of one. External DSLs can offer much more syntactic flexibility than internal DSLs, but at the cost of building a parser, providing a programming environment, maintaining the language, and repeating functionality that GPLs already provide (Fowler and Parsons, 2010).

A possible solution to the environmental complexities of implementing and supporting external DSLs is projectional language workbenches. These are an emerging set of language-creation environments in which the user's actions directly manipulate the abstract syntax tree (AST) (Berger and Völter, 2016), so they avoid the need for parsers to build an AST from a concrete syntax. Previous research (Voelter, 2014; Voelter and Lisson, 2014) has shown that modern projectional language workbenches, such as JetBrains MPS, are a promising tool for expressive language creation.

1) Banker, R., 1987, December. Factors Affecting Software Maintenance Productivity: An Exploratory Study. In ICIS (p. 27).
2) Van Deursen, A., Klint, P. and Visser, J., 2000. Domain-specific languages: An annotated bibliography. ACM Sigplan Notices, 35(6), pp.26-36.
3) Fowler, M. and Parsons, R., 2010. Domain-Specific Languages. Boston, Mass.: Addison-Wesley.
4) Berger, T., Völter, M., 2016, November. Efficiency of projectional editing: A controlled experiment. In Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering (pp. 763-774). ACM.
5) Voelter, M., 2014, September. Towards user-friendly projectional editors. In International Conference on Software Language Engineering (pp. 41-61). Springer, Cham.
6) Voelter, M. and Lisson, S., 2014, September. Supporting Diverse Notations in MPS' Projectional Editor. In GEMOC@MoDELS (pp. 7-16).

Poster Download: Reducing environmental complexity.pdf

Summer 2015 Update: Scala, DSL’s and the BBC Freedom Festival


My aim for this post is for it to be an informal update on what I have been doing this summer. So far I've been working for the Hull Computer Science department, researching the Scala programming language with internal domain-specific languages (DSLs), and this area of research has been incredibly rewarding. Scala is a fascinating language that offers a huge amount of features to explore, especially on the functional side. I find this intriguing, coming from more of a procedural-language background, as it is a new way to approach and solve problems. I've dramatically improved my programming expertise, bringing techniques and styles from the functional paradigm into my existing projects. I'd strongly recommend all programmers learn different paradigms, as it can change the way you think and therefore how you approach code.

Throughout this summer I've been experimenting with Scala, and the small projects I have created as a result of my research include a predictive grade calculator. This program accepts the modules you are taking, their weightings and your currently known grades, then predicts what you may get and what you would need in the remaining pieces of work to achieve your target grade. I also wrote a C syntax-highlighting IDE and a book-ordering system that uses a DSL front end to order large batches of books. The aim of these small projects was to test specific features of Scala and how they might be used in potential DSLs. I then progressed to converting an existing Java-based model simulation framework to Scala, to explore the conciseness and expressiveness of the language compared with Java; using an existing Java framework also allowed me to test the interoperability of numerous Scala and Java classes. This work is still ongoing, and I am currently building and testing different DSL design patterns in Scala to create a DSL front end for the framework. Some of the patterns I am researching include nested functions, literal extensions through Scala implicits, closures, method chaining and function sequences. So far this work has been extremely fun and rewarding, and I'm really enthusiastic to keep researching in this area.
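For a flavour of what these patterns look like, here is a minimal sketch of the method-chaining pattern applied to a book-ordering front end. All class and method names here are invented for illustration; this is not code from the actual framework:

```scala
// Hypothetical method-chaining DSL for a book-ordering front end:
// each method returns `this`, so calls read as one fluent sentence.
class Order {
  private var items = List.empty[(String, Int)]

  def add(title: String): Order = {
    items = (title, 1) :: items
    this
  }

  // Adjust the quantity of the most recently added title.
  def copies(n: Int): Order = {
    items = items match {
      case (title, _) :: rest => (title, n) :: rest
      case Nil                => Nil
    }
    this
  }

  def totalCopies: Int = items.map(_._2).sum
}

val order = new Order()
  .add("Programming in Scala").copies(3)
  .add("Domain-Specific Languages").copies(2)

println(order.totalCopies) // prints 5
```

The trick is simply that every builder method ends in `this`, letting the host language's ordinary method-call syntax read like a domain sentence.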

BBC Make It Digital

BBC Make It Digital Freedom Festival Hull – September 2015

More recently I took part in the BBC Freedom Festival at Hull, a festival promoting programming for children, where I represented my university and Computer Science department. We were housed in the main BBC booth, where we each gave demonstrations of numerous virtual reality environments to the public using Oculus Rift DK2s. All the environments we demonstrated were made by the Computer Science department of Hull University. The research being conducted through these virtual environments was into training people to operate safely in potentially dangerous environments, such as scaling a wind turbine in the ocean or operating heavy machinery. These modern virtual reality headsets offer an immersive experience through stereoscopic 3D and motion sensing using accelerometers and gyroscopes, which provides a unique opportunity to explore their uses in a visual education context. During the two-day event, 17,268 people turned up, and we were told afterwards that our demos were among the most popular at the festival. It was an amazing experience working alongside the BBC and my colleagues, and getting to show and explain to people what exciting things my department has been doing. Overall I had a fantastic time.



Basic Introduction to Scala Part 1

Scala's name comes from the idea of a "scalable language": its ambition is to scale to solve a more diverse set of programming problems more easily than general-purpose languages (GPLs). Scala can feel similar to a scripting language and supports full functional programming, and through compiler type inference its syntax can often be very concise. These attributes of Scala evolved from criticisms of general-purpose programming languages such as Java.

Preliminary work on Scala was undertaken by Martin Odersky in 2001 at École Polytechnique Fédérale de Lausanne. Odersky, who designed Scala and Generic Java, currently serves as its chairman and chief architect. His aim was to incorporate functional and object-oriented programming without the normal constraints of standard GPLs such as Java. Scala evolved from a research endeavour to develop better language support for component software, drawing on his previous work on Funnel, a minimalist research language based on functional nets. Functional nets are beyond the scope of this post (I may write one specifically about them later), but the basic idea is that they combine fundamental ideas from Petri nets and functional programming to produce a general programming notation. Funnel was a good idea, but it proved too complex for non-expert users. Scala emerged from the ideas behind Funnel and an aspiration to make the language interoperable with other standard systems.

Scala is designed to interact with standard languages such as Java and C#. It adopts much of the syntax and type systems of these languages, but with some fundamental changes to work around some of their restrictions. As a result, Scala is not a superset of Java: it excludes and re-implements some features to aid the goal of improved uniformity of objects. In addition, Scala is object-oriented and shares most of its basic operators, data types and control structures with standard GPLs.

Scala source code is designed to be compiled to Java bytecode, which means Scala can run on a diverse set of systems and is extremely portable. Furthermore, there is no performance penalty for bytecode compiled from Scala as opposed to Java, although peak bytecode performance can be significantly lower than that of natively compiled C/C++ programs. Because Scala compiles straight to Java bytecode, Scala and Java can use each other's plethora of libraries, achieving the high level of language interoperability that is one of the main features of the language.
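To illustrate this interoperability, here is a small sketch of Scala code using Java standard-library classes directly. Note one assumption: the collection converters live in `scala.jdk.CollectionConverters` in Scala 2.13 and later; older releases used `scala.collection.JavaConverters` instead:

```scala
// Java classes can be used directly from Scala with no bridging layer.
import java.util.{ArrayList => JArrayList}

// Converters between Java and Scala collections (Scala 2.13+; older
// versions used scala.collection.JavaConverters).
import scala.jdk.CollectionConverters._

val names = new JArrayList[String]()
names.add("Odersky")
names.add("Spoon")

// Wrap the Java list and use idiomatic Scala operations on it.
val upper = names.asScala.map(_.toUpperCase).toList

println(upper) // prints List(ODERSKY, SPOON)
```

The Java `ArrayList` and the Scala `List` here live side by side in one program, compiled to the same bytecode.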

Scala bytecode

Scala isn't a traditional object-oriented language or a functional language; it's something quite new. Object-oriented programming was invented in the 1960s and has been the mainstream paradigm since the 1980s, partly due to the growing importance of GUIs, which object-oriented programming suited aptly. But in today's world, other features, such as good techniques for concurrent programming, are becoming more prominent, so even though the idea of functional programming is old, it's starting to break out into the limelight.

Before proceeding, it's important to note that functional programming is based strongly on functions in the mathematical sense. In mathematics, functions are pure: they modify no global state of any kind, and a function always returns the same result given the same arguments. Furthermore, variables in functional programming are immutable, as opposed to procedural programming where they are mutable. Consequently, functional programming has a large benefit over object-oriented programming when it comes to concurrency: in object-oriented code a lot of issues arise in synchronizing access to shared, mutable state, and because functional languages disallow mutable variables, this becomes much less of an issue. Scala, though, is not purely a functional language but a hybrid of functional and object-oriented. As such, Scala does not require variables to be immutable or functions to be pure, but writing your code in a functional style is strongly encouraged.
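To make the distinction concrete, here is a small sketch of a pure function and immutable bindings in Scala (the function itself is just an illustrative example):

```scala
// A pure function: its result depends only on its arguments, and it
// neither reads nor modifies any outside state.
def meanSquare(xs: List[Double]): Double =
  xs.map(x => x * x).sum / xs.length

// `val` bindings are immutable; "updating" a list builds a new one,
// leaving the original untouched.
val xs = List(1.0, 2.0, 3.0)
val ys = 0.0 :: xs

println(meanSquare(xs))            // (1 + 4 + 9) / 3 ≈ 4.67
println(xs == List(1.0, 2.0, 3.0)) // prints true: xs is unchanged
```

Because `meanSquare` touches no shared state, two threads can call it on the same list simultaneously with no synchronization at all.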

Scala is a very versatile language that aims to be concise. It achieves this through type inference and by cutting down on the boilerplate code that plagues Java; one small example is that semicolons are optional and normally omitted. A good example of how concise Scala can be compared with Java is a class constructor.

Scala vs Java v2

The class constructor syntax in Scala is significantly more concise than the Java implementation. When the Scala compiler comes across this code, it outputs a class with two private instance variables, assigning each the name and datatype described in the brackets. The Scala syntax therefore makes the code faster to write and, because there is less of it, less likely to contain errors.
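For readers who can't see the image, the comparison is roughly of this kind (the `Person` class is a hypothetical example of my own, not taken from the image):

```scala
// Scala: the primary constructor and its public fields are declared
// directly in the class header.
class Person(val name: String, val age: Int)

// The rough Java equivalent needs explicit fields, a constructor
// body and getters:
//
//   public class Person {
//       private final String name;
//       private final int age;
//       public Person(String name, int age) {
//           this.name = name;
//           this.age = age;
//       }
//       public String getName() { return name; }
//       public int getAge() { return age; }
//   }

val p = new Person("Ada", 36)
println(p.name + " is " + p.age) // prints Ada is 36
```

One line of Scala replaces roughly a dozen lines of Java, with the compiler generating the fields and accessors.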

In conclusion, the aim of this post was to explain what Scala is and why it differs from traditional programming languages. My next post will expand upon this and move on to programming in Scala.

Article sources:

[1] – Odersky, Martin. "Functional Nets." Programming Languages and Systems. Springer Berlin Heidelberg, 2000. 1-25.
[2] – Odersky, Martin, Lex Spoon, and Bill Venners. Programming in Scala. Artima Inc, 2008.
[3] – Odersky, Martin, et al. An Overview of the Scala Programming Language. No. LAMP-REPORT-2004-006. 2004.
[4] – Wampler, Dean, and Alex Payne. Programming Scala: Scalability = Functional Programming + Objects. O'Reilly Media, Inc., 2009.

Basic Introduction to Domain Specific Languages

The way most people program today is through general-purpose languages (GPLs) such as C++ and Java. These are good for making programs that span multiple domains or problem spaces, but they lack features specifically suited to smaller tasks. This is where domain-specific languages (DSLs, sometimes called application-oriented languages) come in. A DSL can be a programming language or an executable specification language whose notation is tailored to a very specific domain. DSLs are based on the concepts and features of that domain, and as a result they give up generality for expressiveness in their target area. Examples of DSLs in use today are HTML, LaTeX and SQL.

The advantages of DSLs over GPLs are that they allow very concise code that clearly expresses its purpose and the domain's idioms. They also allow faster programming within the domain the DSL is designed for, as much of the complexity of programming is abstracted away; with an internal DSL, for example, a library could be used to hide the complexity of algorithms from the user. Because DSLs reduce the amount of programming and domain expertise needed, less experienced programmers can use the algorithms without having to understand their implementation in a GPL. Most important of all, DSLs incorporate domain knowledge: the language supports the domain's concepts and notation.

The disadvantages of DSLs include the cost of creating, maintaining and supporting both the language itself and its tooling, and of training users in its use, all of which can be expensive in time and money. Another drawback is the limited availability of resources and support: GPLs have large user bases with large ecosystems of tools built around them, whereas the specialist domains DSLs are created for mean their user bases are small, so less support is available and the supporting tools are often proprietary and limited. Issues may also arise from a proliferation of non-standard DSLs, such that skills learned for one DSL become useless on moving to a company in the same domain that uses an alternative DSL. Finally, where the focus is on full code generation, the generated code will probably be less efficient than hand-written code, although this will likely affect only a small number of use cases.

DSLs are not a new concept: APT, a DSL for programming numerically controlled machines, was developed in 1957-1958. But DSLs have become more popular in recent times. There are two main types of DSL in use today: internal (also called embedded) and external.

Internal DSLs often take the form of APIs in a GPL, such that the internal DSL is part of, and managed by, a general-purpose language. They are often used in the Ruby and Lisp communities, and are becoming a popular way to create a programming language because they avoid some of the complexities of an external DSL. A large problem with any DSL is creating the language itself, as you need to build and maintain an infrastructure and ecosystem to support it, such as a compiler or interpreter. Basing your DSL on top of a GPL largely negates these issues.

When designing an internal DSL, a host language and its constructs are used to implement the new language. The new language therefore has to be syntactically compatible with the GPL, as it will be compiled by the same compiler or interpreter. Within that constraint, the DSL can be either an extension of the host GPL or a reduction of it.

A DSL can be an extension of the host GPL, such that the abstraction provided by the DSL is available alongside the full host language. The full GPL remains at the user's disposal, so host-language features don't need to be re-implemented in the DSL. This approach does limit the syntax of the DSL, which has to be compromised to fit the syntactic rules of the host language.
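As a concrete sketch of the extension approach, here is a tiny Scala example that adds domain-flavoured duration notation to plain integer literals via an implicit class. The notation itself (`seconds`, `minutes`, `hours` on `Int`) is a hypothetical example of my own:

```scala
// Extending the host language: an implicit class adds
// domain-flavoured methods to plain Int literals.
object DurationDsl {
  implicit class DurationInt(n: Int) {
    def seconds: Int = n
    def minutes: Int = n * 60
    def hours: Int   = n * 3600
  }
}

import DurationDsl._

// Reads like domain notation, but is ordinary Scala, so the full
// host language stays available around it.
val timeout = 2.minutes + 30.seconds
println(timeout) // prints 150
```

Note the trade-off mentioned above: the notation only works because Scala's syntax happens to allow method calls on literals; a host language without that flexibility could not express this DSL.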

When the DSL is a reduction of the host GPL, the new language is specialized to the domain, so it may be important to hide the parts of the host GPL that are not relevant to it. The DSL then ends up being a filtered-down section of the host GPL mixed with the new domain-specific concepts.

My DSL diagram

The other architecture is the external DSL, where the language is parsed independently of any host GPL, essentially following the typical compiler architecture. The language is independent of the rest of the program (XML is often used as a carrier), which usually means more effort to implement. In exchange, an external DSL allows a very custom syntax, in contrast with an internal DSL, so the end user will often find it easier to write, as the language can be closely tailored to the idioms of the domain. A full parser needs to be written, and execution can be implemented through interpretation or code generation. Using an interpreter is often easier, but code generation may be the only option when, for example, runtime performance is important; the generated code is usually in a high-level language. Examples of external DSLs in use today are regular expressions, SQL and XML.
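To show the interpretation route in miniature, here is a toy external DSL whose scripts are plain text, parsed and executed with no dependence on host-language syntax. The command language itself is invented purely for illustration:

```scala
// A toy external DSL, executed by interpretation: each line of the
// script is "<command> <amount>", with its own (trivial) parser.
def run(script: String): Int =
  script.linesIterator
    .map(_.trim)
    .filter(_.nonEmpty)
    .foldLeft(0) { (acc, line) =>
      line.split("\\s+") match {
        case Array("add", n)      => acc + n.toInt
        case Array("subtract", n) => acc - n.toInt
        case _                    => sys.error(s"syntax error: $line")
      }
    }

val result = run(
  """add 10
    |subtract 3
    |add 5""".stripMargin)

println(result) // prints 12
```

A real external DSL would replace the line-splitting with a proper grammar and parser, but the shape is the same: text in, parse, then interpret (or generate code from) the result.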

Internal vs External

Finally, there are language workbenches: specialized IDEs for defining and building DSLs. They allow you to define the abstract syntax and structure of a DSL, provide an editor for writing DSL scripts, and supply a generator that translates the DSL into an executable representation.

This is just a basic introduction to DSLs; I'll be exploring the subject in much more depth in the near future. DSLs are a fuzzy concept, as what you class as a DSL can be extremely broad. In the next post I'll explore what distinguishes a DSL from a framework with a normal command-query API, along with more DSL concepts such as semantic models, and I'll run through some examples.

Article sources:

[1] – Fowler, Martin. Domain-Specific Languages. Pearson Education, 2010.

[2] – Van Deursen, Arie, and Paul Klint. "Domain-Specific Language Design Requires Feature Descriptions." CIT. Journal of Computing and Information Technology 10.1 (2002): 1-17.

[3] – Husselmann, Alwyn V. Data-Parallel Structural Optimisation in Agent-Based Modelling. 2014.

[4] – Van Deursen, Arie, Paul Klint, and Joost Visser. "Domain-Specific Languages: An Annotated Bibliography." Sigplan Notices 35.6 (2000): 26-36.

[5] – Mernik, Marjan, Jan Heering, and Anthony M. Sloane. "When and How to Develop Domain-Specific Languages." ACM Computing Surveys (CSUR) 37.4 (2005): 316-344.

This week, research and the future

It's been a while since I've written anything for this site. A lot has changed since then, and my outlook for the site has changed as well. From now on I will be writing brief, research-based summaries of topics in the computer science research field; more specifically, subjects I have been studying outside my degree. I'm really enthusiastic about this, and I'm aiming to write a weekly post exploring a subject, as well as smaller one-off general software engineering or technology posts. I love working in the research field and trying to push the frontiers of science forward, and I hope this site will come to reflect that enterprise.

I am studying for a bachelor's degree in Computer Science, as well as working as an assistant researcher for the Computer Science department at Hull University in the UK. It is an amazing department, filled with exceptional leadership, researchers and lecturers. What started as a six-week summer internship researching mobile technology, particularly Android development, has developed into an extended research assistantship for the full second year of my degree. It's a magnificent place to work, and I feel exceptionally lucky to be working alongside incredibly intelligent and talented people. Sometimes I can't believe that I get paid to do something I really enjoy.

This week I have been part of Hull University's Science Fair, presenting various pieces of equipment to school children and parents from 7am to 4pm over two days. I presented custom demos for the Oculus Rift that the research department had made; their aim was to research how virtual reality could be used to teach people to operate in dangerous environments. I also presented the Lego Mindstorms robots, which are quite cool pieces of equipment. Even though they are a children's toy, they are fully programmable, which allows us, as software engineers, to rapidly prototype robots for different domains and focus most of our time on the software that runs them. The research group I am part of (CSRG) even made a fully programmable Turing machine out of Lego Mindstorms, which I think is amazing.

Hull Uni Science Fair 2015

Me presenting at Hull Uni Science Fair 2015

First Blog and the Snapdragon 800 with a custom kernel (Really Old Post)

I often bore people to death, mainly when I, from others' point of view, waffle on about technology and science… constantly. I go into far more detail in those areas than any 'normal' person should in conversation, and that is what makes me a geek. So I thought I'd best find a way to voice my thoughts on the world of amazing technology, and I've decided to explore the world of blogs.


Today I've been messing around with flashing a custom kernel for my Galaxy Note 3 (Snapdragon 800 variant), made by Imoseyon on XDA Developers. It has a custom CPU core voltage interface, changeable display controller parameters and CPU overclocking to 2.72GHz, among many other features. Some quick history about me and mobile devices: I've had six Android smartphones and four Android tablets since 2010. I have rooted and experimented with custom ROMs and kernels, and when I have more time I even make my own custom kernels. I do this for fun. Sounds weird, I know, but it's quite fun to research a device's SoC (System on Chip), push it to its limits, and explore the vast plethora of GitHub repositories housing the source code for custom kernels. It's just fun experimenting: discovering what voltages and clock speeds work best for your specific SoC, and testing heat output and performance gain versus increase in clock speed. I'll explain much of the terminology used in this post in future blogs, as the subjects are too big to condense here.

Just a little info on the snapdragon 800:
– up to 2.36 GHz
– ARMv7
– LPDDR3 memory
– 4 KiB + 4 KiB L0 cache
– 16 KiB + 16 KiB L1 cache
– 2 MiB L2 cache
– 4K × 2K UHD video capture and playback
– Up to 21 Megapixel, stereoscopic 3D dual image – signal processor
– USB 2.0 and 3.0
– 28 nm HPm semiconductor process
– Display Controller: MDP 5, 2 RGB, 2 VIG, 2 DMA, 4k
– there are free Linux drivers for Qualcomm's Adreno GPU
– there are free Linux drivers for the Qualcomm Atheros WNICs
– LLVM supports the Qualcomm Hexagon DSP

Why even overclock the Note 3? Isn't 2.3GHz on four cores enough, especially when Android is going to switch its runtime from Dalvik to ART, which should dramatically increase efficiency? (ART is something I'll go into in detail in the future.) It's a good point: only four years ago, Android was running on the first 1GHz single-core ARM Cortex-A8-class SoCs, like the Samsung Hummingbird or the Qualcomm Scorpion S1. It seemed insane to have a 1GHz chip… in a phone. But over time software became bigger and more advanced, and so did the need for faster chips to power it. Today 2.3GHz is overkill for a phone, not to mention that it has four of those crazy 2.3GHz Krait 400 cores. I say again… in a phone (*cough* phablet *cough*). As an experiment, I underclocked it to 1.2GHz on all cores, and the perceivable performance difference was almost unnoticeable. That is, it's unnoticeable at the moment; but the Snapdragon 800 is a brand new chip on the block and has to power a phone for the two years of people's contracts. So it's overpowered now, but ask me again in two years. Which is where overclocking comes in.

Overclocking doesn't just sound awesome; it's useful too. I'll go more in depth later, but the basic idea is to make a computer or component operate faster than the clock frequency specified by the manufacturer, which generally involves running at higher voltages. It's extremely important to note that overclocking produces more heat even at the same voltage, and over-volting increases heat further; on a passively cooled device such as a phone, this matters. Furthermore, you're pushing the device beyond its rated speed, so it might become unstable and crash, or even damage the hardware itself. It's like running: you know how fast you can comfortably run, but if you try, you can probably run faster; you'll just tire much more quickly and might lose your balance and trip, which doesn't really end very well.

How did the Note do? My Note 3 did pretty well. In my testing I loaded up the kernel and went straight into the voltage interface, lowering the voltages for the stock frequencies as well as the overclocked ones to reduce heat output. Nearly all SoCs have a little voltage overhead to ensure stability of the chip; I simply removed that to run at the lowest stable voltages. (It's more complicated than that, but the subject is too large for this post and deserves one of its own.) I lowered the lower frequencies by 25mV, then tackled the higher ones by setting a new voltage and running a stress-test app that puts the CPU cores on full load until I found the voltage at which it crashed. It turns out my SoC has an overhead of about 75mV at the higher frequencies, so I could under-volt 2.3GHz from 1025mV to 950mV, which improves battery life a tiny bit but, more importantly, has a reasonable positive effect in lowering the temperature of the CPU when running at full load at that speed. I was also able to lower the lowest CPU speeds, such as 300MHz, by another 25mV, but I couldn't lower any of the middle frequencies, like 1GHz, further without a crash. Anyhow, I then moved on to 2.5GHz and managed to get it running stable at 1025mV, the same as stock 2.3GHz. This isn't that surprising, as the chip is pretty much just a lower-binned version of the 2.5GHz S801 found in the new Galaxy S5. Then I moved to the dark side, for with great clock speeds comes great responsibility. I slowly jumped to 2.72GHz as my heart skipped a beat. Then it happened…


It was perfectly stable at 2.72ghz. I was surprised but it was also able to run 2.72ghz at 1100mv which is a perfectly safe voltage for the chip. Not only that, heat output wasn’t too bad. I’ll explain a lot about active vs passive cooling in mobile devices in a future post. But with insane clock speed with a low voltage and reasonable heat output is all thanks to the brand spanking new 28nm HP fabrication for the chip and the fantastic micro-architecture called Krait 400. This is Qualcomm custom micro-architecture running the arm v7 ISA. It will take a few days of messing about with the overclocking speeds to fully test the performance gain and I’ll have a dedicated blog post later on detailing how my dear old note 3’s SoC is fairing. Furthermore, I’ll be making a few blog posts exploring more of the terminology I’ve used in this post.