Can you code in Java? Many wouldn’t hesitate to nod in reply to this question, yet when they have to build something like a sales automation system, they get stuck...
September 24, 2019
The reason is that although they know exactly what a class or inheritance is, when they listen to the domain expert explain what the software should do, they don’t fully understand the explanation. For the domain expert and the developer to understand each other, they need a shared domain-specific language (DSL) through which they can communicate the system requirements to each other.
DSLs are necessary for business and software developers to collectively produce a functional IT system. They are languages that allow analysts less well-versed in IT to describe their own thought processes and their demands related to the software to be developed. Compared to general-purpose languages such as Python or Java, a DSL describes a far narrower world, conceptualizing far fewer things. However, it can do this in a far more precise manner. DSLs are also known as modelling languages; they allow us to produce the source code of our IT system if we are capable of describing our conceptual system in the DSL.
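To make this concrete, here is a minimal sketch of what an internal DSL for a sales-automation domain might look like when embedded in Java as a fluent API. All names (`SalesDsl`, `stage`, the pipeline stages) are illustrative assumptions, not part of any real product:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of an internal DSL for a hypothetical sales-automation
// domain. The narrow vocabulary (stages of a pipeline) is all the language
// can express -- but it expresses that precisely.
public class SalesDsl {
    private final List<String> stages = new ArrayList<>();

    // Fluent builder method: one "word" of the domain language.
    public SalesDsl stage(String name) {
        stages.add(name);
        return this;
    }

    public List<String> stages() {
        return stages;
    }

    public static void main(String[] args) {
        // An analyst-readable description of a sales pipeline:
        SalesDsl pipeline = new SalesDsl()
                .stage("Lead")
                .stage("Qualified")
                .stage("Closed");
        System.out.println(pipeline.stages());
    }
}
```

The point is not the code itself but the register: a domain expert can read and validate the pipeline description without knowing what a class or a builder is.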
Mona Lisa, the model
A model is nothing more than a simplified abstraction of the world.
A model’s language is referred to as a metamodel, whilst the language used to conceptualize the model’s language is known as a meta-metamodel. In order to create DSL metamodels, we need an environment in which a DSL can be described. This in turn requires a level of abstraction, a linguistic form, in which a DSL can be conceptualized.
Let’s look at an example. To describe a musical track, we need a suitable language covering the world of musical notes, and a modelling environment where we can conceptualize what the description of a piece of music looks like. Once this is accomplished, the musical track can be played in the real world.
Things become more complicated when it comes to paintings, as it isn’t easy to determine what kind of meta-language is suitable for describing a work of art. Nevertheless, AI is now capable of describing on the meta-level, for example, what a medieval painting looks like, and is even capable of displaying it. This is true to the extent that last year, a painting created with artificial intelligence was auctioned off for $430,000, one that used the Mona Lisa as its metamodel. Even though we don’t know what the painting’s metamodel was like, the AI was capable of loading it and using it to regenerate a painting.
The general metamodelling environment consists of four levels. The lowest level, M0, consists of real things, such as a musical track. Level M1 is the model of those things, i.e. the software. On level M2, we describe the features of the software found on level M1; this is therefore the level of metamodels. Finally, M3 is the world of meta-metamodels: the linguistic level that allows the conceptualization of modelling languages.
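As a rough, hedged analogy (not a strict MOF mapping), the meta-levels can be glimpsed in plain Java reflection: an object sits at the instance level, its class is the model describing it, and `java.lang.Class`, the construct that describes classes themselves, plays the role of the level above:

```java
// Illustrative analogy of the meta-levels using Java reflection.
// Track is a model element describing real musical tracks; java.lang.Class
// is the language-level construct that describes what a class looks like.
public class MetaLevels {
    static class Track {
        final String title;
        Track(String title) { this.title = title; }
    }

    public static void main(String[] args) {
        Track track = new Track("Hey Jude");   // an instance
        Class<?> model = track.getClass();     // the model: Track
        Class<?> meta = model.getClass();      // the level describing classes
        System.out.println(model.getSimpleName()); // Track
        System.out.println(meta.getName());        // java.lang.Class
    }
}
```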
Modelling space is an architecture defined by a certain meta-metamodel. Its highest level, M3, features the Meta Object Facility (MOF), the standard layer in which modelling languages are described. UML* itself sits on M2; beneath it we can build our own model, which will include the entities we would like to represent while running the application.
Let’s take a look at some modelling spaces.
These are parallel, meta-level stacks. Here (see slide 12 of the presentation), on level M3, the MOF level, we find the MOF library, with EBNF on the right: EBNF defines the metamodel of the Java language, i.e. the grammar through which we can describe what our various entities look like. On the other side, we find RDF, with a similar structure.
In order to describe these metamodels, use them, and create low-code platforms, we must provide transitions between the meta-levels. These transitions allow us to create new models with the help of model transformations.
Model transformation is an automated mode of model creation and modification; as a result, errors are reduced and resources are saved. It deals with how a transition can be established between two metamodels, where one describes the problem domain and the other a specific IT realization, e.g. in Java.
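The simplest transformation of this kind is model-to-text: turning a domain description into source code. The sketch below, a plain-Java stand-in for what a dedicated transformation language would do, maps a tiny hypothetical domain model (an entity name plus field names) onto a Java class skeleton; all names are illustrative:

```java
import java.util.List;

// Hedged sketch of a model-to-text transformation: a domain entity
// description is transformed into Java source. Real platforms use
// dedicated transformation languages; this only illustrates the idea.
public class ModelToText {
    // Generate a Java class skeleton from a domain entity description.
    public static String generate(String entity, List<String> fields) {
        StringBuilder src = new StringBuilder("public class " + entity + " {\n");
        for (String field : fields) {
            src.append("    private String ").append(field).append(";\n");
        }
        src.append("}\n");
        return src.toString();
    }

    public static void main(String[] args) {
        System.out.println(generate("Customer", List.of("name", "email")));
    }
}
```

Because the generation step is automated, a change in the domain model propagates to the code without manual, error-prone rewriting, which is exactly the resource saving described above.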
Why is UML suitable for modelling in general?
First, let’s talk about semantics. Semantics describes how a computer executes a program written in a specific language. In order to conceptualize our thought processes, we must provide semantic content for UML. However, UML is not a series of elements to be executed, as its semantics are concerned not with execution but with modelling. This means that UML links abstractions and various specification techniques, and if we’d like to use it to describe a functional system, then we must work with a special, extended version of UML known as FUML**.
The development platform is a level of abstraction that seeks to convey a sort of DSL, a metamodel, to the end user, and as such it limits its use to tasks such as designing databases, business processes, user interfaces or other web applications. Low-code platforms have existed in IT for 15-20 years; Forrester returned to the subject in 2014. Previously, this was known as MDA*** or a similar modelling concept, and now many companies deal with such platforms; the biggest players include OutSystems, Appian and Salesforce.
Because they work with limited DSLs and limited metamodel systems, the types of applications we model can be created much faster.
Two things are definitely needed to produce a low-code platform: a business domain and an architecture.
A low-code platform formulates the business domain for the architecture, mapping the model elements defined in the business domain onto the architecture. This way, the architecture will be capable of running the application.
In order to create a modelling platform, we need a technical space, such as UML, XML****, Java or RDF*****. Once a company selects a technical space, it can be used to house a functional low-code platform. For the second version of our low-code platform we chose Ecore: the meta-object facility layer of the Eclipse world, an M3-level description that is suitable for defining languages. Epsilon is a model transformation language that can be used to transform our various models, and Eclipse’s Sirius is suitable for creating model editors.
How do we define a domain model? This primarily requires creating a business language in which we can describe our concepts and their interrelations. These are subsequently conceptualized in EMF/Ecore.
On the next level, we use a graphic modelling device to describe what the model itself will look like, which can be accomplished with Eclipse Sirius. In the representation, we use displayed elements, shapes, colors and fonts to conceptualize the model.
As the final step, we use model-driven tools to generate, validate, compare and transform these elements.
This is a section of an M3-level model, the level on which we can describe the model elements we have on the M2 level. We can have an EClass, and all our classes can have supertypes, attributes and references. An attribute has a data type, whilst references can point to one another. Through the description of the Ecore metamodel, we can express the kinds of models we’d like to create.
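The reflective structure described above (classes with supertypes, attributes and references) can be sketched in plain Java. This mimics the shape of Ecore’s EClass/EAttribute/EReference but is not the real EMF API; all names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of the Ecore idea: model classes, their supertypes,
// attributes and references represented reflectively as data.
public class MiniEcore {
    static class MClass {
        final String name;
        final List<MClass> supertypes = new ArrayList<>();
        final List<String> attributes = new ArrayList<>();  // names only; types elided
        final List<MClass> references = new ArrayList<>();  // targets of references

        MClass(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        MClass party = new MClass("Party");
        MClass customer = new MClass("Customer");
        customer.supertypes.add(party);      // Customer specializes Party
        customer.attributes.add("name");     // an attribute with a data type
        MClass order = new MClass("Order");
        order.references.add(customer);      // Order references Customer
        System.out.println(order.references.get(0).name);
    }
}
```

Because the metamodel is itself data, generic tooling (generators, validators, comparators) can operate on any model expressed in it, which is what the model-driven tools mentioned earlier rely on.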
At BlackBelt, our self-defined environment is known as JUDO. This is the digital business platform of our firm that serves enterprise purposes and connects three main modelling interfaces, including business modelling, human workflows and document composition. These are the fields through which we can create an ever-increasing number of application platforms with the use of our own descriptions.
We learned a great deal over the first three years of JUDO:
We worked in our own technical space, which I would no longer recommend to anyone. Our data model segment is a set of instruments that can be easily defined, even in Python, yet it’s very hard to describe the behavior of a metamodel. To do this, we need to work with prefabricated elements in Epsilon and Ecore, which weren’t at our disposal in our own technical space.
It is also challenging to describe the UI, as there have been many attempts at modelling it; we have a UI descriptor, yet its coverage is not exhaustive.
What is it that we’d like to achieve in the advanced, 2.0 version?
We have a core generator, a model interpreter and a domain model, and we’d like to install an environment running BPMN. The complete system will consist of a web-based modelling device which can be used to create an ever-increasing number of models. The actual modeled elements can be created and generated through the modeler interface after which they can be published to an optional operating environment.
Should you have any questions or comments related to modelling, or if you would like to create a modelling environment within your company, I recommend looking into the worlds of Ecore and Epsilon. Such an environment is suitable for capturing our repetitive, frequently executed tasks in a metamodelled system; the system can then be used again and again and delivered to our users, who will be capable of describing certain model-level operations in this meta-environment.
*UML: Unified Modeling Language
**FUML: Foundational Unified Modeling Language
***MDA: Model-Driven Architecture
****XML: Extensible Markup Language
*****RDF: Resource Description Framework
Give Judo a Try!
We have a Slack community as well, where you will find up-to-date information and can collaborate with others.
A ton of documentation and video tutorials are available for our members.
You get scores for your activity and you can take home some JUDO goodies!