Mendix Data Sets and Reporting revisited in 2019

When I first began using Mendix tools and creating Rapid App Dev projects, I was coming from a BI/Analytics career. I was very comfortable with SQL and immediately gravitated towards using it in Mendix as OQL Data Sets. I wrote a few other blog posts on here (just search the tag “OQL”) if you’d like to view some write-ups. Now we are up to v8.2 as of this writing, and OQL/Java data sources still exist, as do the reports. Do they still make sense?

I recently created a Proof of Concept (POC) for a customer who asked for some basic aggregation reports. Because of the projected data volume should this POC become a Pilot and then a fully-fledged production app, I decided to write the reports using OQL data sets. Let’s explore a bit of my thinking:

The data volume is very large

Whenever you are building an app where the data volume is significant, performance can become an issue for any platform, Mendix included. After all, Mendix is built on Java, so the characteristics of Java and object-oriented design, good and bad, come with it. One decision point when querying large sets of data is how large those sets are and what you want to do with them. If the only thing you want to do is report on the data (not select a record or list of records and “do” something from there), then an OQL query makes logical sense. Databases are designed to capture and retrieve data very efficiently, and writing a report in the database’s own language instead of in the runtime forces the database to do all of the logic. As a general rule, the more logic you can push down to heavy-lifting tools like databases, the more efficient your design, with plenty of exceptions (client-side logic for browser and mobile can be even more efficient by removing network hops, for example). So if I’m going to be handling hundreds of thousands or millions of records, I’m going to look at processing them via a database query rather than a microflow.
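To make that concrete, here is a hedged sketch of what such a pushed-down aggregation could look like as an OQL data set. The Status and Total attributes on the Order entity are illustrative assumptions, not from the actual POC:

SELECT
	O.Status as STATUS
	,count(*) as NumOfOrders
	,sum(O.Total) as TotalRevenue
FROM MyFirstModule.Orders O
GROUP BY O.Status

The database computes the entire rollup and returns only a handful of rows, instead of the runtime retrieving every Order object into memory and looping over it in a microflow.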

The data is local to the database

OQL data sets query the application database. OQL isn’t a CRUD tool for interacting with third-party databases; there are other modules for that (Database Connector, for example). If you are storing a large amount of data in your app, chances are your reports will be well served by OQL reports.

Security at the row-level isn’t needed

This is a big one. When you create an OQL query, the module/project security is largely bypassed, with the exception of which user roles are allowed to view and run the report. You can grant or deny access to the data set’s parameters on a yes/no basis, but row-level security is not enforced. To confirm this, I created a simple project with a Customer and an Order entity, with Orders related to Customers. On the Order entity, I set up an access rule constraint for my ‘limitedUser’ role so that it could only see Customers with a Department of ‘2’. Looks like this:

When I look at a Data Grid with that security applied, I get the limit that I expect, as follows:

But if I run my OQL query to count Orders by Customer and view it as ‘limitedUser’, I get all of the data:

So as you can see, the ‘limitedUser’ is seeing Jenn and Jim even though the security is set up to restrict that. That’s because OQL doesn’t consider module security, giving the data set author full control over what to show. Personally, I wish there were an option to enforce row-level security, but that would come with a performance hit, and at that point you might as well just use a microflow/data grid with an XPath constraint enforced.
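If a data set does need to respect a restriction, one workaround is to write it into the OQL yourself. This sketch assumes the Department attribute from my example is stored as a string and simply hard-codes the same filter the XPath constraint applied:

SELECT
	C.Name as NAME
	,count(*) as NumOfOrders
FROM MyFirstModule.Orders O
	join O/MyFirstModule.Orders_Customers/MyFirstModule.Customers C
WHERE C.Department = '2'
GROUP BY C.Name

This is not real security, of course; it’s the data set author doing the restricting, which is exactly the point above.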


I have other blogs about the features, but one thing worth pointing out is that ctrl+spacebar works in the OQL editor as well. For example, look at the code below:

SELECT
	C.Name as NAME
	,count(*) as NumOfOrders
FROM MyFirstModule.Orders O
	join O/MyFirstModule.Orders_Customers/MyFirstModule.Customers C
GROUP BY C.Name

After declaring my Orders table with the alias “O”, I was able to join my Customers table “C” by typing the association path just as in XPath, using ctrl+spacebar to find the pieces. I typed O/, a helper menu came up and I selected the association, then another / to get to the object, and so on. Additionally, when I typed C.Name in the GROUP BY, the attributes appeared in a helper menu after I typed C. and I could choose from them. That makes it a lot easier than formal SQL, where you would first join to the two-column junction table (CustomersOrders) and then to the related objects.
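For comparison, here is a rough sketch of the same report in traditional SQL, assuming a hypothetical two-column junction table CustomersOrders with OrderID and CustomerID columns (the table and column names are illustrative); notice the explicit extra join that the OQL association path avoids:

SELECT
	C.Name as NAME
	,count(*) as NumOfOrders
FROM Orders O
	join CustomersOrders CO on CO.OrderID = O.ID
	join Customers C on C.ID = CO.CustomerID
GROUP BY C.Name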

The OQL App Store module

The ability to quickly retrieve data with an OQL data set, and the things you need to be aware of, have been outlined above. If you have long-running jobs that loop through a large number of records, the OQL module helps greatly. Not only can you specify all of the parameters you need through a standard process (e.g. strings for your WHERE clause), but you can set limits and offsets as well. The performance difference is night and day. I don’t have an example at my fingertips with a large data set to prove it, but in our experience we have used this many times to improve the performance of workflows.
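As a sketch of how I typically use it (the parameter name, entity, and attributes are illustrative, and the exact action signature depends on the module version), the OQL string passed to the module’s Java action can reference named parameters, while the amount and offset are supplied as action arguments for paging:

SELECT
	O.OrderNumber as ORDERNUMBER
	,O.Total as TOTAL
FROM MyFirstModule.Orders O
WHERE O.OrderDate >= $StartDate

A batch microflow can then loop, incrementing the offset on each iteration, so the job processes records in pages instead of retrieving everything at once.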

What else should you know?

The version 8 documentation clearly states that you can’t use data sets and the report grid in React Native apps. That makes sense, of course, but it is something to be aware of. There are other techniques for speeding up performance when working with a React Native app.

I know that to some, working with Data Sets and OQL seems archaic, but I assure you it is anything but. For those who leverage it correctly, it helps deliver fast and accurate results, with the caveats listed above, provided you treat it like an extension to the platform. Mendix developers aren’t necessarily SQL/OQL developers, so if you create a project in Mendix, you have to account for the skills of the developers on your project teams. Just because it might make more sense to do something in a Java action or an OQL statement doesn’t mean you should if you are the only one who can support it! I hope this helps some of you take a second look at an often-overlooked feature in Mendix, as it does still make sense!

If you haven’t given Mendix a trial yet, check it out!