Food For Tech | Robust and Scalable Data Solutions / Part 1
Author: Benito van Breugel
Are you on a data journey within your organization? Then make sure to focus on building a scalable, flexible, and robust data platform. As we are living in 2020, there are hundreds of tools, packages, and solutions available that enable an organization to use data effectively. Picking whatever looks best at the moment, however, is not always the best choice for the long term. As data grows exponentially and technologies evolve every year, it becomes ever more important to design for scalability while still serving your current needs and data domains. The core business of Food For Analytics is building scalable, flexible, and robust data platforms. As presented in our previous blog, we enable this, amongst other innovative components, via our meta-driven tabular model. Let's look at this from a technical architecture perspective.
A data platform that enables a Data & Analytics solution should always have a central data hub: in business intelligence this is known as a data warehouse, while in the world of big data it is called a data lake. In either case, this central data hub should have an architecture that is scalable and flexible while remaining robust. Food For Analytics provides such a central data hub, automatically generated from your requirements, whilst adhering to these three key principles. They are explained below; our next Food For Tech blog will focus on the platform and application architecture.
Our Food For Analytics meta-layer allows us to generate and deliver a fully centralized data hub, significantly accelerating your organization's time to market. Together with our meta-driven tabular model and the right business definitions, we create a data pipeline from source to dashboard quickly and effectively. Once the scope and business definitions are in place, an end-to-end data pipeline can be operational within a day.

Simply put, we have automated repetitive development tasks to the highest possible degree, from the source layer to the consumption layer and everything in between. The business flow depicted on the left highlights the steps of the data pipeline. This gives your organization precious time back to focus on what really matters: delivering valuable business insights.
For the demonstration in this blog we take the Microsoft AdventureWorks 2017 database as a source, which can be downloaded here. Based on the sales order domain, a defined set of tables is selected for the demo. After analyzing this input, the following consumption model is generated automatically.

If you are already familiar with BIML, you will know that it can significantly shorten the data pipeline development cycle in a project. It serves as an accelerator for creating multiple data pipelines according to a predefined flow. For the FFA AdventureWorks demo, once the scope is defined and the required metadata is added to the FFA meta-layer, we only need to generate and run the data pipelines.
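The accelerator idea behind this kind of metadata-driven generation can be illustrated with a minimal sketch. Note that this is a hypothetical template in Python, not the actual FFA framework or BIML itself (which generates SSIS packages from XML); the table names, layers, and template are illustrative assumptions:

```python
# Minimal sketch of metadata-driven pipeline generation (illustrative only;
# the real framework uses BIML rather than this hypothetical template).

LOAD_TEMPLATE = (
    "INSERT INTO raw.{table} ({columns})\n"
    "SELECT {columns} FROM source.{table};"
)

# Hypothetical scope definition: tables from the sales order domain.
scope = [
    {"table": "SalesOrderHeader",
     "columns": ["SalesOrderID", "OrderDate", "CustomerID"]},
    {"table": "SalesOrderDetail",
     "columns": ["SalesOrderDetailID", "SalesOrderID", "OrderQty"]},
]

def generate_load_statements(scope):
    """Render one load statement per in-scope table from its metadata."""
    return [
        LOAD_TEMPLATE.format(table=t["table"], columns=", ".join(t["columns"]))
        for t in scope
    ]

for stmt in generate_load_statements(scope):
    print(stmt)
```

Adding a new table to the scope is then a metadata change, not a development task, which is the essence of the accelerator pattern.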
This allows us to automatically generate a raw layer, a historical hub layer, and a consumption layer that matches the model depicted in the figure above. To make the consumption layer serve the business, additional business rules can be added to the FFA meta-layer; these are included when the model is generated and deployed. Whilst the historical layer follows a sixth normal form (6NF) architecture, the consumption layer follows a dimensional model architecture.
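To make the distinction between the two modelling styles concrete, here is a minimal, hypothetical sketch of one source record stored both ways. The table and column names are illustrative assumptions, not the actual generated model:

```python
# Illustrative sketch: a single source record stored two ways.
# Names are hypothetical; the actual generated model depends on the metadata.

source_row = {"ProductID": 776, "Name": "Mountain-100 Black", "Color": "Black"}

# Historical hub layer (6NF): one narrow table per attribute, each row
# timestamped so every change can be kept as history.
def to_6nf(row, key, load_ts):
    tables = {}
    for attr, value in row.items():
        if attr == key:
            continue
        tables[f"Product_{attr}"] = [
            {key: row[key], attr: value, "valid_from": load_ts}
        ]
    return tables

# Consumption layer (dimensional): the latest attribute values joined back
# into one wide dimension row, ready for reporting tools.
def to_dimension(tables, key):
    dim = {}
    for rows in tables.values():
        latest = rows[-1]  # most recent version of this attribute
        dim[key] = latest[key]
        dim.update({k: v for k, v in latest.items()
                    if k not in (key, "valid_from")})
    return dim

hub = to_6nf(source_row, "ProductID", "2020-01-01")
print(to_dimension(hub, "ProductID"))
```

The 6NF shape makes change tracking cheap (a change touches one narrow table), while the dimensional shape makes querying fast and intuitive, which is why each layer uses the style suited to its job.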
The FFA meta-layer is the cornerstone of the FFA Automation Framework. It contains all the metadata required to generate a complete data pipeline end-to-end: information on data sources, data objects, business logic and, above all, the relationships that tie it all together.
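As a rough sketch of the kinds of entities such a meta-layer records, consider the following. The actual FFA meta-layer schema is not public, so every class and field name here is an assumption for illustration only:

```python
# Hypothetical sketch of meta-layer entities; names and fields are
# assumptions, not the actual FFA meta-layer schema.
from dataclasses import dataclass, field

@dataclass
class DataSource:
    name: str         # e.g. "AdventureWorks2017"
    connection: str   # connection reference

@dataclass
class DataObject:
    source: str       # owning data source name
    name: str         # table or view name
    key_columns: list = field(default_factory=list)

@dataclass
class BusinessRule:
    target: str       # data object the rule applies to
    expression: str   # e.g. a derived-column definition

# Relationships tie it together: a pipeline is generated per data object,
# applying any rules registered against it.
src = DataSource("AdventureWorks2017", "ref:adventureworks-connection")
obj = DataObject(src.name, "SalesOrderHeader", ["SalesOrderID"])
rule = BusinessRule(obj.name, "TotalDue = SubTotal + TaxAmt + Freight")
print(obj.name, "->", rule.expression)
```

Because generation reads only these records, adding a source, object, or rule is a data entry in the meta-layer rather than hand-written pipeline code.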

Using this concept drastically saves time when new data domains, requests, and definitions need to be added to your data platform, as it always follows the same scalable, flexible, and robust structure. Food For Analytics can apply this concept both in the cloud and on-premises. It also helps provide a solid basis for any data science requirements. Historically, data warehousing on local SQL Server systems is sometimes known as a "traditional" BI solution. Given today's possibilities, however, this is not always the best option for an organization. Nowadays there are many open source alternatives that work with multiple cloud vendors and can enable analytical capabilities such as fraud detection, predictive analytics, and operational performance management at scale. Being ready for this requires a scalable data platform, which we will elaborate on in our next Food For Tech blog.
Interested in a full demo? Feel free to contact us. Stay tuned!