These challenges call for new approaches to integrated water cycle management: advanced metering infrastructure, digital twins, geographic information systems and artificial intelligence are becoming increasingly common in improving the water lifecycle, and will certainly become indispensable in the future.
These new tools share a common problem: they need to retrieve and manage large volumes of data originating from highly heterogeneous systems and environments. Until now, each tool managed its data autonomously, creating numerous information silos and making it impossible to extract the maximum value from the data. Graphenus was created to solve this problem, providing a platform that unifies the data needs of these tools and defines data spaces that facilitate governance and ensure full interoperability and scalability:
- Graphenus allows the discovery and incorporation of information from any source: measurement systems, APIs, databases, etc.
- Data hosted in Graphenus can scale without practical limits: there is no need to delete historical measurement data, which can easily be reused to build analytical and AI-based models.
- It has distributed processing capabilities to meet both real-time and batch needs.
- It incorporates end-to-end governance capabilities, allowing security and quality policies to be defined at the lowest level of detail.
- Graphenus enables the integrated creation of machine learning models on the data hosted in the system, facilitating their training, publication and updating.
- Graphenus is fully interoperable with other systems thanks to GAIA-X compatibility, allowing data to be shared with private companies or public entities in a fully secure and scalable way.
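The first capability above, incorporating information from heterogeneous sources into one data space, can be illustrated with a short sketch. This is not Graphenus's actual API (which is not shown in the text); the field names, sources and helper functions are hypothetical, and the point is only the pattern: each source gets a small adapter that maps its records into a single common schema.

```python
import csv
import io
import json

# Hypothetical common schema; the real Graphenus data model is not documented here.
COMMON_FIELDS = ("meter_id", "timestamp", "litres")

def from_csv(text):
    """Adapter for a metering-system CSV export (illustrative column names)."""
    return [
        {"meter_id": r["id"], "timestamp": r["ts"], "litres": float(r["volume_l"])}
        for r in csv.DictReader(io.StringIO(text))
    ]

def from_api(payload):
    """Adapter for a JSON API response (illustrative payload shape)."""
    return [
        {"meter_id": m["meter"], "timestamp": m["read_at"], "litres": float(m["l"])}
        for m in json.loads(payload)["measurements"]
    ]

csv_text = "id,ts,volume_l\nM-1,2024-05-01T00:00,132.5\n"
api_text = '{"measurements": [{"meter": "M-2", "read_at": "2024-05-01T00:00", "l": 98.0}]}'

# Records from both sources now share one schema, whatever their origin.
unified = from_csv(csv_text) + from_api(api_text)
```

Once every source is normalised this way, downstream governance, quality and analytics rules can be written once against the common schema instead of once per source.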
In addition, thanks to native integration with Elliot Cloud and its Smart Water solution, the new system dramatically accelerates the development of advanced use cases for water management, enabling the detection of leaks and fraud and the development of digital twins for supply networks, water treatment plants, valves, etc. It also facilitates proactive quality management of drinking and operational water for the fleets and maintenance services integrated in the distribution network, and enables environmental impact assessments as well as smart adduction.
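To make the leak-detection use case concrete, here is a minimal water-balance sketch: if the water entering a district exceeds the sum of metered consumption by more than a tolerated loss ratio, a leak (or fraud) is suspected. The threshold value and the simple balance method are illustrative assumptions, not the algorithm used by the platform.

```python
def leak_suspected(inflow_m3, metered_m3, loss_threshold=0.15):
    """Flag a district when unaccounted-for water exceeds a threshold ratio.

    inflow_m3: water supplied to the district (cubic metres).
    metered_m3: list of metered consumption values in the same period.
    loss_threshold: illustrative tolerance for losses (15% assumed here).
    """
    if inflow_m3 <= 0:
        return False
    loss_ratio = (inflow_m3 - sum(metered_m3)) / inflow_m3
    return loss_ratio > loss_threshold

leak_suspected(1000.0, [400.0, 350.0, 100.0])  # 15% loss: at the threshold, not flagged
leak_suspected(1000.0, [300.0, 300.0, 100.0])  # 30% loss: flagged as a suspected leak
```

In practice the same balance would be computed continuously from the unified meter data, which is exactly where a shared data space pays off.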
Graphenus system integrated into Elliot Cloud platform
Example of functional structure of use cases and relationship to base architecture elements
Graphenus: data at the service of water resources
Data plays a crucial role in water management. A platform such as Graphenus will allow companies and public bodies to transform current management processes completely, improving efficiency and facilitating decision making. Graphenus proposes a functional and technical architecture that covers the needs expressed by companies in the sector for creating data lakes or shared data spaces at very low cost, since it is not built on licensed tools.
"Graphenus provides a platform for defining data spaces that facilitates governance and ensures full interoperability and scalability."
The Graphenus solution model includes a set of tools for the capture, storage, processing, exploitation and consultation of large volumes of data. Integration in Elliot Cloud makes it possible to ingest data from different sources, store them in a reliable and fault-tolerant way, perform complex analytics on them in both batch and streaming (real-time) mode, ensure the persistence of a data model through the creation of databases and tables, and develop predictive and classification models on them, i.e. machine learning processes, for subsequent consultation and exploitation.
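The batch versus streaming distinction mentioned above can be shown with a toy example: computing a mean of meter readings over a whole dataset at once (batch) versus updating it reading by reading with constant memory (streaming). The data and class are illustrative, not part of the platform.

```python
class RunningMean:
    """Streaming-style mean: one update per arriving reading, O(1) state."""

    def __init__(self):
        self.n = 0
        self.total = 0.0

    def update(self, x):
        self.n += 1
        self.total += x

    @property
    def value(self):
        return self.total / self.n if self.n else 0.0

readings = [12.0, 15.5, 11.0, 14.5]  # illustrative flow readings

# Batch mode: all historical data is available at once.
batch_mean = sum(readings) / len(readings)

# Streaming mode: each reading is processed as it arrives.
rm = RunningMean()
for x in readings:
    rm.update(x)

assert abs(batch_mean - rm.value) < 1e-9  # both paths agree on the result
```

Real deployments replace this toy with distributed engines, but the contract is the same: batch jobs recompute over stored history, streaming jobs keep incremental state.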
To meet these needs, the platform encompasses a set of services and tools that allow these tasks to be performed. Most of these tools are built as containers (orchestrated with Docker Swarm) holding only the minimum components needed for their operation, yielding a modular architecture that is easy to deploy, scale and version, and that tolerates failures or crashes of the nodes on which it runs, thus providing a high-availability environment.
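As an illustration of the container orchestration described above, a Docker Swarm deployment is typically described in a stack file like the following sketch. The service and image names are hypothetical; only the Swarm mechanisms (replicas for scaling, restart policies and placement for fault tolerance) reflect what the paragraph describes.

```yaml
# Hypothetical stack file; service and image names are illustrative only.
version: "3.8"
services:
  ingest:
    image: example/ingest-service:latest
    deploy:
      replicas: 2                  # scale ingestion across nodes
      restart_policy:
        condition: on-failure      # restart on crashes for high availability
  storage:
    image: example/storage-service:latest
    deploy:
      placement:
        constraints: ["node.role == worker"]  # pin storage to worker nodes
```

A file like this would be deployed with `docker stack deploy -c stack.yml <stack-name>`, after which Swarm keeps the declared number of replicas running even if a node fails.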
All of these tools are open source and widely used in the Big Data field, most of them belonging to the Apache project (which has a large, very active and collaborative community). They have been configured, customised and adapted to work together, integrated in a container environment across different nodes.