Revision as of 12:54, 22 October 2025

Why a Reference Architecture?

The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture

The digital future of Europe requires a cohesive and interoperable infrastructure, one that spans cloud and edge environments, integrates AI and data services, and ensures security, sustainability, and sovereignty across borders. To support this, the IPCEI-CIS Reference Architecture (ICRA) defines a common framework for designing, deploying, and operating cloud-edge systems in a federated, multi-provider landscape, representing a strategic instrument within the 8ra Initiative.

Layers


Application Layer
Application Designer: This component enables developers to design, create and customize applications using intuitive interfaces or predefined templates. It facilitates rapid development, integration and delivery of tailored applications using automated CI/CD practices (DevOps). The Application Designer supports describing the application in terms of: the set of application components it is made of and how they are connected (service function chain); the runtime environment each application component requires, including the set of functions/services that support its execution; and the attributes that allow selecting the computing node to host it (hardware requirements, latency, privacy, etc.).
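The application description above can be sketched as a small data structure. This is a hedged illustration only: the field names (`components`, `chain`, `requires`) and values are assumptions, not a normative ICRA schema.

```python
# Illustrative application descriptor: components, their runtime requirements,
# and the service function chain connecting them. All names are hypothetical.
app_descriptor = {
    "name": "video-analytics",
    "components": [
        {"name": "ingest", "runtime": "container",
         "requires": {"cpu": 2, "memory_gi": 4, "latency_ms": 20}},
        {"name": "inference", "runtime": "container",
         "requires": {"gpu": True, "privacy": "eu-only"}},
    ],
    # Service function chain: ordered connections between components.
    "chain": [("ingest", "inference")],
}

def validate(desc):
    """Minimal sanity check: every chain endpoint must be a declared component."""
    names = {c["name"] for c in desc["components"]}
    return all(src in names and dst in names for src, dst in desc["chain"])
```

A designer tool would perform far richer validation (hardware feasibility, latency budgets); this only shows the structural idea.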
Application Packager: The Application Packager supports the packaging of applications for their deployment in the Cloud-Edge continuum. It facilitates the automation of application deployment and update (DevOps, both traditional and AI-assisted), providing an integrated toolkit that enables quick, secure and innovative ways to deploy cloud-aware applications. It also provides tools for automatic verification and validation (CV/CT) of the application and its supply chain before final packaging.
API Gateway: This component provides the interface to invoke and use the applications contained in the catalog. It verifies the user's identity and checks their authorization to use the application before granting access to it.
Application Monitoring: This component tracks application usage and execution, monitors performance, and identifies abnormal behavior and suboptimal use of resources.
Application Catalog: This component implements a directory of the applications and functions that providers have made available. Each entry describes the characteristics of the application and the environment it requires for its execution (runtime, services, hardware characteristics).
Application Accounting and Billing: This component implements the accounting of application usage and provides online charging information for the customer to track application expenditure in real time.
Data Layer
Data Pipelines: This component provides the functionality for data collection, including the connectors to integrate with the data sources and the capabilities for data curation and pre-processing that ensure its quality and readiness for analytics, insight generation, training, modelling or inferencing phases.
Data Modelling: This component provides data cataloguing that enables exposure and discovery at scale, so that data can easily be searched, found and browsed over a distributed environment.
Data Exposure: The Data Exposure component provides customers with standard mechanisms and interfaces for safe and controlled access to data. It includes capabilities for making data offers and contracting data acquisition, for identity checking, and for data access authentication and authorization.
Data Policy Control: Data Policy Control sets the required policies for data sharing, providing a safe, controlled and regulation-compliant environment for data exchange. It allows the data owner to manage the permissions to access its data: who can access it, under which conditions and for which purposes.
Data Catalog: The Data Catalog provides efficient storage and indexing of data to facilitate browsing, searching and finding data over a distributed environment.
Data Federation: Data Federation enables standard mechanisms and interfaces (connectors) for partnering in the provision of datasets, providing a unified view of data catalogs and databases from multiple data providers. This component enables real-time data exchange across companies using data mesh principles, connecting distributed and heterogeneous actors over the cloud-edge continuum, keeping data owners in full control of their data. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Data Federation capabilities should be designed consistently with the other federation capabilities described in this document.
AI Layer
Cloud-Edge Training: This component facilitates the dynamic and adjustable training of AI models across cloud and edge environments, ensuring scalability, reduced latency, and optimized resource utilization.
Cloud-Edge Inference: The Inference components facilitate real-time deployment and execution of trained AI models on edge devices with efficient synchronization with the cloud for updates, monitoring, and enhancements.
Cloud-Edge Agent Manager: The Cloud-Edge Agent Manager enables the deployment and management of agents and agentic workflows on edge and hybrid edge-cloud deployments, creating an agentic mesh.
AI Model Catalog: This component contains trained foundational models: LLMs, SLMs, multimodal LLMs, in multiple languages and managing multiple data types: text, images, video, code, etc. These models provide support for Natural Language Processing (NLP), Machine Translation (MT), speech processing, text analysis, information extraction, summarization or text and speech generation. They can be fine-tuned and adapted to specific use cases, using techniques like RAG, model quantization, pruning or distillation. The catalog contains multilingual and multimodal LLMs tailored to diverse EU languages, capable of understanding and processing diverse data types, including text, images, and multimedia. These models address the scarcity of generative AI solutions in non-English languages, ensuring semantic precision, completeness, and compliance with the AI Act.
Federated Learning: AI workloads can be split across multiple nodes with central orchestration for scalability and efficiency (Distributed AI). AI Federation enables autonomous nodes to collaborate securely, ensuring privacy and sovereignty. Together, they balance task-sharing efficiency with autonomy. In distributed AI training, the AI model is generated at a central point based on the combination of models produced by different training agents distributed across an ecosystem of federated AI service providers or owners. The distributed training agents work locally on local datasets, reducing the need to transfer data to a central location for training. This component makes it possible to use and orchestrate AI resources across multiple providers to collaboratively perform a specific machine learning training task. It leverages a federated network of AI capabilities geographically distributed across the multi-provider Cloud Edge Continuum, enabling seamless resource sharing and scaling while maintaining sovereignty and compliance. It ensures efficient distribution of AI computational workloads, minimizes data movement, and facilitates parallel model training without requiring centralized data aggregation, thus preserving data privacy and autonomy while enhancing overall system performance. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Federated Learning capabilities should be designed consistently with the other federation capabilities described in this document.
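The distributed-training idea can be illustrated with a minimal federated-averaging (FedAvg) sketch in plain Python. The function names and the toy one-dimensional model are assumptions for illustration only; a real federation would exchange full model parameters over secure channels, but the key property shown here holds: only parameters, never raw data, leave a provider.

```python
# Hypothetical FedAvg sketch: each provider trains on its private data,
# and the coordinator only ever sees model parameters.
def local_update(weights, local_data, lr=0.1):
    """Toy 1-D least-squares gradient step on a provider's private data."""
    grad = sum(2 * (weights * x - y) * x for x, y in local_data) / len(local_data)
    return weights - lr * grad

def federated_round(global_w, providers):
    """One round: sites train locally; the coordinator averages the
    resulting parameters, weighted by each site's dataset size."""
    updates = [(local_update(global_w, data), len(data)) for data in providers]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

# Two providers whose private data both follow y = 3x; their raw points
# are never pooled, yet the federated model converges to the shared truth.
providers = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)]]
w = 0.0
for _ in range(50):
    w = federated_round(w, providers)
# After training, w is (numerically) 3.0.
```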
AI Explainability: This Explainable AI component ensures transparency by providing interpretable insights into AI decision-making processes. It supports compliance, accountability, and trust by enabling users and regulators to understand, audit, and validate AI models while respecting privacy and data sovereignty.
Service Orchestration
Service Orchestrator: Service orchestration assures efficient task execution, load balancing and real-time operations. For example, it could communicate with the Multi-Cloud Orchestrator that manages the virtualized infrastructure layer, offering a single unified environment for application development and monitoring. This allows applications and services to be deployed seamlessly across multiple platforms, optimizing resource allocation and reducing operational complexity. Alternatively, the Service Orchestrator may directly or indirectly interact with the underlying capabilities of the cloud platform or virtualization management layer to orchestrate workload execution. The Service Orchestrator automates application and tenant deployment, and lifecycle management processes. By automating workflows (or service function chains), orchestration ensures that services communicate efficiently across the cloud-edge continuum.
Application Performance Management: This component monitors the performance and resource consumption of the application or service and reports deviations from set thresholds or SLAs to the Service Orchestrator, so that it can take actions to recover a state that meets application requirements. It provides a unified view of states, including logging, monitoring, and alerting, for effective real-time application management and validation at runtime.
Application Repository: This component tracks the applications and services that have been deployed and their configuration, the locations where the application and service components are installed and the resources they are consuming.
Service Federation: This component interconnects the Service Orchestrator with those of other federated providers, enabling the deployment and execution of applications (service function chains) across multiple providers in a seamless way, interacting with a single provider. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Service Federation capabilities should be designed consistently with the other federation capabilities described in this document.
Cloud Edge Platform
Multi-Cloud Orchestrator (PaaS): The Multi-Cloud Orchestrator delivers Platform as a Service (PaaS). A PaaS provides a complete application development and deployment environment in the cloud. With PaaS, customers can build, test, deploy, manage, and update applications quickly and efficiently, without worrying about the underlying infrastructure. The MCO receives from the Service Orchestrator a request to deploy (or manage the lifecycle of) a certain application, together with a descriptor (resource model) that defines the state the application needs for its execution (including runtime environment, services, data, application image and other attributes like area of service, performance...). The MCO processes this state and takes actions to set it up and preserve it, by updating, upgrading or removing workloads and services, or by rescaling or releasing resources. The MCO works in close relationship with other components (PIM, VIP, MCM, Serverless Orchestrator) to provide the virtual runtime environment defined for the application: the specific combination of bare metal, virtual machines, containers and serverless mechanisms it has been developed to run on, using the technologies over which it has been tested and certified. The MCO also deploys and manages the lifecycle of essential tools and services such as middleware, development frameworks, databases, and business analytics, enabling organizations to streamline application development and drive innovation. A PaaS managed by the MCO offers scalability, high availability, and reduced time-to-market, allowing developers to focus on coding and application functionality while the MCO handles infrastructure, security, and operational aspects. Based on certain attributes, like area of service and performance, the MCO may select the location(s) where to deploy the workload and the resources (physical and virtual) required at those location(s) to meet the desired state.
This decision on application placement can also follow sustainability and privacy requirements. The MCO deploys the workload once the necessary resources are available, using the Workload Deployment Manager. The MCO also updates and removes workloads, rescaling or releasing the corresponding resources. This MCO description shows a decomposition of the functionality of a cloud-edge continuum workload management solution that may be implemented in many ways, combining or excluding some of its components in order to fit specific sector needs.
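The desired-state behavior described for the MCO can be sketched as a small reconciliation step. This is a hedged illustration with invented names (`reconcile`, the action labels); a real orchestrator would operate on far richer resource models and delegate the resulting actions to components such as the Workload Deployment Manager.

```python
# Hypothetical desired-state reconciliation: compare what the descriptor asks
# for against what is actually running, and emit corrective actions.
def reconcile(desired, actual):
    """Return the actions needed to drive `actual` toward `desired`.
    Both arguments map workload name -> spec (an arbitrary dict)."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("deploy", name))   # missing entirely
        elif actual[name] != spec:
            actions.append(("update", name))   # present but drifted
    for name in actual:
        if name not in desired:
            actions.append(("remove", name))   # no longer wanted
    return actions
```

Running the loop periodically (or on change events) is what lets the orchestrator both set up the requested state and preserve it afterwards.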
Cloud Edge Connectivity Manager: The Cloud Edge Connectivity Manager (CEC) implements and modifies the service function chain, or removes it, totally or partially, following the requests from a Service Orchestrator, to guarantee the connectivity between workloads that will enable the service delivery and the connectivity from the service user to the workloads implementing the service front-end. Connectivity is usually based on overlay and underlay components in each domain crossed by the traffic (e.g. WAN, data centers, etc.). The CEC manages the networking in the data center domain through the virtualization managers (VIM, CISM) or via specific NaaS interfaces. It manages the WAN connectivity using Cloud Networking services (via transport SDN Controllers) for the connection of different computing nodes. In addition, the CEC manages the complexity deriving from the need to ensure consistency between overlay and underlay networking solutions (for example adapting the networking between the data center fabric and the AAN connectivity).
Physical Infrastructure Manager: The Physical Infrastructure Manager (PIM) monitors and manages a pool of physical resources (CPUs, storage, networking), and selects and prepares them (with the corresponding OS and necessary software) to allocate these resources to a virtual machine or container cluster. The PIM provides multiple physical infrastructure management functions, including physical resource provisioning and lifecycle management, physical resource inventory management or physical resource performance management.
Multi-Cluster Manager: The Multi-Cluster Manager (MCM) creates and configures container clusters both over bare metal and over virtual machines after a request from the MCO, offering a single interface to manage infrastructure from multiple providers and with multiple K8s distributions. The MCM provides open connectors/APIs to interact with the resources and K8s distributions offered by different providers (private & public) for cluster creation, configuration and monitoring, and keeps track of their evolution. The MCM may create a K8s cluster on bare metal (cluster nodes are servers) or on a virtualization stack (cluster nodes are VMs), interacting with the PIM or VIP respectively.
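The open-connector idea behind the MCM can be sketched as a common interface with provider-specific implementations behind it. The class and method names here are assumptions for illustration; real connectors would wrap provider APIs (managed K8s services, on-premise distributions) rather than an in-memory stand-in.

```python
# Hypothetical connector abstraction: one interface, many K8s providers.
from abc import ABC, abstractmethod

class ClusterConnector(ABC):
    """Common interface every provider-specific connector implements."""
    @abstractmethod
    def create_cluster(self, name, nodes): ...
    @abstractmethod
    def status(self, name): ...

class FakeProviderConnector(ClusterConnector):
    """In-memory stand-in for a provider-specific connector."""
    def __init__(self):
        self.clusters = {}
    def create_cluster(self, name, nodes):
        self.clusters[name] = {"nodes": nodes, "state": "ready"}
    def status(self, name):
        return self.clusters[name]["state"]

class MultiClusterManager:
    """Routes requests to the right provider through the common interface,
    so callers never deal with provider-specific APIs directly."""
    def __init__(self, connectors):
        self.connectors = connectors  # provider name -> ClusterConnector
    def create(self, provider, name, nodes):
        self.connectors[provider].create_cluster(name, nodes)
    def status(self, provider, name):
        return self.connectors[provider].status(name)
```

Adding support for a new provider or K8s distribution then means writing one new connector class, with no change to MCM callers.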
Virtual Infrastructure Platform Manager: The Virtual Infrastructure Platform Manager (VIP) creates virtual machine clusters across several locations using the resources allocated by the PIM. The VIP is required when the service component to be deployed is a Virtualized Application or a Containerized Application which runs over container clusters that make use of VMs (virtual machines). This component works on infrastructure and technology from different providers, enabling the Cloud Edge continuum to run on a diverse set of virtualization solutions (VIMs, CISMs or any other future virtualization technology).
Workload Deployment Manager: The Workload Deployment Manager (WDM) deploys software package(s) on top of existing cluster(s) following MCO requests. It exposes a single interface to deploy software packages (i.e. via a helm chart or resource model declaration) on any K8s cluster (or alike) based on any distribution. The WDM provides the connectors/APIs to interact with existing clusters in different locations & technologies (K8s distributions) for application deployment and lifecycle management. This component can also deploy software packages directly on virtual machines (IaaS).
Cloud Edge Federation: This component interconnects the Multi-Cloud Orchestrator with those of other federated providers, enabling customers to use cloud edge computing services (IaaS, CaaS, PaaS, Serverless, NaaS...) across multiple providers in a seamless way, interacting with a single provider. This platform federation provides seamless integration and collaboration between multiple cloud platform providers, enabling interoperability, resource sharing, and unified lifecycle management. Shared resources may exist on all layers of the cloud architecture. By adopting standardized protocols and interfaces, platform federation facilitates enhanced scalability, efficiency, and innovation across different cloud environments while maintaining autonomy and security for each participating entity. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Cloud Edge Federation capabilities should be designed consistently with the other federation capabilities described in this document.
Cloud Edge Access Control: This component implements a key security aspect of cloud edge infrastructure management: a role-based access control that ensures proper access rights and security across the infrastructure.
Cloud Edge Resource Repository: This component keeps a record of the resources available in each of the edge locations, the virtualization platforms available and their configuration. The information in this repository helps the Multi-Cloud Orchestrator select the right location(s) to deploy workloads.
Workload Inventory: This component keeps a record of the workloads that have been deployed and their configuration, as well as information about the location and cluster where they have been deployed and the resources they are consuming.
Serverless Orchestrator (FaaS): The Serverless Orchestrator provides serverless capabilities, also known as Function as a Service (FaaS). FaaS is a cloud computing model that allows developers to build and deploy applications in the form of individual functions, which are executed in response to specific events or triggers. This model eliminates the need to manage server infrastructure, enabling developers to focus solely on writing code. Each function runs in a stateless container, automatically scaling with demand and only consuming resources when invoked, leading to cost savings and efficient resource utilization.
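The event-triggered execution model can be sketched in a few lines. The registry and dispatch names below are invented for illustration; real FaaS platforms bind functions to triggers through their own configuration, but the essential contract is the same: a stateless function runs only when its event fires.

```python
# Minimal sketch of the FaaS model: stateless handlers bound to event triggers.
handlers = {}

def on_event(event_type):
    """Decorator registering a stateless function for an event trigger."""
    def register(fn):
        handlers[event_type] = fn
        return fn
    return register

@on_event("image.uploaded")
def make_thumbnail(event):
    # A real function would fetch the object and resize it; here we just
    # show that execution happens only in response to the trigger.
    return f"thumbnail for {event['object']}"

def dispatch(event_type, event):
    """The platform invokes (and meters) the function only per event."""
    return handlers[event_type](event)
```

Because the handler holds no state between invocations, the platform is free to scale instances with demand and bill only for actual executions.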
Virtualization
Hardware Resource Manager (BMaaS): The Hardware Resource Manager component delivers Bare Metal as a Service (BMaaS). BMaaS is an abstraction that provides physical, non-virtualized hardware resources directly to users, offering dedicated servers, storage, and networking components without any virtualization layer. This service allows users to harness the full power of the hardware for their applications, resulting in higher performance, predictable latency, and complete control over the environment. BMaaS is particularly beneficial for workloads that require intensive computation, low-latency networking, or compliance with specific hardware configurations.
Virtual Infrastructure Manager (IaaS): The Virtual Infrastructure Manager (VIM) component provides Infrastructure as a Service (IaaS). IaaS is a cloud computing model that delivers virtualized computing resources, providing essential services such as virtual machines, storage, and networks. Users can provision, scale, and manage these resources dynamically according to their needs, while the cloud provider takes care of maintaining the underlying hardware, networking, and security. This model offers high flexibility, enabling organizations to quickly deploy and run applications and services, test new solutions, and handle varying workloads with ease, ultimately driving innovation and operational efficiency.
Container Infrastructure Service Manager (CaaS): The Container Infrastructure Service Manager (CISM) component provides Container as a Service (CaaS). CaaS is a cloud service model that provides a platform allowing users to manage and deploy containerized applications and workloads. By leveraging container orchestration tools such as Kubernetes, CaaS facilitates the automation of container deployment, scaling, and operations, ensuring high availability and performance. This model abstracts the underlying infrastructure complexities, enabling developers and IT teams to focus on application and service development and deployment without worrying about the maintenance of the physical or virtual infrastructure.
Virtual Resource Access Control: As in the Cloud Edge Platform layer, this Access Control component implements virtual infrastructure management security: a role-based access control that ensures proper access rights and security for virtual resource management.
Virtual Resource Repository: This component keeps a record of cloud edge sites and of the configuration and availability of virtual resources in each of them (for instance, number of K8s clusters available per site, CPU/memory available per K8s cluster, number of virtual CPUs available to set up new K8s clusters, ...) in order to help take decisions on workload placement.
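A placement decision based on such repository records can be sketched as a simple filter-and-rank step. The record fields (`free_cpu`, `free_memory_gi`) and the tightest-fit heuristic are assumptions for illustration; real placement would also weigh latency, privacy, cost and sustainability attributes.

```python
# Hypothetical placement sketch over virtual-resource repository records.
sites = {
    "edge-a": {"free_cpu": 8,  "free_memory_gi": 16},
    "edge-b": {"free_cpu": 32, "free_memory_gi": 64},
}

def place(workload, sites):
    """Return the site that can host the workload, preferring the tightest
    fit (least free CPU) to reduce fragmentation; None if nothing fits."""
    fits = [name for name, s in sites.items()
            if s["free_cpu"] >= workload["cpu"]
            and s["free_memory_gi"] >= workload["memory_gi"]]
    return min(fits, key=lambda n: sites[n]["free_cpu"], default=None)
```

For a workload needing 4 vCPUs and 8 GiB, both sites qualify, and the tightest-fit rule selects the smaller site so the larger one stays free for bigger workloads.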
Virtualization Compute: Compute resources are fundamental to cloud infrastructure, delivering the computational power required for running applications and services. They facilitate scalable and efficient environments that dynamically adjust to varying workloads, thus enhancing resource utilization and performance while minimizing costs.
Storage: Storage is essential in cloud infrastructure, providing data persistence, management, and accessibility. It includes block storage for databases, object storage for unstructured data, and file storage for shared access applications. Advanced technologies like SSDs and distributed file systems ensure scalability, reliability, and performance.
Networking: Hardware networking resources in a cloud edge location include routers, switches, load balancers, and firewalls. These components form the backbone of data center connectivity and inter-server communication. Network Interface Cards (NICs) in servers enable high-throughput connections to the virtual network. WAN gateways and edge routers extend connectivity to external networks, supporting hybrid cloud and remote access scenarios. All hardware is managed centrally through SDN controllers and scaled dynamically to support edge cloud service demands.
Hardware Infrastructure Manager: A Hardware Infrastructure Manager (also known as a Data Center Infrastructure Management system, DCIM) is a management component designed to monitor, measure, and manage the IT equipment and infrastructure within a cloud edge data center. It encompasses the following key aspects:
Monitoring and Management: it provides real-time monitoring of data center operations, including power usage, cooling efficiency, and physical security. This helps in optimizing the performance and efficiency of the data center.
Documentation and Planning: it maintains detailed documentation of the data center's physical and virtual assets. This includes layout planning, capacity management, and future expansion plans.
Risk Management: by continuously monitoring environmental conditions and equipment status, it helps identify potential risks and mitigate them before they lead to failures.
Integration with IT Systems: it integrates with other IT management systems to provide a holistic view of the data center's operations, facilitating better decision-making and resource allocation.
Sustainability and Compliance: it supports sustainability goals by optimizing energy usage and ensuring compliance with industry standards and regulations.
Hardware Resource Repository: This component keeps a record of cloud edge locations and of the configuration and availability of physical hardware resources in each of them (for instance, number of servers per location, type of servers, type of NIC cards available per location, cost of resources, energy consumption of resources, ...) in order to help take decisions on workload placement and resource lifecycle management.

Mapping of Horizon Europe EUCEI RIAs on the Reference Architecture

The table below provides a schematic mapping of Research and Innovation Actions, as related to the IPCEI-CIS Reference Architecture components.

RIA | Programme | EuroVoc ID | IPCEI-CIS Reference Architecture high level

CODECO | Horizon Europe | natural sciences / computer and information sciences / software | AI Layer; Application layer; Cloud Edge Platform; Data layer; Management; Network Systems, SDN controllers; Physical Cloud Edge Resources; Security and compliance; Service orchestration; Sustainability; Virtualization

COGNIFOG | Horizon Europe | natural sciences / computer and information sciences / internet / internet of things | AI Layer; Application layer; Cloud Edge Platform; Data layer; Management; Network Systems, SDN controllers; Physical Cloud Edge Resources; Security and compliance; Service orchestration; Sustainability; Virtualization

EDGELESS | Horizon Europe | natural sciences / computer and information sciences / internet | AI Layer; Application layer; Cloud Edge Platform; Data layer; Management; Physical Cloud Edge Resources; Physical Network Resources; Security and compliance; Service orchestration; Sustainability; Virtualization

HYPER-AI | Horizon Europe | natural sciences / computer and information sciences / internet / internet of things | AI Layer; Application layer; Cloud Edge Platform; Data layer; Management; Physical Cloud Edge Resources; Physical Network Resources; Security and compliance; Service orchestration; Sustainability; Virtualization