Secure Next-gen Infrastructure

Hardened Security

Hardened Security consolidates mixed-criticality systems onto secure next-gen infrastructure. It adds multiple layers of security protection for a defense-in-depth architecture, and it can power the most demanding mission-critical workloads, mixing real-time operating systems (RTOS) with traditional virtualization workloads, all in one commercial off-the-shelf (COTS) system while saving space, weight, and power (SWaP).


What is it?

  • Partitioning and isolation of shared resources, including cache, cores, memory, and devices in the virtualized environment (a core-pinning sketch follows this list).
  • Cryptographic ID support for attestation and encrypted communication, including in-line memory encryption.
  • Isolation techniques to create more runtime security domains within a trusted virtualization environment.
  • Hardware-enforced firewalling to separate sensitive data from untrusted workloads and provide more deterministic workload performance for multi-core Intel processors. 
  • Resistance to unauthorized modifications with advanced security controls to manage virtual machines within a trusted virtualization environment.
  • Hardware-encrypted memory protection.
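
To illustrate the core-partitioning idea in the first bullet, here is a minimal sketch, assuming a Linux host where a pair of cores has been reserved for real-time work (for example via the isolcpus kernel parameter). It pins a latency-sensitive process to those isolated cores so it does not contend with general-purpose workloads; the core numbers are hypothetical placeholders, not values from the product.

```python
# Minimal sketch: pin a latency-sensitive process to isolated CPU cores on Linux.
# Assumes cores 2 and 3 were reserved for real-time work (e.g. isolcpus=2,3);
# the core numbers are hypothetical placeholders.
import os

ISOLATED_CORES = {2, 3}  # hypothetical set of cores reserved for this workload

def pin_to_isolated_cores() -> None:
    """Restrict the current process to the reserved cores."""
    os.sched_setaffinity(0, ISOLATED_CORES)  # pid 0 means the calling process
    print(f"Now restricted to CPUs: {sorted(os.sched_getaffinity(0))}")

if __name__ == "__main__":
    pin_to_isolated_cores()
    # ... run the real-time portion of the workload here ...
```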

Hardened Security gives you the ability to run mixed-criticality workloads, with traditional virtualization (Windows and Linux) running alongside real-time operating systems on the same platform. This capability extends security and enhanced QoS to all workloads, including containers and Kubernetes. A single platform that delivers enhanced QoS with proven deterministic performance is the optimal platform for running emerging edge workloads such as 5G, AI/ML, HPC, and MLS.
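
One concrete way the enhanced-QoS point shows up for containerized workloads is Kubernetes' Guaranteed QoS class. The sketch below is not taken from any LM HS documentation; it simply generates a pod manifest whose resource requests equal its limits, which is the condition Kubernetes uses to assign that class. The image name and resource figures are hypothetical placeholders.

```python
# Minimal sketch: a pod manifest that lands in Kubernetes' "Guaranteed" QoS class.
# Requests must equal limits for every container; all values below are placeholders.
import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "rt-inference"},
    "spec": {
        "containers": [
            {
                "name": "worker",
                "image": "example.com/rt-inference:latest",  # hypothetical image
                "resources": {
                    # requests == limits => Guaranteed QoS class
                    "requests": {"cpu": "4", "memory": "8Gi"},
                    "limits": {"cpu": "4", "memory": "8Gi"},
                },
            }
        ]
    },
}

# Kubernetes accepts JSON manifests, e.g. `kubectl apply -f pod.json`.
print(json.dumps(pod, indent=2))
```

On nodes running the kubelet's static CPU Manager policy, a Guaranteed pod with integer CPU requests is also granted exclusive cores, which complements the core-isolation features described above.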

Upcoming blog posts will cover this in more depth, but here are some of the workload characteristics of 5G and AI/ML workloads:

5G:

A next-gen platform must provide the features and capabilities required for open radio access networks (open RAN). Requirements imposed by open RAN deployments include:

  • Real-time kernels
  • Timing and synchronization
  • Hardware and network acceleration
  • CPU isolation and NUMA awareness
  • Single root I/O virtualization (SR-IOV) and virtual functions (VFs)

LM HS provides the guaranteed determinism, performance, and flexibility that 5G and next-G workloads require.
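
As a concrete illustration of the SR-IOV requirement, here is a minimal sketch, assuming a Linux host, that reads the per-device virtual-function counters the kernel exposes under sysfs. The interface name is a hypothetical placeholder for whatever SR-IOV-capable NIC is present.

```python
# Minimal sketch: inspect SR-IOV virtual-function (VF) support on a Linux host.
# The kernel exposes per-device counters under /sys/class/net/<iface>/device/.
# "eth0" is a hypothetical interface name; substitute your SR-IOV-capable NIC.
from pathlib import Path

def sriov_vf_counts(iface: str = "eth0") -> tuple[int, int]:
    """Return (configured VFs, maximum supported VFs) for a network interface."""
    device = Path("/sys/class/net") / iface / "device"
    num_vfs = int((device / "sriov_numvfs").read_text())
    total_vfs = int((device / "sriov_totalvfs").read_text())
    return num_vfs, total_vfs

if __name__ == "__main__":
    try:
        configured, maximum = sriov_vf_counts("eth0")
        print(f"eth0: {configured} VFs configured, {maximum} supported")
    except FileNotFoundError:
        print("eth0 does not expose SR-IOV attributes (or is not present)")
```

Writing a count into sriov_numvfs (as root) is the usual way VFs are created before being passed through to a VM or container.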


AI/ML: 

Inferencing at the edge is the stage in a deep learning workflow where a neural network labels data it hasn't seen before, for example to identify an object or a person. Latency is critical to edge inference; consider how catastrophic added latency would be for a self-driving car. LM HS provides the guaranteed determinism that edge inference workloads need and supports the full range of target devices, from CPU and GPU to VPU and FPGA.
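
Because determinism is about tail latency rather than just averages, a useful sanity check for an edge inference workload is to measure per-inference latency percentiles. The sketch below uses a stand-in matrix-multiply "model" purely for illustration; in practice you would replace infer() with a call into your actual inference runtime, and the sizes and iteration counts are arbitrary.

```python
# Minimal sketch: measure per-inference latency percentiles for an edge workload.
# The "model" here is a stand-in matrix multiply; replace infer() with a call
# into your real inference runtime. All sizes and iteration counts are arbitrary.
import statistics
import time

import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((512, 512)).astype(np.float32)

def infer(batch: np.ndarray) -> np.ndarray:
    """Stand-in for a real inference call (e.g. an accelerator runtime)."""
    return batch @ weights

def latency_profile(iterations: int = 1000) -> None:
    batch = rng.standard_normal((1, 512)).astype(np.float32)
    infer(batch)  # warm-up so one-time setup cost is not counted
    samples_ms = []
    for _ in range(iterations):
        start = time.perf_counter()
        infer(batch)
        samples_ms.append((time.perf_counter() - start) * 1000.0)
    samples_ms.sort()
    p50 = statistics.median(samples_ms)
    p99 = samples_ms[int(0.99 * len(samples_ms)) - 1]
    print(f"p50 = {p50:.3f} ms, p99 = {p99:.3f} ms, max = {samples_ms[-1]:.3f} ms")

if __name__ == "__main__":
    latency_profile()
```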



