US startup unveils AI supercomputer OMNIA the size of a carry-on

ODINN's OMNIA: The Carry-On Supercomputer Transforming Localized AI Processing

By Maxine Shaw

Imagine deploying an AI supercomputer in just minutes, fitting seamlessly into a conference room or hospital lab. That scenario has become a reality with ODINN's OMNIA, a groundbreaking device from a California startup that condenses the power of a data center into a carry-on-sized package.

As industries worldwide strive to harness AI's potential, the demand for localized, secure data processing has never been more urgent. With the launch of OMNIA at CES 2026, ODINN aims to alleviate the burdens of constructing expansive data centers, enabling organizations to operate high-performance AI systems on-site while ensuring data privacy and security. This innovative deployment model could transform how companies manage sensitive information across the defense, healthcare, and financial sectors.

The Need for Localized AI Processing

Every industry is racing to integrate AI capabilities into its operations, yet significant obstacles remain. Traditional cloud infrastructures often require sending sensitive data, such as healthcare records or financial transactions, to remote servers, raising privacy concerns for organizations that handle confidential information.

In sectors like defense and healthcare, delays from lengthy data transfer times and the security risks of cloud-based solutions create a strong case for localized processing. As regulations become increasingly stringent, local supercomputing options can help mitigate these risks.

Introducing OMNIA: Specifications and Unique Features

The OMNIA AI supercomputer challenges conventional size requirements for data centers with its compact design, allowing setup in minutes rather than months. With computing capabilities on par with traditional systems, OMNIA integrates CPUs, GPUs, and substantial memory into its portable form factor. This is achieved through innovative engineering, including a proprietary closed-loop cooling system that enables efficient operation without additional cooling infrastructure.

OMNIA also supports standard power and network connections, facilitating deployment in various settings without the need for specially prepared server rooms. It addresses today’s operational needs, where organizations prioritize not just performance but also the rapid deployment of AI capabilities.

Software-Focused Approach: NeuroEdge Integration

Infinity Cube: Scalability Through Modular Clusters

To address the processing power limitations of a single OMNIA unit, ODINN provides an expansion option known as the Infinity Cube. This modular design allows multiple OMNIA systems to be housed within a single structure, enhancing processing capabilities while simplifying installation. Each unit within the Cube can operate independently, enabling users to scale their AI operations without downtime as demand grows.
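
To make the scaling idea concrete, here is a minimal Python sketch of the pattern described above: independent units joining a shared pool while work keeps flowing. ODINN has not published an API for OMNIA or the Infinity Cube, so every name in this sketch (ComputeUnit, Cluster, register_unit, the job labels) is hypothetical and purely illustrative.

```python
# Hypothetical sketch of "scale without downtime": new units can be
# registered at runtime while existing units keep serving jobs.
# None of these names come from ODINN's software.

from dataclasses import dataclass, field


@dataclass
class ComputeUnit:
    """One stand-alone unit (e.g., a single OMNIA box) in the cluster."""
    name: str
    jobs: list[str] = field(default_factory=list)

    def submit(self, job: str) -> None:
        self.jobs.append(job)


class Cluster:
    """A pool of independent units; units can join without interrupting work."""

    def __init__(self) -> None:
        self.units: list[ComputeUnit] = []

    def register_unit(self, unit: ComputeUnit) -> None:
        # Adding a unit does not pause or drain the units already running.
        self.units.append(unit)

    def submit(self, job: str) -> None:
        if not self.units:
            raise RuntimeError("no compute units registered")
        # Place the job on the least-loaded unit currently in the pool.
        target = min(self.units, key=lambda u: len(u.jobs))
        target.submit(job)


if __name__ == "__main__":
    cluster = Cluster()
    cluster.register_unit(ComputeUnit("omnia-1"))
    cluster.submit("inference-batch-001")

    # Demand grows: plug in a second unit without touching the first.
    cluster.register_unit(ComputeUnit("omnia-2"))
    cluster.submit("inference-batch-002")

    for unit in cluster.units:
        print(unit.name, unit.jobs)
```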

Constraints and Tradeoffs

  • Limited data processing capacity compared to traditional data centers
  • Dependency on local power and network infrastructure for optimal performance
  • Initial learning curve for effective software integration

Verdict

ODINN's OMNIA represents a pragmatic leap in localized AI processing for industries requiring quick, secure access to powerful computational resources.

This innovation allows organizations to start small with a single unit and gradually expand their capacity as needed, eliminating complex upgrades or rebuilds and preserving operational continuity.
