
Beyond the Public Cloud: Gaining a Competitive Edge with a Balanced IT Strategy

Over the last decade, the public cloud has delivered remarkable agility, scalability, and on‑demand economics. It’s no surprise that cloud adoption has become almost universal, with more than 96% of companies using some public cloud services and global cloud spending increasing every year. Yet the “cloud‑first” mantra masks an important reality: not every workload or database belongs in a public cloud environment. 

Cost overruns, data‑sovereignty concerns, and performance requirements are driving a re‑evaluation of where workloads should run. According to a summary of the Barclays CIO Survey, 83% of enterprises plan to repatriate at least some workloads to private clouds or on‑prem infrastructure.

Data Intensity helps clients modernize, migrate, and optimize their environments — and sometimes the best advice is to stay on-prem, in a private cloud, or in a sovereign cloud environment. The key is a balanced, common-sense cloud migration strategy that reflects your current IT needs and scales as the business evolves.

What to Keep On-Prem or in a Private Cloud

Cloud readiness isn’t one-size-fits-all. These four examples show that not everything needs to live entirely in the public cloud.

1. Legacy UNIX workloads and proprietary hardware

Many established enterprises still rely on legacy UNIX platforms, such as HP‑UX, IBM AIX, and Oracle Solaris. These operating systems have a decades‑long reputation for stability and performance in mission‑critical environments. 

However, they run on specialized hardware (RISC, IBM Power, or SPARC) and are deeply integrated with applications written for those platforms. Migrating them to the cloud often means a re‑platforming or full rewrite to Linux/Windows, which is both risky and costly.

This aging proprietary hardware is not directly supported by most public cloud platforms, so third‑party tools are needed to run these workloads, adding extra complexity and new vendors to manage.

Per IBM Power Systems technical guidance, workloads running in the cloud can actually be less resilient than those in a traditional data center: while a Power server failure in the cloud triggers an automatic restart elsewhere, organizations still need to plan their own backup and high‑availability strategies. There are also network traffic and speed challenges when running IBM Power workloads across two different cloud providers—showing that moving proprietary, legacy workloads to the cloud is far from a simple “lift and shift.”

One approach is to refresh on‑premises infrastructure with modern IBM Power or SPARC servers, or to run these workloads in a specialized private cloud. Both options provide continuity, address unsupported hardware, and allow time for a phased migration to Linux and the public cloud. Data Intensity often sees clients take this path when the cost of rewriting applications outweighs immediate cloud benefits.

2. Latency-critical and real-time performance

Where your infrastructure lives profoundly influences application performance. An independent study compared cloud and on‑premises architectures, revealing that:

  • On‑premises deployments typically deliver predictable, low‑latency performance measured in microseconds or milliseconds because servers and users share the same local network.

  • Organizations can fine‑tune hardware—high‑end CPUs, GPUs, or faster storage—for a particular workload and avoid resource contention.

  • In contrast, cloud performance depends on network connectivity; latency and throughput are subject to distance, bandwidth, and multi‑tenancy variability.

The same study cites an example of algorithmic trading applications that often keep servers co‑located with stock exchanges to achieve sub‑millisecond latency and avoid the delays of remote cloud regions. Manufacturing control systems also require real‑time responses that are best served by on‑premises or edge computing.

Industry analyses from 2025 (Megaport; Forbes) echo this point, highlighting that latency and data‑sovereignty concerns are reshaping cloud architecture decisions, particularly in sectors such as finance, healthcare, and energy. These findings reinforce that real‑time workloads are highly sensitive to the physical distance between processing locations and end users, prompting organizations to bring such workloads back on‑prem for faster, more reliable processing.

Proximity to data sources is another factor: processing data closer to where it is generated reduces latency and maximizes efficiency, and the rise of edge computing lets companies position compute resources near critical data flows. For workloads where milliseconds matter—high‑frequency trading, telemetry ingestion, or interactive diagnostic systems—on‑premises, private cloud, or edge deployments are often the superior option. They deliver deterministic performance and avoid the variability inherent in shared networks. 
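
To make that variability concrete, here is a minimal Python sketch of how a team might benchmark candidate locations before deciding where a workload belongs: it samples TCP connection round-trip times to each endpoint and reports median latency, tail latency, and jitter. The hostnames are placeholders, and a real assessment would measure application-level requests rather than bare connections.

```python
import socket
import statistics
import time

def sample_tcp_latency(host: str, port: int, samples: int = 20) -> dict:
    """Sample TCP connect round-trip times (in milliseconds) to host:port."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection established; close immediately
        rtts.append((time.perf_counter() - start) * 1000)
        time.sleep(0.1)  # brief pause so probes don't interfere with each other
    return {
        "host": host,
        "median_ms": round(statistics.median(rtts), 2),
        "p95_ms": round(statistics.quantiles(rtts, n=20)[18], 2),  # 95th percentile
        "jitter_ms": round(statistics.stdev(rtts), 2),  # spread = variability
    }

if __name__ == "__main__":
    # Placeholder endpoints: an on-prem service vs. a cloud-hosted equivalent.
    for host in ("app.internal.example", "app.eu-west-1.example.com"):
        print(sample_tcp_latency(host, 443))
```

A deployment whose median looks acceptable can still fail a real-time workload on jitter alone, which is why deterministic on-prem or edge placement often wins for these systems.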

Data Intensity works with clients to identify latency‑critical applications, design low‑latency on‑prem or edge architectures, and integrate them with cloud‑based analytics or reporting systems. For example, we worked with a financial services client who brought their trade execution engine back on-prem to achieve sub-millisecond latency, while leaving their customer analytics platform in the cloud for scalability.

3. Regulatory, data sovereignty, sovereign cloud, and IP control

Regulations governing data handling and privacy profoundly influence infrastructure choices. A detailed 2026 compliance analysis maps data residency requirements across industries, noting that finance, healthcare, insurance, and government sectors are bound by strict rules dictating where data is stored and who can access it. Laws relating to data sovereignty require certain data (e.g., financial records or patient health information) to reside in specific jurisdictions. When no local public‑cloud region exists, or when cross‑border data flows are restricted, organizations may have a legal obligation to keep systems on-site. Public cloud providers offer region‑specific services and certifications (ISO 27001, SOC 2, PCI‑DSS, HIPAA, etc.), and many businesses find compliance easier with the help of these platforms. 
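
Residency rules like these can also be enforced in code rather than by policy documents alone. The following Python sketch audits storage locations against an approved-jurisdiction list; it assumes AWS and the boto3 SDK purely as an example, and the bucket names and allowed regions are hypothetical, but the same guardrail pattern applies on any platform.

```python
import boto3  # AWS SDK, used here only as an illustrative example

# Jurisdictions where regulated data is permitted to reside (hypothetical policy).
ALLOWED_REGIONS = {"eu-central-1", "eu-west-1"}

def audit_bucket_residency(bucket_names: list[str]) -> list[tuple[str, str]]:
    """Return (bucket, region) pairs that violate the residency policy."""
    s3 = boto3.client("s3")
    violations = []
    for name in bucket_names:
        # AWS returns None as the LocationConstraint for us-east-1.
        location = s3.get_bucket_location(Bucket=name)["LocationConstraint"]
        region = location or "us-east-1"
        if region not in ALLOWED_REGIONS:
            violations.append((name, region))
    return violations

# Example: flag any regulated-data bucket sitting outside the approved regions.
print(audit_bucket_residency(["claims-records-prod", "patient-archive"]))
```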

However, the same compliance analysis notes that on‑prem infrastructure offers a degree of control that public platforms cannot match: organizations can implement custom restrictions, ensure nothing leaves the premises unintentionally, and design audit systems to their exact specifications. For instance, an insurance company may keep its customer database on‑prem to comply with privacy rules while performing cloud analytics on de‑identified data.
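
A minimal sketch of that de-identification step might look like the following Python, which drops direct identifiers and replaces the customer ID with a keyed hash before the data ever leaves the premises. The column names and salt handling are illustrative assumptions, not a compliance-reviewed implementation.

```python
import hashlib
import pandas as pd

PII_COLUMNS = ["name", "email", "street_address"]  # dropped outright (hypothetical schema)
SECRET_SALT = b"stored-and-rotated-on-prem-only"   # must never ship with the exported data

def deidentify(df: pd.DataFrame) -> pd.DataFrame:
    """Return an export-safe copy: direct PII removed, customer IDs pseudonymized."""
    out = df.drop(columns=PII_COLUMNS, errors="ignore").copy()
    # A keyed hash preserves join-ability across extracts without exposing the raw ID.
    out["customer_id"] = out["customer_id"].map(
        lambda v: hashlib.sha256(SECRET_SALT + str(v).encode()).hexdigest()[:16]
    )
    return out

# On-prem: deidentify(customers_df).to_parquet("export/claims.parquet")
# Only the de-identified extract is shipped to the cloud analytics platform.
```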

Data sovereignty is also fueling the trend toward cloud repatriation. Recent CIO surveys have found that around 83% of enterprises plan to shift workloads from the public cloud to private or on‑prem infrastructure, up from 43% in 2021. The same surveys note that some hyperscale providers cannot guarantee data stays within a jurisdiction during transfer, leading organizations to move data back to private or colocation facilities to maintain compliance. For sensitive intellectual property (IP) or classified data, maintaining physical and logical control becomes a non‑negotiable requirement. 

Data Intensity often helps clients interpret regulations, design secure private clouds or on-prem environments, and identify when a sovereign cloud is the right fit. Where regulations demand guaranteed data residency within a jurisdiction, we evaluate sovereign cloud platforms — purpose-built environments operated in-country by vetted local providers — as a distinct path alongside public cloud, private cloud, and on-prem deployments.

4. Cost, vendor lock-in, and operational control

Cost is a major reason organizations may reconsider cloud deployments. According to Uptime Institute, cost is the number‑one concern for data center professionals. While cloud services promise lower upfront capital expenditure, they introduce new variables: cloud costs can accumulate rapidly, egress fees can be unpredictable, and fluctuating workloads may cause budgets to exceed projections. 

Cloud‑economics research consistently shows that a substantial share of enterprise cloud spending is wasted. Flexera’s 2025 State of the Cloud Report estimates that organizations lose 27–32% of their cloud budgets to idle or over‑provisioned resources. On‑prem infrastructure may offer better long‑term cost predictability, resource consolidation, and capacity utilization.
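
One common FinOps starting point is simply finding that idle capacity. The Python sketch below, which assumes AWS with the boto3 SDK, flags EC2 instances whose daily average CPU stayed under a threshold for a week; the 5% cut-off is an illustrative assumption, and equivalent utilization metrics exist on every major platform.

```python
from datetime import datetime, timedelta, timezone
import boto3  # AWS SDK, used here only as an illustrative example

IDLE_CPU_THRESHOLD = 5.0  # average CPU %; an illustrative cut-off, tune per workload
LOOKBACK = timedelta(days=7)

def find_idle_instances(region: str = "us-east-1") -> list[str]:
    """Flag EC2 instances whose daily average CPU never exceeded the threshold."""
    ec2 = boto3.client("ec2", region_name=region)
    cw = boto3.client("cloudwatch", region_name=region)
    end = datetime.now(timezone.utc)
    idle = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                stats = cw.get_metric_statistics(
                    Namespace="AWS/EC2",
                    MetricName="CPUUtilization",
                    Dimensions=[{"Name": "InstanceId", "Value": inst["InstanceId"]}],
                    StartTime=end - LOOKBACK,
                    EndTime=end,
                    Period=86400,            # one datapoint per day
                    Statistics=["Average"],
                )
                averages = [point["Average"] for point in stats["Datapoints"]]
                if averages and max(averages) < IDLE_CPU_THRESHOLD:
                    idle.append(inst["InstanceId"])
    return idle  # candidates for rightsizing, scheduling, or termination
```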

Vendor lock‑in is another significant concern. Relying heavily on a single cloud provider can create dependencies that limit flexibility and negotiating power. If pricing models change or services experience outages, organizations may have few alternatives. To mitigate lock‑in, many enterprises adopt multi‑cloud or hybrid strategies. Industry cloud‑architecture guidance — including CNCF and NIST — recommends designing systems to be cloud‑agnostic by abstracting workloads through containerization and orchestration. This approach preserves bargaining power and reduces dependence on proprietary services. Multi-cloud approaches let businesses diversify vendors and fine‑tune infrastructure for specific performance needs.
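
At the code level, the core of that guidance is an abstraction boundary: the application programs against a narrow, provider-neutral interface, and each provider lives behind a swappable adapter. A minimal Python sketch of the pattern follows; the class and method names are our own illustration, not a standard API.

```python
from abc import ABC, abstractmethod
from pathlib import Path

class ObjectStore(ABC):
    """Provider-neutral contract the application codes against."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class LocalStore(ObjectStore):
    """On-prem / private-cloud backend writing to a mounted filesystem."""
    def __init__(self, root: str):
        self.root = Path(root)
    def put(self, key: str, data: bytes) -> None:
        path = self.root / key
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)
    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()

class S3Store(ObjectStore):
    """Public-cloud backend; only this adapter imports the vendor SDK."""
    def __init__(self, bucket: str):
        import boto3
        self.bucket, self.s3 = bucket, boto3.client("s3")
    def put(self, key: str, data: bytes) -> None:
        self.s3.put_object(Bucket=self.bucket, Key=key, Body=data)
    def get(self, key: str) -> bytes:
        return self.s3.get_object(Bucket=self.bucket, Key=key)["Body"].read()

# Swapping providers is a one-line change at composition time:
store: ObjectStore = LocalStore("/mnt/archive")  # or S3Store("analytics-exports")
```

Containers and orchestrators extend the same idea to the runtime layer: the workload is packaged once and scheduled wherever capacity, price, or policy dictates.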

Operational control is yet another driver. Cloud environments, while scalable, operate within predefined service constraints. On‑premises solutions allow organizations to deploy specialized hardware, apply customized security measures, and optimize systems for exact performance benchmarks. They also enable fine‑grained data‑governance workflows and direct control over compliance audits. For companies with unique hardware configurations or highly regulated workflows, this level of control is invaluable.

Crafting a Balanced Blueprint 

Most workloads gain significant advantages from running in the cloud, where organizations can tap into virtually unlimited elasticity, scale on demand, and adopt new services at a pace that traditional environments cannot match. 

Still, the future of enterprise IT is not a binary choice between cloud and on-prem; it is most likely hybrid and multi-cloud. Each organization therefore needs to strike its own balance among the available environments, creating a strategic blend.

Guidance recommends evaluating applications on a case‑by‑case basis to determine where each workload is best placed, weighing data sensitivity, demand variability, and integration requirements. One practical blueprint is to run steady‑state core systems on on‑prem infrastructure and burst to the cloud when demand spikes, as the sketch below illustrates. A hybrid approach delivers resilience, cost efficiency, and the agility needed to support changing business priorities.
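
In code, that routing rule is almost trivial, which is part of its appeal. A toy Python sketch, where the threshold and sensitivity labels are illustrative assumptions:

```python
BURST_THRESHOLD = 0.80  # burst above 80% on-prem utilization (illustrative)

def choose_target(onprem_utilization: float, data_sensitivity: str) -> str:
    """Decide where a job runs under a steady-state-plus-burst blueprint."""
    if data_sensitivity == "regulated":
        return "on-prem"   # residency and compliance trump elasticity
    if onprem_utilization >= BURST_THRESHOLD:
        return "cloud"     # let elastic capacity absorb the spike
    return "on-prem"       # steady-state work runs on owned, already-paid-for capacity

# A 92%-utilized cluster sends a non-sensitive batch job to the cloud:
print(choose_target(0.92, "general"))  # -> "cloud"
```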

How Data Intensity Can Help

As multi‑cloud experts and leaders in Oracle Fusion Cloud Applications, Data Intensity has guided hundreds of enterprises through this decision process. Our role is not to promote a “public cloud only” agenda, but to partner with you to design the right mix of environments. We can help:

  • Assess workloads: Identify which applications benefit most from cloud scalability and which are better suited to on‑prem or private environments. For latency‑critical or highly regulated workloads, we advocate keeping them close to your users or data.

  • Plan phased migrations: For legacy UNIX systems, we often recommend refreshing on‑prem hardware or using specialized private clouds while developing a roadmap to modernize your applications for Linux and the public cloud.

  • Optimize cost and governance: Implement FinOps practices to prevent cost sprawl and design hybrid architectures that provide predictable spending.

  • Ensure compliance: Navigate data‑sovereignty laws, regulatory frameworks, and IP considerations, evaluating sovereign cloud platforms where regulations demand guaranteed in‑country data residency.

  • Leverage multi‑cloud solutions: Build a cloud‑agnostic solution using containerization and orchestration to avoid vendor lock‑in and increase resilience.

At Data Intensity, we believe infrastructure decisions should be a living, breathing blueprint, not a mandate set in stone. By understanding your unique workloads, risk tolerance, and business goals, we can craft a hybrid or multi‑cloud strategy that maximizes agility while preserving control. In a world where the cloud is an essential tool but not the answer to every problem, having a trusted partner to navigate the complexity is invaluable.
