    Featured

    Weak Data Infrastructure Limits GenAI ROI

    By Kavish · October 25, 2025 · 6 Mins Read


    Despite billions spent on generative AI, most enterprises still fail to see measurable ROI. A new analysis suggests the problem lies not in algorithms or ambition but in the hidden layer beneath — data infrastructure. Experts say storage, scalability, and performance bottlenecks are holding back enterprise AI from moving beyond pilot projects into profit-driving production.

    A recent Massachusetts Institute of Technology (MIT) study found that despite U.S. companies investing an estimated $30–40 billion in generative artificial intelligence (GenAI), 95% have seen no measurable return, with only 5% successfully deploying tools at scale. According to the study, the issue is not infrastructure or talent but the technology itself.

    Current AI systems lack memory, adaptability, and the ability to integrate into mission-critical workflows, according to the researchers. Numerous outlets have reported on the MIT study, though it is not available directly from MIT.

    A key finding was the vast chasm between a small group of companies that extract millions of dollars in value from AI and the vast majority, which see zero measurable impact on their profit and loss statements. The study identified four primary reasons for this divide.

    Table of Contents

    • Pitfalls That Derail GenAI Projects
    • Why Weak Storage Cripples AI ROI
    • Traditional Storage Risks Mission-Critical AI Workflows
    • Resolving the Toughest Data Challenges
    • Solving Old-Tech Issues

    Pitfalls That Derail GenAI Projects

    Most GenAI initiatives fail at the pilot stage. Only 5% of over 300 public implementations successfully scale to production with a measurable impact. The culprit is flawed enterprise integration, not the quality of the AI models themselves: the tools fail to learn from feedback and are not woven into daily operations.

    Another contributing factor to the low ROI results is that employees are using “shadow AI economy” tools, such as ChatGPT, independently. A further reason for failure is mismatched priorities when allocating AI budgets. Sales and marketing receive roughly 50% of the funding, even though back-office automation is more likely to yield significant, measurable returns.

    The ROI failure extends across all corporate layers, according to Björn Kolbeck, CEO and co-founder of Quobyte. He noted that some failures stem from products forced onto users by CEOs or from half-baked “AI features.” On the technical side, models are often delayed or undertrained due to weak infrastructure, with storage frequently being the primary bottleneck.

    “All suffer if you can’t feed GPUs at scale — [in terms of] memory, adaptability, and integration,” he told TechNewsWorld.

    Why Weak Storage Cripples AI ROI

    Björn Kolbeck, CEO and co-founder of Quobyte

    One of the major mistakes Kolbeck sees corporations make is investing billions while overlooking the storage needed to support their AI infrastructure. That oversight, he said, leads to three key failure factors: festering silos, lack of performance, and uptime dilemmas.

    The most critical resource for AI is training data. When companies scatter that data across multiple silos, data scientists lose access to essential details.

    “Storage systems must be able to scale and provide unified access to enable an AI data lake, a centralized and efficient storage for the entire company,” he observed.
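
    To make the idea concrete, here is a minimal Python sketch of what unified access buys a training job. The mount points are hypothetical, and the silo list merely stands in for whatever mix of NAS, HPC, and cloud storage a company has accumulated.

    from pathlib import Path

    # Hypothetical mount points: three departmental silos vs. one data lake.
    SILOS = [Path("/mnt/sales_nas"), Path("/mnt/research_hpc"), Path("/mnt/cloud_cache")]
    DATA_LAKE = Path("/mnt/datalake")  # a single scale-out namespace

    def samples_from_silos():
        # N separate mounts: N credentials, N failure modes, N copies to reconcile
        return [p for silo in SILOS for p in silo.rglob("*.parquet")]

    def samples_from_lake():
        # one namespace: one mount, one access-control surface, no copies
        return list(DATA_LAKE.rglob("*.parquet"))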

    A lack of performance sets in when the storage system cannot keep up with the demands of the GPUs used for training or fine-tuning. This causes expensive resources to sit idle, frustrates data scientists, and delays projects.

    “Similarly, when storage solutions aren’t built for maximum performance and availability – like many HPC storage systems – you end up with the same problem: delayed projects,” he warned.
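
    Starvation of this kind is measurable. The sketch below, assuming a PyTorch-style training loop (the dataset paths and train_step are placeholders), splits each epoch's wall time into time spent waiting on the data pipeline versus time spent computing. If the wait share dominates and stays high after adding loader workers, the bottleneck is storage, not the model.

    import time
    import torch
    from torch.utils.data import DataLoader, Dataset

    class FileDataset(Dataset):
        """Stand-in dataset whose reads hit the shared storage tier."""
        def __init__(self, paths):
            self.paths = paths
        def __len__(self):
            return len(self.paths)
        def __getitem__(self, i):
            with open(self.paths[i], "rb") as f:  # the actual storage read
                return torch.frombuffer(bytearray(f.read()), dtype=torch.uint8)

    def profile_epoch(loader, train_step):
        """Split wall time into 'waiting on data' vs. 'computing'."""
        wait = compute = 0.0
        t0 = time.perf_counter()
        for batch in loader:               # blocks while workers fetch from storage
            t1 = time.perf_counter()
            train_step(batch)              # the GPU work
            t2 = time.perf_counter()
            wait, compute = wait + (t1 - t0), compute + (t2 - t1)
            t0 = t2
        busy = 100 * compute / (wait + compute)
        print(f"io wait {wait:.1f}s, compute {compute:.1f}s -> GPUs busy {busy:.0f}%")

    # usage sketch: profile_epoch(DataLoader(FileDataset(paths), batch_size=32,
    #                                        num_workers=8), train_step)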

    Traditional Storage Risks Mission-Critical AI Workflows

    The MIT report noted that successful AI deployments integrate at scale. That requires fault-tolerant storage.

    Traditional storage usually means enterprise storage arrays. While these are reliable, they cannot scale out, Kolbeck cautioned.

    “Early AI projects may work well, but as soon as these projects grow in size [as in more GPUs], these arrays tip over, and that’s when mission-critical workflows grind to a halt,” he said.

    Kolbeck explained why a scale-out architecture, rather than a scale-up approach, is the better option for handling the massive and unpredictable data demands of modern AI and ML, citing his company's experience in making that transition.

    Quobyte provides a parallel file system that turns commodity servers into a high-performance, scalable storage solution. Scale-up solutions have always failed at that job in the past, Kolbeck said, so Quobyte settled on scale-out.

    The company saw the same strain in HPC, where vector machines gave way to clusters; in computer chips, where modern CPUs are scale-out designs; and in the cloud.

    The same principle applies to AI training. If you can't scale out, you are limited in how many models you can train or fine-tune and in how large they can be.

    “The storage needs to keep up with this horizontal scaling. When you add GPUs, your storage needs to be able to scale out in lockstep,” he said.
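
    The arithmetic behind “lockstep” is straightforward. As an illustration only (the 2 GB/s per-GPU figure is an assumption, not a vendor spec), the aggregate read bandwidth the storage tier must sustain grows linearly with GPU count:

    # Illustrative numbers, not vendor specs: if each GPU must be fed at
    # roughly gbps_per_gpu during training, the storage tier's aggregate
    # read bandwidth must grow linearly with the GPU count.
    def required_read_bandwidth(num_gpus, gbps_per_gpu=2.0):
        """Aggregate sequential-read GB/s needed to keep all GPUs busy."""
        return num_gpus * gbps_per_gpu

    for gpus in (8, 64, 512):
        print(f"{gpus:4d} GPUs -> ~{required_read_bandwidth(gpus):.0f} GB/s sustained")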

    Resolving the Toughest Data Challenges

    AI workflows involve a mix of small and large files. Consider the massive performance requirements that arise when many GPUs access data in parallel, as well as the need to manage multiple users with varying requirements on the same storage system.

    “Developing and training AI technology is still a very experimental process and requires the infrastructure — including storage — to adapt quickly when data scientists develop new ideas,” Kolbeck noted.

    Real-time performance analytics are critical. Storage administrators need to be able to precisely identify how applications, such as training or other pipeline phases, impact the storage. Most data scientists lack deep visibility into storage, and storage administrators need this information to make informed decisions about how to modify, optimize, and expand the storage system.
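
    A hedged sketch of that kind of visibility, using only the Python standard library: tag every storage read with the pipeline phase that issued it, so an administrator can see which workload drives the load. A real deployment would export these counters to a monitoring system rather than a dict, and the file path in the usage line is hypothetical.

    import time
    from collections import defaultdict
    from contextlib import contextmanager

    # phase name -> accumulated read statistics
    read_stats = defaultdict(lambda: {"calls": 0, "seconds": 0.0})

    @contextmanager
    def traced_read(phase):
        """Attribute the wall time of a storage read to a pipeline phase."""
        start = time.perf_counter()
        try:
            yield
        finally:
            stats = read_stats[phase]
            stats["calls"] += 1
            stats["seconds"] += time.perf_counter() - start

    # usage: wrap reads issued by a given phase (path is hypothetical)
    with traced_read("training"):
        with open("/mnt/datalake/shard-0001.bin", "rb") as f:
            data = f.read()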

    Quobyte’s policy-based data management engine rapidly adapts to changing business, user, and workload requirements, providing complete control. Users can change how and where they store files and organize them with a few clicks, he added.
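
    The following is a hypothetical illustration of what policy-based placement can look like, not Quobyte's actual API: declarative rules map file metadata to a storage tier and replication level, so changing a policy changes where data lands without touching application code.

    # Hypothetical policy-engine sketch -- not Quobyte's actual API.
    # Each rule maps file metadata to a placement decision.
    POLICIES = [
        {"match": {"suffix": ".ckpt"},      "tier": "nvme", "replicas": 2},
        {"match": {"dir": "/datalake/raw"}, "tier": "hdd",  "replicas": 3},
        {"match": {"label": "hot"},         "tier": "nvme", "replicas": 3},
    ]
    DEFAULT = {"tier": "hdd", "replicas": 3}

    def placement_for(meta):
        """Return the first rule whose criterion matches (sketch semantics)."""
        for rule in POLICIES:
            m = rule["match"]
            if (("suffix" in m and meta["name"].endswith(m["suffix"])) or
                    ("dir" in m and meta["path"].startswith(m["dir"])) or
                    ("label" in m and m["label"] in meta.get("labels", ()))):
                return {"tier": rule["tier"], "replicas": rule["replicas"]}
        return DEFAULT

    print(placement_for({"name": "model.ckpt", "path": "/datalake/ckpts/model.ckpt"}))
    # -> {'tier': 'nvme', 'replicas': 2}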

    Solving Old-Tech Issues

    Kolbeck described traditional enterprise storage as built around 30-year-old technology, including the NFS protocol, which Sun Microsystems designed in 1984. This old-school approach cannot keep up with the scale-out requirements of AI.

    His favorite examples are Yahoo and Google. Yahoo built its infrastructure on NFS-based enterprise storage appliances. Google, on the other hand, built its entire infrastructure on software storage using distributed systems technology on cheap servers.

    “Thinking that the same recycled storage technology will now enable companies to run successful AI is more like wishful thinking,” he suggested.

    Building the infrastructure for successful AI projects requires thinking like a hyperscaler — a philosophy central to Quobyte’s approach. The company’s software-defined storage system applies distributed systems algorithms to deliver reliable performance on commodity servers, scaling seamlessly from a handful of machines to entire data centers.


