
AI Infrastructure Startup Pipeshift Raises $2.5 Mn in Seed Round Led by Y Combinator, SenseAI Ventures

  • As companies rush to deploy open-source AI models, Pipeshift unveils a modular platform that makes deployment 30x more efficient.
Over 80% of enterprises are turning to open-source AI models. The challenge, however, isn't accessing powerful models; it's deploying them efficiently and securely. Today, companies must stitch together more than 10 different components just to begin deployment, with each optimization requiring thousands of engineering hours. AI infrastructure startup Pipeshift announced its $2.5M seed round to solve this challenge, launching a new-age Platform-as-a-Service (PaaS) that enables engineering teams to orchestrate AI workloads across any infrastructure, cloud or on-premises, with unprecedented speed and control.

(L to R: Pranav Reddy, Arko Chattopadhyay, Enrique Ferrao)

This round was led by Y Combinator and SenseAI Ventures, with additional participation from Arka Venture Labs, Good News Ventures, Nivesha Ventures, Astir VC, GradCapital, and MyAsiaVC. Seasoned Silicon Valley angels like Kulveer Taggar (CEO of Zuess), Umur Cubukcu (CEO of Ubicloud and former Head of PostgreSQL at Azure), and Krishna Mehra (former Head of Engineering at Meta and co-founder of Capillary Technologies) also joined the round.

Unlike existing players, who are GPU brokers offering one-size-fits-all solutions, Pipeshift understands the enterprise need for control and flexibility of infrastructure and offers an end-to-end MLOps stack for enterprises to train, deploy, and scale open-source GenAI models — LLMs, vision models, audio models, and image models — across any cloud or on-prem GPUs. As a result, enterprises can deploy their AI workloads in production faster and more reliably. Additionally, as more model and hardware architectures come into the market, Pipeshift future-proofs infrastructure investments: its modular MLOps stack lets enterprises bring down their GPU infrastructure costs without any additional engineering effort.

“2025 marks the year when GenAI transitions into production, and engineering teams are witnessing the benefits of using open-source models in-house. This offers high levels of privacy and control alongside enhanced performance and lower costs. However, this is a complex and expensive process involving multiple components being stitched together,” said Arko Chattopadhyay, Co-Founder and CEO of Pipeshift. He added, “Pipeshift's enterprise-grade orchestration platform eradicates the need for such extensive engineering investments by not only simplifying deployment but also maximizing the production throughput.”

“Enterprises prefer open-source GenAI for the benefits of privacy, model ownership, and lower costs. However, transitioning GenAI to production remains a complex and expensive process requiring multiple components to be stitched together,” said Rahul Agarwalla, Managing Partner of SenseAI Ventures. He added, “Pipeshift's enterprise-grade orchestration platform eliminates the need for such extensive engineering investments by not only simplifying deployment but also maximizing the production throughput.”

The timing for Pipeshift is significant. As AI reshapes markets and redefines competition, companies know the rewards for seizing the moment are immense. However, security and data privacy risks loom large, demanding protection for proprietary IP. These challenges compound in a rapidly evolving technology landscape where missteps lead to expensive delays and lost opportunities. Adding to this complexity is the uniqueness of every business problem: no two AI strategies are the same, and every deployment must align with the distinct needs of the organization. Pipeshift addresses this by combining the flexibility and precision of open-source AI models with the scalability of its enterprise MLOps platform, letting businesses overcome these challenges while managing resource demands and ensuring compliance, all without losing sight of their broader goals.

Having already worked with over 30 companies including NetApp, Pipeshift aims to become the trusted partner for organizations looking to unlock AI's potential while maintaining control of their infrastructure and data.

Anu Mangaly, Director of Software Engineering at NetApp, said, “Pipeshift’s ability to orchestrate existing GPUs to deliver >500 tokens/second for models like Llama 3.1 8B without any compression or quantization of the LLM is extremely impressive, allowing businesses to reduce their compute footprint and costs in production, while delivering enhanced user experiences that are also private and secure.” She also shared, “At NetApp, we understood the enterprise need for a single data fabric across cloud, on-prem, and hybrid setups. Pipeshift's orchestration allows enterprises to unlock the same potential from the new generation of AI models, all within their infrastructure.”
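For context on the figure quoted above: tokens/second is simply the number of generated tokens divided by wall-clock time. A minimal sketch of that calculation follows; it is illustrative only and does not use Pipeshift's actual API — the `generate` callable stands in for any model invocation.

```python
import time

def tokens_per_second(num_tokens: int, elapsed_seconds: float) -> float:
    """Throughput = generated tokens / wall-clock seconds."""
    if elapsed_seconds <= 0:
        raise ValueError("elapsed time must be positive")
    return num_tokens / elapsed_seconds

def measure(generate, prompt: str) -> float:
    """Time one generation call and report its throughput.

    `generate` is a hypothetical stand-in: any callable that takes a
    prompt and returns the list of generated tokens.
    """
    start = time.perf_counter()
    tokens = generate(prompt)
    elapsed = time.perf_counter() - start
    return tokens_per_second(len(tokens), elapsed)

# Example: 1024 tokens generated in 2.0 seconds -> 512 tokens/second,
# which would clear the >500 tok/s figure cited in the article.
print(tokens_per_second(1024, 2.0))  # 512.0
```

Real benchmarks would additionally separate prefill from decode time and average over many requests, but the headline number reduces to this ratio.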

IndianWeb2.com © all rights reserved