
NetApp Unveils New High-Performance EF-Series Models

NetApp® (NASDAQ: NTAP), the Intelligent Data Infrastructure company, today announced the release of the next generation of NetApp EF-Series storage systems, built to power the most performance‑intensive workloads at scale. The introduction of EF50 and EF80 helps enterprises and neoclouds meet the growing demands of AI, high-performance computing (HPC), and transactional databases, including in emerging use cases like sovereign AI clouds and AI-powered manufacturing.

“Data is the key component to delivering business value for enterprises, underpinning performance-hungry workloads like AI or databases,” said Sandeep Singh, Senior Vice President and General Manager of Enterprise Storage at NetApp. “As businesses contend with ever-increasing data volumes and performance-intensive applications such as AI model training, AI inferencing, and high-performance computing, they need infrastructure that delivers speed, scalability, and efficiency without added complexity. NetApp delivers a comprehensive portfolio that addresses every stage of the AI data pipeline, from collecting and preparing data to feeding it to GenAI models that produce business insights. With the new EF-Series systems, purpose-built for extreme performance, we’re enabling customers to deploy and scale high-throughput, low-latency workloads quickly and efficiently, while reducing data center footprint and operational overhead.”

Coupled with high-performance parallel file systems like Lustre or BeeGFS, the new EF50 and EF80 systems accelerate HPC simulations and keep GPUs fully utilized with high-performance scratch space, helping organizations unlock new value and competitive advantage at the right price. Customers ranging from neocloud providers driving AI innovation to movie studios managing massive media libraries will benefit from not only performance and scalability but also robust security measures to safeguard sensitive information and prevent data loss.

The new EF-Series storage systems deliver over 110 GBps of read throughput and 55 GBps of write throughput, a 250 percent improvement over previous generations. With power efficiency of 63.7 GBps per kW and 1.5 PB of storage in 2U, the new systems deliver reliable high performance with efficient rack density at an affordable cost. With the EF-Series, organizations can:
  • Achieve high performance for data-intensive workloads, scaling capacity without sacrificing efficiency or latency
  • Balance budget requirements with high-performance needs to make better decisions faster
  • Simplify management and operational complexity with affordable block storage, streamlined deployment and support from NetApp’s technical experts
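The throughput and efficiency figures above can be sanity-checked with simple arithmetic. A minimal sketch, assuming "63.7 GBps per kW" refers to read throughput per kilowatt of system power (the implied power draw and rack math below are illustrative, not NetApp specifications):

```python
# Back-of-the-envelope check of the EF-Series figures quoted above.
read_gbps = 110.0               # quoted read throughput
efficiency_gbps_per_kw = 63.7   # quoted power efficiency

# If efficiency is read throughput divided by power, the implied
# system power draw is about 1.73 kW (hypothetical inference).
implied_power_kw = read_gbps / efficiency_gbps_per_kw
print(f"Implied system power: ~{implied_power_kw:.2f} kW")

# Rack density: 1.5 PB in 2U means a standard 42U rack could hold
# up to 21 such systems, ignoring switches and power distribution.
systems_per_rack = 42 // 2
print(f"Max capacity per 42U rack: ~{systems_per_rack * 1.5:.1f} PB")
```

The ~1.73 kW figure is only the value implied by dividing the two quoted numbers; actual power draw depends on configuration and workload.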

“As we navigate the AI era, many enterprises are finding that they need to maximize their raw performance to extract the most value from their data,” said Clayton Vipond, Senior Solution Architect at CDW. “The refreshed NetApp EF-Series delivers the throughput and capacity businesses need to scale high-powered workloads that transform data into insights and outcomes.”

"NetApp's EF-Series systems give Teradata the storage performance needed to support our most demanding workloads," said Sumeet Arora, Chief Product Officer, Teradata. "We appreciate that NetApp continues to invest in this technology, and with the enhanced performance of the new models, we look forward to exploring opportunities to reduce infrastructure complexity and support the AI and data modernization initiatives our customers care about."

“By delivering a high-performance storage system that supports parallel file systems like Lustre and BeeGFS, NetApp is making its mark as emerging industries, such as neocloud, emerge to support the AI era,” said Simon Robinson, Principal Analyst at Omdia. “Our research validates that AI workloads require a level of raw performance unmatched by any mainstream business workload to date. With the new EF-Series systems, NetApp is delivering a solution that addresses the performance needs of large-scale AI projects, whether model training or inference.”

The updated EF-Series builds on decades of durability and reliability from NetApp. With more than 1 million installations, it has a proven track record that customers can rely on.


L&T Vyoma and SRIT India Partner to Launch Sovereign Cloud for GovTech, Healthcare & Telecom

Larsen & Toubro-Vyoma, the AI-ready cloud and hyperscale data centre business vertical of L&T, has entered a strategic partnership with SRIT India Ltd, a Bengaluru-based IT solutions provider with expertise in large-scale e-health, e-governance, telecom and mission-critical systems.

With this collaboration, SRIT will introduce SRIT Cloud, a sovereign, India-localised cloud platform powered by Ministry of Electronics and Information Technology-compliant infrastructure from Vyoma. The solution is designed to meet the stringent requirements of public, joint and private sector entities in areas such as national security, governance, healthcare and telecom — ensuring data sovereignty, high availability and operational trust.

The partnership enables SRIT to offer a fully integrated application-to-infrastructure stack, powered by Vyoma’s sovereign cloud and hyperscale capabilities. They will deliver responsive scalability for high-growth digital public services, integrated support across applications and infrastructure, reduced downtime, improved performance for mission-critical systems and faster deployment of SRIT’s proven platforms in e-health, urban governance, intelligent transport management systems and telecom.

SRIT will make its entire suite of platforms — including its Ayushman Bharat Digital Mission and National Accreditation Board for Hospitals-certified E-Health System — available on a SaaS model, enabling hospitals, clinics, laboratories and government institutions to adopt these solutions at lower cost and complexity.

Similarly, SRIT’s flagship GovTech applications — such as automated building plan approvals, municipal governance, fire permit systems and smart transport platforms — will now be offered on SaaS, PaaS and IaaS through SRIT Cloud powered by Vyoma.

Commenting on the collaboration, Seema Ambastha, Chief Executive – Larsen & Toubro-Vyoma, said: “This partnership combines our sovereign cloud infrastructure with SRIT’s proven expertise in GovTech and HealthTech. Together, we are enabling secure, high-performance cloud services built in India, for India, thus advancing the nation’s most critical digital missions.”

Dr Madhu Nambiar, Founder & MD – SRIT India Ltd, added: “Partnering with Vyoma gives us a sovereign, high-reliability cloud foundation to deliver integrated solutions with assured performance, faster rollouts and proactive risk mitigation — strengthening our ability to serve millions of citizens and thousands of institutions securely and efficiently”.

As India accelerates adoption of cloud-based governance, digital health ecosystems and citizen platforms, the Vyoma-SRIT collaboration directly supports three national priorities: data sovereignty, high availability and affordable digital transformation. It establishes one of the most comprehensive application-plus-infrastructure service models for GovTech and HealthTech in the country.

About Larsen & Toubro - Vyoma

Larsen & Toubro-Vyoma is L&T’s sovereign, secure, and integrated AI cloud and hyperscale data centre business, engineered to deliver AI-ready, high-density compute for India and global enterprises. Built on L&T’s legacy of trust, precision and engineering excellence, Vyoma offers sovereign cloud platforms, GPU-as-a-Service, hyperscale colocation and mission-critical digital infrastructure that powers government, BFSI, healthcare, manufacturing and high-compute industries worldwide. Website: https://larsentoubrovyoma.com

About SRIT India Limited

Founded in 1999 and headquartered in Bengaluru, SRIT India Limited is a leading IT solutions provider and systems integrator with deep expertise in E-Health, E-Governance, Intelligent Transport Management Systems, Telecom/MSP and enterprise-grade digital platforms. SRIT’s applications serve governments, state agencies, hospitals, telecom operators and private enterprises across India and international markets. Website: https://www.sritindia.com

Amazon Commits $12.7B to India’s AI Growth, Empowering 15M SMBs and 4M Students by 2030

Amazon has announced a massive $12.7 billion investment in India’s AI and cloud infrastructure by 2030, aiming to empower over 15 million small businesses and provide AI literacy to 4 million government-school students.

Key highlights of Amazon’s India AI push

  • Investment size: $12.7 billion allocated to local cloud and AI infrastructure by 2030.
  • Target audience:
    • 15 million SMBs: Access to AI tools like seller assistants, generative listing tools, creative ad studios, and video generators.
    • 4 million students: AI literacy and career awareness programs in government schools.
  • Government alignment: Supports India’s AI Mission, focusing on accessibility, productivity, and digital inclusion.
  • Infrastructure expansion: Strengthening AWS capacity in Telangana and Maharashtra, extending access to tier-3 entrepreneurs.
  • Consumer benefits: More personalized shopping on Amazon.in through deeper AI integration, including its assistant Rufus.

Strategic implications

  • For SMBs: AI tools deliver enterprise-grade intelligence and lower advertising/content creation costs.
  • For students: AI literacy prepares youth for future tech careers and broadens access beyond elite institutions.
  • For India’s tech ecosystem: Positions India as a global AI hub and accelerates innovation via Amazon’s infrastructure.

Risks and challenges

  • Digital divide: Rural and underserved access requires strong local government partnerships.
  • Data privacy and regulation: Compliance with India’s DPDP Act could be complex for AI-driven services.
  • Competition: Heavy investments from Reliance Jio, Google, and Microsoft intensify market battles.
  • Infrastructure bottlenecks: Power, connectivity, and cloud adoption in smaller towns may slow rollout.

Big picture

Amazon’s bet goes beyond e-commerce—embedding into India’s digital public infrastructure. Targeting SMBs and students frames AI as both a growth engine and social equalizer, potentially redefining India’s role in the global AI economy by 2030.

Comparative AI Investment Strategies in India

Amazon’s $12.7B AI investment in India is part of a broader race among tech giants including Microsoft, Google, and Reliance Jio. Each company is targeting different audiences and infrastructure priorities to shape India’s AI future.

Key Highlights

  • Amazon: Focused on SMB empowerment and student AI literacy, aligned with India’s AI Mission.
  • Microsoft: Driving enterprise adoption and workforce skilling through Copilot and cloud expansion.
  • Google: Building India’s largest AI hub in Vizag, supporting startups and deep-tech ecosystems.
  • Reliance Jio: Positioning as India’s homegrown AI champion with giga-scale data centers and consumer-first AI.

Comparative Table

  • Amazon — Investment: $12.7B by 2030. Focus: AI tools for SMBs, AI literacy for students, cloud expansion. Target: 15M SMBs and 4M government-school students. Infrastructure: AWS expansion in Telangana and Maharashtra; supports India’s AI Mission. Positioning: embedding AI into commerce and education; democratizing access.
  • Microsoft — Investment: $3B over two years (2025–2027). Focus: cloud and AI infrastructure, skilling, Copilot ecosystem. Target: 10M people skilled by 2030, half a million by 2026. Infrastructure: new data centers, AI Innovation Network, government partnerships. Positioning: India as an AI-first nation; enterprise and workforce skilling.
  • Google — Investment: ~$15B over five years (2026–2030). Focus: AI hub in Vizag, AI Futures Fund for startups, consumer AI stack. Target: startups, developers, consumers. Infrastructure: AI hub in Andhra Pradesh; partnership with Accel Atoms; $5B co-investment with Adani. Positioning: building India’s largest AI infrastructure hub and a deep-tech startup ecosystem.
  • Reliance (Jio) — Investment: $11B by 2030. Focus: AI-native data centers and the “Reliance Intelligence” GenAI push. Target: enterprises, consumers, government. Infrastructure: JV with Brookfield and Digital Realty; 1 GW AI campus in Vizag. Positioning: aggressive GenAI play; “AI everywhere for every Indian” vision.


Key Takeaways

  • Amazon: Socially inclusive, targeting SMBs and students.
  • Microsoft: Enterprise adoption and workforce skilling.
  • Google: Infrastructure scale and startup ecosystems.
  • Reliance: Homegrown AI champion with giga-scale infra.

Risks & Challenges

  • Overlap in Vizag: Google and Reliance both building mega AI hubs in Visakhapatnam.
  • Regulatory compliance: Navigating India’s DPDP Act and evolving AI governance frameworks.
  • Digital divide: Amazon’s SMB/student rollout may face infra challenges in tier-3 towns.
  • Market saturation: $40B+ combined investments risk overcapacity unless demand scales rapidly.

Cisco and NVIDIA Introduce AI‑Native 6G Wireless Stack, Redefining Cloud and Enterprise Infrastructure

Cisco and NVIDIA have announced a broad set of AI infrastructure innovations designed to accelerate adoption of artificial intelligence across cloud, enterprise, and telecom sectors. The collaboration brings together Cisco’s networking and security expertise with NVIDIA’s AI computing leadership, marking what executives described as the beginning of the “largest data center build‑out in history.”

Spectrum‑X Powered Switches

At the center of the announcement is the Cisco N9100 Series data center switch, the first NVIDIA partner‑developed switch built on NVIDIA Spectrum‑X Ethernet technology. The switch is designed to deliver high‑performance, low‑latency networking for AI workloads and will be available with both NX‑OS and SONiC operating models. Cisco said the platform will serve as a Cloud Partner‑compliant reference architecture, enabling neocloud and sovereign cloud providers to deploy AI infrastructure at scale.

Enterprise AI Security and Observability

Cisco also expanded its Secure AI Factory with NVIDIA, a framework that integrates compute, networking, security, and observability into enterprise AI deployments. The initiative aims to give organizations end‑to‑end visibility and protection as they scale AI workloads, particularly in regulated industries. New ecosystem partnerships were announced to strengthen monitoring and compliance capabilities.

Telecom and 6G Readiness

In a move aimed at telecom operators, Cisco and NVIDIA unveiled the industry’s first AI‑native wireless stack for 6G networks. The stack is designed to handle ultra‑low latency and massive device connectivity, preparing carriers for the surge in AI‑driven traffic expected over the next decade. Analysts said the development could redefine mobile networks by enabling real‑time AI services at the edge.

Strategic Context

Executives from both companies emphasized that the innovations are not standalone products but part of a joint reference architecture for next‑generation AI deployments. “We are entering a new era where AI workloads will reshape every industry,” said a Cisco spokesperson. “Our partnership with NVIDIA ensures customers have the flexibility, interoperability, and scalability to build AI infrastructure securely and globally.”

Why It Matters

  • For Cloud Providers: A unified, NVIDIA‑compliant architecture accelerates AI adoption in sovereign and neocloud environments.
  • For Enterprises: Enhanced security and observability ensure safer AI deployments.
  • For Telecoms: The AI‑native 6G stack positions operators to deliver next‑generation services.

With these announcements, Cisco and NVIDIA are positioning themselves at the heart of the global AI infrastructure race, targeting the needs of hyperscalers, enterprises, and telecom operators alike.

Cassava Taps Accenture to Scale Sovereign AI Across Africa
Strive Masiyiwa, Cassava Founder & Executive Chairman

Cassava Technologies, a pan-African digital infrastructure powerhouse, has announced a strategic collaboration with global consulting giant Accenture to accelerate the rollout of sovereign AI capabilities across Africa. The partnership marks a pivotal moment in the continent’s digital evolution—one that blends cutting-edge technology with local relevance, regulatory alignment, and inclusive innovation.

Building Africa’s AI Backbone

At the heart of the collaboration is a shared vision: to enable African nations to harness artificial intelligence on their own terms. Accenture will deploy its AI Refinery™ platform alongside Cassava’s GPU-as-a-Service (GPUaaS), powered by NVIDIA’s high-performance AI infrastructure. This fusion will allow AI workloads to be processed within national borders, ensuring compliance with local data governance laws and reinforcing digital sovereignty.

The rollout begins in South Africa, with plans to expand into Egypt, Kenya, Morocco, and Nigeria—leveraging Cassava’s ultra-low-latency fibre broadband network and energy-efficient data centres. These “AI factories” will be equipped with thousands of GPUs, enabling scalable, secure, and context-aware AI development across sectors.

Local Context, Global Capability

Unlike generic AI deployments, Cassava and Accenture are prioritizing localized solutions that reflect Africa’s linguistic diversity, cultural nuances, and economic realities. From agriculture and healthcare to mining, telecom, and financial services, the initiative aims to deliver AI applications that are not only powerful but also deeply relevant.
  • Cassava CEO Ahmed El Beheiry described the initiative as a “nation-building story with inclusion at its centre.”
  • Accenture’s Mauro Macchi emphasized the opportunity to “reimagine operations” and “unlock new ways to create value” across the continent.

The Visionary Behind Cassava

This bold move is emblematic of the entrepreneurial ethos of Strive Masiyiwa, Cassava’s founder and executive chairman. A Zimbabwean-born billionaire and telecom pioneer, Masiyiwa is no stranger to building transformative infrastructure. He famously broke Zimbabwe’s telecom monopoly in the 1990s with Econet Wireless and has since become one of Africa’s most influential business leaders.
  • Masiyiwa is investing $720 million to build sovereign AI infrastructure across five African nations.
  • He serves on the boards of Netflix, the Gates Foundation, and National Geographic Society.
  • He is a signatory of the Giving Pledge, supporting education, public health, and youth empowerment.
His mantra, “Start small, think big,” is a call for Africa to become a creator, not just a consumer, of emerging technologies.

Trust, Compliance, and Inclusion

By keeping data within borders and tailoring AI to local realities, the Cassava–Accenture alliance aims to strengthen trust, foster compliance, and democratize access to advanced technologies. It’s a model that could inspire other regions grappling with the tension between global innovation and national sovereignty.

As Africa steps into the AI era, this partnership signals more than just technological progress—it’s a declaration of intent: to build, govern, and scale digital infrastructure that reflects the continent’s values, ambitions, and future.

Tech Mahindra Taps AMD to Power AI-Driven Infrastructure Across Hybrid and Multi-Cloud Ecosystems

Tech Mahindra (NSE: TECHM), a leading global provider of technology consulting and digital solutions to enterprises across industries, announced an agreement with AMD, the leader in high-performance and adaptive computing, to accelerate enterprise transformation through next-generation infrastructure, hybrid cloud, and AI adoption. The collaboration aims to empower enterprises across key sectors, including manufacturing, finance, telecommunications, and healthcare, to harness the full potential of AI-driven infrastructure.

Through this collaboration, Tech Mahindra will integrate AMD’s compute engines and infrastructure with its Cloud BlazeTech solution to drive AI adoption across enterprise workloads. It plans to develop new solutions to enable enterprises to optimize workloads across end-user devices, servers, and cloud infrastructure, including public, private, and hybrid environments. 

Mohit Joshi, CEO and Managing Director, Tech Mahindra, said, “Enterprises worldwide are scrambling to maximize ROI while navigating the complexity of hybrid and cloud-native ecosystems. Our strategic agreement with AMD is a step towards delivering next-generation hyper scalable solutions that seamlessly bridge on-site infrastructure with cloud-native capabilities. Through these solutions, we aim to enable customers to optimize performance across distributed environments without compromising speed, security, or control.”

Dr. Lisa Su, Chair and CEO, AMD said, “Together, AMD and Tech Mahindra will help enterprises accelerate their cloud transformation and AI adoption with the performance and efficiency they need to scale. By combining our EPYC processors and AMD Instinct accelerators with Tech Mahindra, we can create solutions that enable customers to deploy AI on compute infrastructure across hybrid and multi-cloud environments.”

Tech Mahindra and AMD are embarking on a multi-year collaboration with a comprehensive roadmap focused on infrastructure optimization and AI enablement. Leveraging leadership in compute and software capabilities from AMD, and Tech Mahindra's deep industry experience, this collaboration will empower customers to harness AI-driven innovation, delivering critical business value and operational outcomes.

About AMD

For more than 55 years AMD has driven innovation in high-performance computing, graphics and visualization technologies. Billions of people, leading Fortune 500 businesses and cutting-edge scientific research institutions around the world rely on AMD technology daily to improve how they live, work and play. AMD employees are focused on building leadership high-performance and adaptive products that push the boundaries of what is possible. For more information about how AMD is enabling today and inspiring tomorrow, visit the AMD (NASDAQ: AMD) website, blog, LinkedIn, Facebook and X pages.

About Tech Mahindra 

Tech Mahindra (NSE: TECHM) offers technology consulting and digital solutions to global enterprises across industries, enabling transformative scale at unparalleled speed. With 148,000+ professionals across 90+ countries helping 1100+ clients, Tech Mahindra provides a full spectrum of services including consulting, information technology, enterprise applications, business process services, engineering services, network services, customer experience & design, AI & analytics, and cloud & infrastructure services. It is the first Indian company in the world to have been awarded the Sustainable Markets Initiative’s Terra Carta Seal, which recognizes global companies that are actively leading the charge to create a climate and nature-positive future. Tech Mahindra is part of the Mahindra Group, founded in 1945, one of the largest and most admired multinational federation of companies.

Wipro and CrowdStrike Expand Alliance to Launch AI-Powered CyberShield MDR


Organizations today face an overwhelming volume of alerts from siloed security tools that fail to stop adversaries. Fragmented security operations across endpoints, cloud workloads, identity, and data drive complexity, increase costs, and create operational blind spots. Wipro CyberShield MDR, powered by CrowdStrike Falcon® Next-Gen SIEM, addresses these challenges by enhancing threat visibility, simplifying operations, and strengthening resilience against evolving threats.

Falcon Next-Gen SIEM combines native Falcon platform and third-party data with real-time threat intelligence and AI-powered automation to supercharge threat detection and response across the enterprise. Leveraging Falcon Next-Gen SIEM and Wipro's global ecosystem – along with Wipro Ventures’ portfolio companies Simbian and Tuskira – CyberShield MDR delivers intelligent defense, proactive breach protection, continuous detection, and rapid response to keep organizations resilient and future-ready against AI-driven threats. Wipro’s cybersecurity experts will manage and host the services from eight Cyber Defense Centers (CDCs) strategically located around the globe.

“Wipro’s CyberShield platform, powered by CrowdStrike’s AI-native product suites and strengthened by our security ecosystem will help enterprises contain threats swiftly and ensure continuity of digital operations,” said Tony Buffomante, Senior Vice President & Global Head – Cybersecurity & Risk Services, Wipro Limited. “This integrated platform approach enables AI automated workflows, prevents lateral threat movement, and eliminates potential security gaps that fragmented solutions often miss.”

“The Falcon platform supercharges Wipro’s CyberShield Managed Security Services to deliver real-time attack detection, faster response and outcomes that stop breaches,” said Daniel Bernard, Chief Business Officer, CrowdStrike. “Together, we’re simplifying operations across Wipro’s ecosystem of partners — reducing costs, accelerating time-to-value and giving customers the confidence to stay ahead of today’s adversaries.”

Wipro CyberShield℠ MDR unified MSS will be launched at CrowdStrike Fal.Con 2025.

About Wipro

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading AI-powered technology services and consulting company focused on building innovative solutions that address clients’ most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. Wipro Innovation Network, which brings together our clients, partners, academia, and tech communities, reflects our commitment to client-centric co-innovation. As a part of this, the Innovation Labs and Partner Labs, located across the globe, allow us to collaborate with clients to solve real-world challenges and showcase cutting-edge industry solutions that explore the future of technology. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

TCS and C-DAC Join Forces to Build India’s Own Cloud

In a major stride toward digital sovereignty, Tata Consultancy Services (TCS) has signed a Memorandum of Understanding (MoU) with the Centre for Development of Advanced Computing (C-DAC) to co-develop technologies that will form the backbone of India’s sovereign cloud infrastructure.

The collaboration aims to create a secure, scalable, and AI-enabled cloud ecosystem tailored to the needs of India’s public sector, including critical applications in healthcare, emergency response, and governance.

A Cloud Built for India, by India

The sovereign cloud initiative is designed to ensure that sensitive government data and citizen services remain within national borders, aligning with India’s growing emphasis on data localization and digital autonomy. The platform will be built on OpenStack architecture, enhanced by indigenous innovations from C-DAC and enterprise-grade deployment capabilities from TCS.

“This partnership marks a pivotal moment in India’s journey toward technological self-reliance,” said a senior official from C-DAC. “Together, we’re building a cloud that reflects India’s values, priorities, and security needs.”

Real-World Impact

The sovereign cloud will host mission-critical applications such as:
  • e-Sanjeevani: India’s national telemedicine service
  • Dial 112: Emergency response systems
  • Smart city platforms and defence-grade workloads
  • Banking and financial services requiring high compliance and data protection
By leveraging TCS’s global cloud expertise and C-DAC’s research capabilities, the partnership is expected to accelerate deployment timelines and ensure robust performance across sectors.

Strategic Significance

The move comes amid growing global concerns over data privacy and dependency on foreign hyperscalers. India’s sovereign cloud is seen as a cornerstone of the Digital India mission, reinforcing national cybersecurity and enabling interoperable, cost-effective cloud services for government agencies.

Industry analysts view this as a model for other nations seeking to balance innovation with sovereignty. With this MoU, India signals its intent to lead in ethical, secure, and inclusive cloud infrastructure development.

Google and Reliance Unveil Dedicated Cloud Region in Jamnagar to Power India’s AI Future

In a landmark announcement at Reliance Industries’ 48th Annual General Meeting, Google CEO Sundar Pichai revealed the launch of a dedicated Google Cloud region in Jamnagar, built exclusively for Reliance. The move marks a pivotal step in India’s digital transformation, aimed at accelerating AI adoption across industries and democratizing access to advanced computing infrastructure.

Purpose-Built for AI Innovation

The Jamnagar region will host Google Cloud’s latest-generation AI hypercomputer, offering full-stack environments for generative AI development, model training, and enterprise deployment. Designed and powered by Reliance, the facility will run entirely on green energy, aligning with the company’s sustainability goals.

“This region is purpose-built to support India’s AI ambitions — from large enterprises to kirana stores,” said Sundar Pichai.
“It’s a new chapter in India’s technology journey,” added Mukesh Ambani.

Infrastructure Highlights

  • Hypercomputer Deployment: Optimized for large-scale generative models and AI-powered applications
  • Green Energy Backbone: Powered by Reliance’s renewable energy assets
  • Jio Fiber Integration: High-capacity connectivity linking Jamnagar to metros like Mumbai and Delhi
  • Secure Data Environments: Designed for enterprise-grade governance and compliance

Strategic Impact

The Jamnagar region will serve as a launchpad for AI-first services across sectors including:
  • Retail, telecom, energy, and financial services
  • Startups, SMBs, and public sector organizations
  • Developers and researchers building India-centric AI solutions
This initiative complements Reliance’s newly launched Reliance Intelligence, a wholly owned subsidiary focused on building consumer and enterprise-grade AI products.

National Significance

The announcement aligns with India’s broader push for sovereign AI infrastructure under the ₹10,370 crore IndiaAI Mission. By localizing compute power and enabling scalable AI deployment, the Jamnagar region positions India as a serious contender in the global AI race.

What’s Next

The cloud region is expected to go live in early 2026, with pilot deployments already underway in Reliance’s retail and telecom verticals. Analysts view this as a strategic convergence of infrastructure, innovation, and national ambition — one that could redefine India’s digital economy.

Tata Communications, AWS Unveil One of India’s Largest AI-Optimized Network Deployments

Tata Communications, a leading global communications technology player, in collaboration with Amazon Web Services (AWS), an Amazon.com, Inc. company, announced that the companies will enable an advanced AI-ready network in India. The strategic collaboration will establish a high-capacity, resilient long-distance network connecting three major AWS infrastructure locations to bolster generative AI adoption and cloud innovation in India.

The collaboration marks one of India’s largest-ever network deployments by Tata Communications in terms of size, scale, and bandwidth. AWS has two data centre Regions in India, located in Mumbai and Hyderabad, and AWS Direct Connect and AWS Edge Network infrastructure in Chennai. The network will connect AWS infrastructure in Mumbai, Hyderabad, and Chennai through a comprehensive, national long-haul network, creating a powerful infrastructure backbone for AI and machine learning (ML) workloads across India.

Key highlights of the partnership:
  • Next-Generation Network Connectivity: Leverage Tata Communications’ state-of-the-art network to provide high-bandwidth, low-latency connections essential for AI workloads. AWS will continue to deploy its custom network technologies on this network, enabling industry-leading security, availability, and performance between AWS locations
  • Enablement of AI-Powered Applications: Further enable businesses across India to build, train, and deploy scalable AI applications, fostering innovation in sectors like healthcare, finance, and education
  • Commitment to Security and Compliance: Ensure robust security measures and adhere to regulatory standards to protect data integrity and privacy
The new network will help provide leading network performance and scalability that are critical for next-generation AI applications. By leveraging Tata Communications’ state-of-the-art network, AWS will further empower Indian businesses to develop generative AI applications and train AI models with unprecedented speed and efficiency. The network will feature express routes with ultra-low latency, helping ensure the seamless data transfer and processing capabilities essential for compute-intensive AI and ML workloads.

“This association marks our largest ever National Long-Distance program and showcases Tata Communications’ unparalleled capability to support large-capacity, complex projects requiring scaled network solutions,” said Genius Wong, Executive Vice President, Core and Next-Gen Connectivity Services and Chief Technology Officer, Tata Communications. “AI is transforming industries globally, and our collaboration with AWS positions us at the forefront of this revolution in India. Together, we’re enabling a network that not only meets the current demands but anticipates the needs of tomorrow. By building a tailored network solution, we’re ushering in an AI era in India, reinforcing our position as the long-term partner of choice for global technology leaders.”

“We are excited to work with Tata Communications to establish an advanced in-country network in India,” said Jesse Dougherty, Vice President for Network Edge Services at Amazon Web Services. “The infrastructure is designed to support the most data-intensive workloads, like 5G, generative AI, and high-performance computing. This collaboration with Tata Communications will further enable our customers in India to innovate at scale with cloud and generative AI, and drive growth in India’s rapidly expanding digital economy.”

L&T-Cloudfiniti Forms Strategic Partnerships with 3 Leading AI Startups

L&T Cloudfiniti Forms Strategic Partnerships with Leading AI Startups
To drive innovation in healthcare, life sciences, and vertical AI solutions in India and across the globe

L&T-Cloudfiniti, a leading technology solutions provider, is proud to announce new strategic partnerships with three leading AI startups, including one based in Europe.

The collaborations will focus on groundbreaking developments in healthcare, life sciences, vertical AI, and conversational technologies in India and across the globe by harnessing cutting-edge AI models to transform key industries and drive digital innovation in multiple sectors.

The three partnerships L&T-Cloudfiniti has entered into are:
  • Hanooman AI (Healthcare & Life Sciences): L&T-Cloudfiniti has partnered with Hanooman AI, a pioneering AI startup in the healthcare and life sciences space. This partnership will leverage Hanooman’s advanced AI-powered tools to accelerate healthcare transformation in India. By integrating AI-driven insights into healthcare practices, Hanooman is poised to improve patient outcomes, optimise treatment pathways, and advance medical research in life sciences.
  • CoRover (Conversational & Attentive AI): L&T-Cloudfiniti has also teamed up with CoRover, an AI-driven startup focused on creating conversational AI and foundational models (like BharatGPT). CoRover’s innovative solutions offer the ability to enhance user experiences with more natural, human-like conversations across various sectors, including customer service, education, and more. With this collaboration, L&T-Cloudfiniti aims to bring real-time, personalised communication and AI-enabled attentiveness to the forefront of businesses in India.
  • Pidima AI (Agentic AI for Regulated Industries): The third partnership is with Pidima, a UK-based startup revolutionising mission-critical industries with its Agentic AI platform. By automating test specification and compliance documentation, Pidima delivers 10x faster outcomes, reduces costs by millions, and elevates efficiency to extraordinary heights. Pidima’s solutions are designed for regulated sectors such as healthcare, Medtech, automotive, and aerospace, where precision and compliance are non-negotiable. The collaboration will significantly enhance L&T-Cloudfiniti’s AI offerings in these critical domains, paving the way for smarter, more efficient, and highly compliant operations.
Commenting on the development, Ms Seema Ambastha, Chief Executive, L&T-Cloudfiniti, said: “These collaborations reflect our commitment to driving AI adoption across industries, from healthcare to aerospace, by partnering with the brightest minds and the most innovative companies in the AI landscape. The collective expertise and disruptive technologies from these startups will play a crucial role in shaping the future of AI and will enable L&T-Cloudfiniti to provide cutting-edge solutions that deliver tangible business outcomes for clients globally.”

Vishnuvardhan Pogunulu Srinivasulu, CEO & Founder, Hanooman AI, commented: “Partnering with L&T-Cloudfiniti, Hanooman AI pioneers generative healthcare solutions - scalable, secure, and globally compliant. With Cipher AI, we’re reimagining care for Bharat, making it accessible while advancing precision medicine for the world, sparking a revolution in global health outcomes. From reversing diabetes to discovering new drugs to deciphering genomics, the future of healthcare is intelligent, inclusive, and is here.”

Ankush Sabharwal, Founder & CEO, CoRover AI, added: “Our collaboration with L&T-Cloudfiniti allows us to rapidly scale our conversational AI solutions on secure, high-performance GPU infrastructure, reaching global enterprises effectively. Together, we aim to redefine customer interactions, drive operational excellence, and deliver exceptional business value.”

John Marcus, Founder & CEO of Pidima AI, shared: “We are thrilled to partner with L&T-Cloudfiniti, a company that shares our vision of transforming enterprise efficiency through AI. This collaboration not only strengthens our presence in India but also accelerates our mission to empower mission-critical enterprises with smarter, faster, and more precise solutions.”

IBM Teams Up with NVIDIA to Supercharge AI Development on the Cloud

IBM Teams Up with NVIDIA to Supercharge AI Development on the Cloud

IBM has announced a collaboration with NVIDIA to enhance AI capabilities at scale. This partnership focuses on integrating NVIDIA's AI Data Platform technologies with IBM's offerings, such as IBM Fusion and watsonx.

Key highlights of this collaboration include:
  • Content-Aware Storage (CAS): IBM plans to introduce CAS in its hybrid cloud infrastructure, enabling enterprises to process unstructured data more effectively for AI applications like retrieval-augmented generation (RAG) and reasoning.
  • Enhanced AI Accessibility: IBM aims to integrate its watsonx platform with NVIDIA's technologies, allowing organizations to develop and deploy AI models across various cloud environments.
  • Support for Compute-Intensive Workloads: IBM Cloud will expand its NVIDIA accelerated computing portfolio, including the availability of NVIDIA H200 Tensor Core GPU instances, designed for high-performance AI workloads.
This collaboration is expected to drive innovation in generative AI and agentic AI applications.

A 2024 IBM report found that more than three in four executives surveyed (77 percent) say generative AI is market-ready, up from just 36 percent in 2023. With this push to put AI into production comes an increased need for compute and data-intensive technologies. The collaboration between IBM and NVIDIA will enable IBM to provide hybrid AI solutions that take advantage of open technologies and platforms while also supporting data management, performance, security, and governance.

"IBM is focused on helping enterprises build and deploy effective AI models and scale with speed," said Hillery Hunter, CTO and General Manager of Innovation, IBM Infrastructure. "Together, IBM and NVIDIA are collaborating to create and offer the solutions, services and technology to unlock, accelerate, and protect data – ultimately helping clients overcome AI's hidden costs and technical hurdles to monetize AI and drive real business outcomes."

"AI agents need to rapidly access, fetch and process data at scale, and today, these steps occur in separate silos," said Rob Davis, vice president, Storage Networking Technology, NVIDIA. "The integration of IBM's content-aware storage with NVIDIA AI orchestrates data and compute across an optimized network fabric to overcome silos with an intelligent, scalable system that drives near real-time inference for responsive AI reasoning."

To learn more about IBM's presence at GTC, please visit https://www.nvidia.com/gtc/session-catalog/?search.suggestedaudiencelevel=1732117107498003nOoA&search=ibm#/

Red Hat Enhances Security and Virtualization Experience with Latest Version of Red Hat OpenShift

Red Hat Enhances Security and Virtualization Experience with Latest Version of Red Hat OpenShift

Red Hat, Inc., the world's leading provider of open source solutions, today announced the general availability of Red Hat OpenShift 4.18, the latest version of the industry’s leading hybrid cloud application platform powered by Kubernetes. Red Hat OpenShift 4.18 introduces new features and capabilities designed to streamline operations and security across IT environments and deliver greater consistency to all applications, from cloud-native and AI-enabled to virtualized and traditional.

According to the Gartner® press release Top Trends Impacting Infrastructure and Operations for 2025, revirtualization/devirtualization is one of the top trends facing organizations for 2025. As shifts in the virtualization market require organizations to reevaluate their virtualized infrastructure and strategies, for many it is an opportunity to implement technologies that will both deliver on their current IT requirements and help them meet the needs of tomorrow. The latest enhancements to Red Hat OpenShift are designed to simplify the management of virtual machines and containers while providing organizations with a common infrastructure to bring their generative AI (gen AI) plans to life.

Enhanced virtualization experience

Red Hat OpenShift 4.18 introduces new virtualization enhancements that improve networking, simplify storage migration, and streamline VM management. These updates reduce operational complexity, enhance flexibility, and improve resource efficiency, making it easier to manage and adapt virtualized environments as needs evolve.

VM-friendly networking provides support for common VM networking use cases with the general availability of user-defined networks, making it easier for users to get their virtualization platform up and running. Also available with OpenShift on AWS and Red Hat OpenShift Service on AWS, this allows users to have similar networking capabilities for secondary networks on AWS as they do on-premises, allowing for more hybrid cloud flexibility.

VM storage migration, available as a technology preview, now includes additional enhancements that allow for non-disruptive movement of data between storage devices and storage classes while a VM is running, enabling users to be more agile as storage needs change.

Tree-view navigation, available as a technology preview, enables users to logically group VMs into folders for more granular organization. Additionally, with logical grouping, also available as a technology preview, users can navigate between VMs more quickly and easily with a single click.

Red Hat OpenShift 4.18 also enhances user-defined networks with Border Gateway Protocol (BGP), which improves segmentation and supports advanced use cases like VM static IP assignment, live migration and stronger multi-tenancy.

Extending choice for greater hybrid cloud innovation

Red Hat OpenShift 4.18 expands support to additional public cloud providers, providing users with increased flexibility for how and where they choose to run their workloads. Red Hat OpenShift now supports bare-metal deployments on Google Cloud and Oracle Cloud Infrastructure. Additionally, for users looking for virtualization in the public cloud, Red Hat OpenShift Virtualization is now available on Oracle Cloud Infrastructure as a technology preview.

Simplified operations for security

Red Hat OpenShift 4.18 introduces new security features designed to help drive more resilient operations while decreasing potential risks. The Secret Store Container Storage Interface (CSI) driver is now generally available and provides users with a vendor-agnostic solution for managing credentials and sensitive information for applications. Workloads on Red Hat OpenShift can access external secrets managers without storing secrets on the cluster, enhancing overall security hygiene and simplifying credential management. This allows clusters to remain unaware of secrets, thereby further reducing risk. Additionally, the Secret Store CSI driver enhances complementary solutions, such as OpenShift GitOps and OpenShift Pipelines, by enabling them to consume secrets from an external secrets manager in a more secure way.

Availability

Red Hat OpenShift 4.18 is now generally available. More information, including how to upgrade to the latest version, is available here.

Mike Barrett, vice president and general manager, Hybrid Cloud Platforms, Red Hat
Many organizations have reached an inflection point with their virtualized infrastructure, needing to make decisions quickly on their future direction. Red Hat OpenShift meets today’s virtualization needs and offers a simplified pathway to migration, but also enables organizations to keep an eye on the future via application modernization. With Red Hat OpenShift, organizations are able to protect their traditional investments while adopting a platform that enables them to seamlessly transition to an AI future.

Additional Resources


Learn more about Red Hat OpenShift 4.18

Read the blog about Red Hat OpenShift 4.18

What’s new for developers in Red Hat OpenShift 4.18

About Red Hat, Inc.

Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers integrate new and existing IT applications, develop cloud-native applications, standardize on our industry-leading operating system, and automate, secure, and manage complex environments. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500. As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future.

L&T-Cloudfiniti Onboards Its First Major Customer

L&T-Cloudfiniti Onboards Its First Major Customer

L&T-Cloudfiniti, the Data Centre business initiative of Larsen & Toubro, has onboarded its first major customer at the state-of-the-art hyperscale data centre located in Sriperumbudur near Chennai.

Cloudfiniti’s Sriperumbudur Data Centre has a built-in capacity of 30 MW, of which 12 MW of colocation-ready capacity is live across two floors. The client, a leading cloud service provider, has rented as much as 6 MW of IT load capacity, consisting of high-density racks spread over an entire floor, along with bulk bandwidth, marking a major customer win for Cloudfiniti at the very start.
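For context, a quick back-of-the-envelope sketch (using only the capacity figures quoted above) shows what the anchor deal represents as a share of the facility:

```python
# Capacity figures quoted in the announcement, in MW.
built_capacity = 30   # total built-in capacity at Sriperumbudur
live_capacity = 12    # colocation-ready capacity currently live
rented_load = 6       # IT load rented by the anchor client

share_of_live = rented_load / live_capacity
share_of_built = rented_load / built_capacity

print(f"Share of live capacity rented: {share_of_live:.0%}")    # 50%
print(f"Share of built capacity rented: {share_of_built:.0%}")  # 20%
```

In other words, the single anchor client has taken half of the live colocation capacity and a fifth of the total built capacity.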

The 10-year contract tenure underscores the trust the client has reposed in Cloudfiniti’s high-tech capabilities, cutting-edge infrastructure, and the strategic location of the data centre.

“This deal marks the beginning of many such collaborations and acts as a testament to our commitment to delivering world-class colocation and cloud services to businesses across the spectrum. In the days to come, we are confident of redefining India’s data centre landscape with our fast, scalable, and reliable solutions,” said Seema Ambastha, Chief Executive – L&T-Cloudfiniti.

KPMG Invests $100 Mn in Its Alliance With Google Cloud

KPMG Invests $100 Mn in Its Alliance With Google Cloud

KPMG has announced a $100 million investment in its alliance with Google Cloud to accelerate the adoption of generative AI, data analytics, and cybersecurity among Fortune 500 companies and global enterprises.

This expanded partnership is expected to drive $1 billion in incremental growth for KPMG.

The investment will focus on developing new solutions to help clients solve complex business challenges, with an initial emphasis on data modernization and responsible AI adoption in sectors like consumer and retail, healthcare, and financial services.

KPMG and Google Cloud announced their alliance in April 2024 when KPMG established a Google Cloud Center of Excellence (CoE) to align its product development, industry expertise, and technical resources for enterprises. 

KPMG said that bookings for KPMG's Google Cloud practice have increased tenfold over the past two years, reflecting the growing demand for cloud and AI solutions.

The alliance will bring Vertex AI and Gemini models to financial services clients, helping automate processes like fraud detection, financial crime detection, and commercial lending.

Besides this alliance with Google, KPMG has also formed alliances with other major cloud service providers to enhance its AI and digital solutions offerings. For instance, in July 2023, KPMG announced a $2 billion commitment over five years to expand its AI and cloud services through a partnership with Microsoft. This collaboration aims to integrate Microsoft's AI tools, such as Azure OpenAI Service and Azure AI Search, into KPMG's proprietary generative AI tool.

Additionally, other Big Four accounting firms like PwC, Deloitte, and EY have also built partnerships with various tech giants, including Google Cloud, Microsoft, and Amazon Web Services (AWS), to leverage AI and other digital solutions for their clients.

L&T To Acquire 21% Stake in E2E Networks for ₹1,407 Crore

L&T To Acquire 21% Stake in E2E Networks for ₹1,407 Crore

Larsen & Toubro (L&T) has announced it will acquire a 21% stake in E2E Networks for ₹1,407 crore. This strategic move aims to bolster L&T's presence in the rapidly expanding cloud and AI sectors.

The acquisition will be completed in two parts: L&T will invest ₹1,079.27 crore for a 15% stake through a preferential allotment and ₹327.75 crore for an additional 6% stake through a secondary acquisition.
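The two tranches reconcile with the headline figure; a quick check of the reported deal arithmetic:

```python
# Deal structure as reported: two tranches summing to the headline figure.
preferential_allotment = 1079.27  # ₹ crore for the 15% primary stake
secondary_acquisition = 327.75    # ₹ crore for the additional 6% stake

total_outlay = preferential_allotment + secondary_acquisition
total_stake = 15 + 6  # percent

print(f"Total: ₹{total_outlay:,.2f} crore for a {total_stake}% stake")
# The tranches sum to roughly the headline ₹1,407 crore for 21%.
```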

Post-acquisition, L&T will have the right to nominate up to two directors on E2E Networks' board, ensuring they have a say in the company's strategic direction.

Alongside the acquisition, L&T will enter into a software license agreement, reseller agreement, and colocation agreement with E2E Networks.

This partnership is expected to accelerate digital transformation across various industries in India by integrating E2E Networks' cloud and AI cloud platforms with L&T's expertise in data center management and cloud solutions.

E2E Networks' shares surged by 5% following the announcement, reflecting positive market sentiment about the deal.

Post-acquisition, E2E Networks' promoters, Tarun Dua and Srishti Baweja, will still hold a significant stake in the company.

This collaboration is expected to foster a technology-driven, sustainable future for India by promoting the adoption of GenAI solutions and enhancing cloud services.

Infosys To Rake In $100 Mn from Coca-Cola's $1.1 Bn Cloud Migration Deal with Microsoft

Infosys To Rake In $100 Mn from Coca-Cola's $1.1 Bn Cloud Migration Deal with Microsoft

Infosys is set to rake in over $100 million as a key supporting partner in Coca-Cola's $1.1 billion cloud migration deal with Microsoft. This partnership, announced in April, involves Coca-Cola migrating its operations to Microsoft's Azure cloud platform, with Infosys playing a significant role in the process.

Coca-Cola’s $1.1 billion cloud migration deal with Microsoft is a five-year strategic partnership that aims to accelerate Coca-Cola’s adoption of cloud and generative AI technologies. The collaboration will leverage Microsoft’s Azure cloud platform and its generative AI capabilities.

This deal highlights the growing importance of cloud and AI technologies for global enterprises and underscores the strategic role Indian IT service providers like Infosys play in these major technology transformations.

Coca-Cola plans to migrate its applications and workloads to Microsoft Azure. This includes exploring innovative AI use cases across various business functions, such as marketing, manufacturing, and supply chain management. The partnership will focus on developing and testing new AI-powered solutions, including the Azure OpenAI Service and Copilot for Microsoft 365. These technologies are expected to enhance workplace productivity, streamline operations, and foster innovation.

Coca-Cola's European bottling partner, Coca-Cola Europacific Partners PLC, disclosed in a recent filing with the US Securities and Exchange Commission (SEC) that it has committed €25 million (approximately $27 million) to Infosys for cloud migration services in the Europacific region.

Given Infosys' involvement in this and potentially other geographies, industry experts believe the company could surpass the $100 million mark from the broader Coca-Cola–Microsoft deal.

Coca-Cola’s migration to the Azure cloud will involve its core operations and major independent bottling partners worldwide. This move is part of a broader effort to align Coca-Cola’s technology strategy with cutting-edge innovations.

As a key supporting partner in this deal, Infosys will assist with cloud strategy, migration execution, application modernization, security, and ongoing support.

Post-migration, Infosys will provide ongoing support and optimization services to ensure that Coca-Cola’s cloud environment remains efficient, secure, and cost-effective.

AMD Acquires ZT Systems for $4.9 Billion To Expand Its Data Center AI Capabilities

AMD Acquires ZT Systems for $4.9 Billion To Expand Its Data Center AI Capabilities

On Monday, AMD announced that it has signed a definitive agreement to acquire ZT Systems, a leading provider of AI infrastructure for the world’s largest hyperscale computing companies, in a cash and stock transaction valued at $4.9 billion, inclusive of a contingent payment of up to $400 million based on certain post-closing milestones.

AMD expects the transaction to be accretive on a non-GAAP basis by the end of 2025.

This acquisition is expected to significantly enhance AMD's capabilities in the data center and AI markets. By integrating ZT Systems' expertise in custom server solutions, AMD aims to provide more comprehensive and innovative solutions to meet the growing demands of cloud and AI infrastructure.

This acquisition also positions AMD to better compete with other major players in the industry, such as Intel and NVIDIA, by expanding its market reach and strengthening its product offerings.

Upon completion of the acquisition, ZT Systems will join the AMD Data Center Solutions Business Group. ZT CEO Frank Zhang will lead the manufacturing business and ZT President Doug Huang will lead the design and customer enablement teams, both reporting to AMD Executive Vice President and General Manager Forrest Norrod.

ZT Systems is a prominent provider of server solutions, specializing in creating cloud-enabling server infrastructure for leading cloud and telecom providers. They design, manufacture, and deploy custom solutions that balance cost, capability, and creativity to meet complex server needs.

Founded in 1994 by Frank Zhang and based in New Jersey, ZT Systems focuses on hyperscale cloud computing, cloud storage, artificial intelligence, and machine-to-machine transactions.

Initially, ZT Systems focused on providing custom server solutions for various industries. Their emphasis on quality and customization helped them build a strong reputation. Over the years, ZT Systems expanded its capabilities to address the needs of hyperscale cloud computing, AI, and telecom providers. The company leveraged its engineering expertise and global manufacturing capabilities to deliver high-performance, cost-effective solutions. Headquartered in Secaucus, New Jersey, the company continuously innovated, adapting to the rapidly changing technology landscape. It focused on developing solutions for complex compute, storage, and accelerator needs, ensuring they stayed ahead of industry trends.

In addition, ZT Systems formed strong partnerships with leading technology suppliers like NVIDIA and Intel, enhancing their ability to provide cutting-edge server solutions.

Last year in October, ZT Systems announced the acquisition of a new manufacturing site in the Greater Austin, Texas area. This facility is expected to bolster their production capabilities and support the growing demand for their advanced server solutions.

Accenture Estimates That AWS Can Help Indian Organisations Reduce Associated Carbon Emissions by Up to 99% Compared to On-Premises

Accenture Estimates That AWS’s Global Infrastructure is Up to 4.1 Times More Efficient Than On-Premises

AWS can help Indian organisations reduce carbon emissions of AI workloads

New study estimates workloads optimised on Amazon Web Services (AWS) can help organisations in India reduce associated carbon emissions by up to 99% compared to on-premises

A new study commissioned by Amazon Web Services (AWS) and completed by Accenture shows that an effective way to minimise the environmental footprint of leveraging Artificial Intelligence (AI) is by moving IT workloads from on-premises infrastructure to AWS cloud data centres in India and around the globe. Accenture estimates that AWS’s global infrastructure is up to 4.1 times more efficient than on-premises. For Indian organisations, the total potential carbon reduction opportunity for AI workloads optimised on AWS is up to 99% compared to on-premises data centres.

The research states that simply utilising AWS data centres for compute-heavy, or AI, workloads in India yields a 98% reduction in carbon emissions compared to on-premises data centres. This is credited to AWS’s utilisation of more efficient hardware (32%), improvements in power and cooling efficiency (35%), and additional carbon-free energy procurement (31%). Further optimising on AWS by leveraging purpose-built silicon can increase the total carbon reduction potential of AI workloads to up to 99% for Indian organisations that migrate to and optimise on AWS.
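The three factors cited for the compute-heavy case can be tallied directly; a simple sketch using the study's reported figures:

```python
# Reported contributions to the ~98% carbon-emission reduction for
# compute-heavy workloads moved to AWS data centres in India.
contributions = {
    "more efficient hardware": 32,
    "power and cooling efficiency": 35,
    "carbon-free energy procurement": 31,
}

baseline_reduction = sum(contributions.values())  # percentage points
print(f"Combined reduction: {baseline_reduction}%")
# Per the study, optimising further with purpose-built silicon
# lifts the total reduction potential to up to 99%.
```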

Optimizing workloads on AWS can lower customers' associated carbon footprint by up to 99%

Carbon emissions reduction and energy efficiency by moving to AWS


“Considering 85% of global IT spend by organisations remains on-premises, a carbon reduction of up to 99% for AI workloads optimised on AWS in India is a meaningful sustainability opportunity for Indian organisations,” said Jenna Leiner, Head of Environment Social Governance (ESG) and External Engagement, AWS Global. “As India accelerates towards its US$1 trillion digital opportunity and encourages investments into digital infrastructure, sustainability innovations and minimising IT-related carbon emissions will be critical in also helping India meet its net-zero emissions by 2070 goal. This is particularly important given the rising adoption of AI. AWS is constantly innovating for sustainability across our data centres — optimising our data centre design, investing in purpose-built chips, and innovating with new cooling technologies — so that we continuously increase energy efficiency to serve customer compute demands.”

“This research shows that AWS's focus on hardware and cooling efficiency, carbon-free energy, purpose-built silicon, and optimized storage can help organizations reduce the carbon footprint of AI and machine learning workloads,” said Sanjay Podder, global lead for Technology Sustainability Innovation at Accenture. “As the demand for AI continues to grow, sustainability through technology can play a crucial role in helping businesses meet environmental goals while driving innovation.”

Sustainable chip technology innovation – purpose-built silicon

One of the most visible ways AWS is innovating for energy efficiency is through the company’s investment in AWS chips. Launched in 2018, the custom AWS-engineered general purpose processor, Graviton, was the first-of-its-kind to be deployed at scale by a major cloud provider. The latest Graviton4 offers four times the performance of Graviton, and while Graviton3 uses 60% less energy for the same performance as comparable Amazon EC2 instances (where the compute happens in a data centre), Graviton4 is even more energy efficient.

AWS customers are also benefiting from the carbon reduction potential of Graviton. Paytm, India’s leading payments and financial services distribution platform, saw a reduction in workload carbon intensity after adopting Graviton processors, reporting an estimated decrease of up to 47% in carbon emissions per transaction. Similarly, IBS Software, a leading SaaS solutions provider to the global travel industry, reported that in addition to improving performance and reducing cost by adopting Graviton processors, the company saw a 40% reduction in carbon emissions per instance hour.

Running generative AI applications in a more sustainable way requires innovation at the silicon level with energy-efficient hardware. To optimise performance and energy consumption, AWS developed purpose-built silicon like the AWS Trainium chip and AWS Inferentia chip to achieve significantly higher throughput than comparable accelerated compute instances. AWS Trainium cuts the time taken to train generative AI models, in some cases from months to hours. This means building new models requires less money and power, with energy-consumption reductions of up to 29%, nearly a third. AWS Inferentia is AWS’s most power-efficient machine learning inference chip. The AWS Inferentia2 machine learning accelerator delivers up to 50% higher performance per watt and can reduce costs by up to 40% against comparable instances. These purpose-built accelerators enable AWS to efficiently execute AI models at scale. This translates to a reduced infrastructure footprint for similar workloads, resulting in enhanced performance per watt of power consumption.

Improving energy efficiency across AWS infrastructure

Through innovations in engineering, from electrical distribution to cooling techniques, AWS's infrastructure is able to operate closer to peak energy efficiency. AWS optimises resource utilisation to minimise idle capacity and continuously improves the efficiency of its infrastructure. For example, AWS removed the large central Uninterruptible Power Supply (UPS) from its data centre design in favour of small battery packs and custom power supplies integrated into every rack, which has improved power efficiency and further increased availability. Every time power is converted from one voltage to another, or from AC to DC and vice versa, some power is lost in the process. By eliminating the central UPS, AWS is able to reduce the number of these conversions, and it has also optimised rack power supplies to reduce energy loss in the final conversion. Combined, these changes reduce energy conversion loss by about 35%.
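The reason removing a conversion stage helps is that overall delivery efficiency is the product of per-stage efficiencies, so every stage compounds the loss. A minimal sketch with assumed illustrative stage efficiencies (not AWS's actual figures):

```python
# Overall power-delivery efficiency is the product of per-stage efficiencies,
# so dropping a conversion stage shrinks the compounded loss.
# Stage efficiencies below are assumed illustrative values, not AWS data.
from math import prod

with_central_ups = [0.98, 0.96, 0.97, 0.96]  # e.g. transformer, UPS AC->DC, UPS DC->AC, rack PSU
rack_level_battery = [0.98, 0.97, 0.98]      # fewer stages once the central UPS is removed

loss_before = 1 - prod(with_central_ups)
loss_after = 1 - prod(rack_level_battery)
print(f"Conversion loss: {loss_before:.1%} -> {loss_after:.1%}")
```

With these assumed stages, losses drop from roughly 12% to roughly 7% of delivered power; the actual ~35% loss reduction depends on the real efficiencies of each stage removed or optimised.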

After the power consumed by server equipment itself, cooling is one of the largest sources of energy use in AWS data centres. To increase efficiency, AWS uses different cooling techniques, including free air cooling depending on the location and time of year, as well as real-time data to adapt to weather conditions. Implementing these innovative cooling strategies is more challenging at the smaller scale of a typical on-premises data centre. AWS's latest data centre design seamlessly integrates optimised air-cooling solutions alongside liquid-cooling capabilities for the most powerful AI chipsets, such as the NVIDIA Grace Blackwell Superchips. This flexible, multimodal cooling design allows AWS to extract maximum performance and efficiency whether running traditional workloads or AI models.

According to the study, AWS's additional carbon-free energy procurement in India contributes a 31% reduction in carbon emissions for compute-heavy workloads and a 44% reduction for storage-heavy workloads. In line with Amazon's commitment to achieving net-zero carbon emissions across all operations by 2040, AWS is rapidly transitioning its global infrastructure to match electricity use with 100% carbon-free energy. Amazon met its 100% renewable energy goal seven years ahead of schedule: in 2022 and 2023, 100% of the electricity consumed by AWS data centres in India was matched with renewable energy sources procured in country. This is due to Amazon's investment in 50 renewable energy projects in India with an estimated 1.1 gigawatts of capacity, enough to power more than 1.1 million homes in New Delhi each year.
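A capacity-to-homes claim like this converts installed capacity into annual energy via a capacity factor, then divides by per-home consumption. A back-of-the-envelope sketch, where the capacity factor and per-home consumption are assumed illustrative values rather than figures from the study:

```python
# Back-of-the-envelope check of a "1.1 GW powers X homes" style claim.
# Capacity factor and per-home consumption are assumed values, not study data.

capacity_gw = 1.1
capacity_factor = 0.25        # assumed average for a wind/solar mix
hours_per_year = 8760

annual_gwh = capacity_gw * capacity_factor * hours_per_year   # energy generated per year
home_kwh_per_year = 2000      # assumed annual consumption of a New Delhi home

homes_powered = annual_gwh * 1_000_000 / home_kwh_per_year    # GWh -> kWh, then per home
print(f"~{homes_powered / 1e6:.1f} million homes")  # → ~1.2 million homes
```

Under these assumptions the 1.1 GW portfolio covers roughly 1.2 million homes, consistent with the "more than 1.1 million" figure; real estimates depend on the projects' actual capacity factors.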

Sirius Digitech, a JV of Adani and Sirius, Acquires Noida-based Coredge.io

Sirius Digitech, a joint venture between the Adani Group and Sirius International Holding, has acquired Noida-based Coredge.io Private Limited, Adani Enterprises announced on Wednesday. Coredge.io is a sovereign AI and cloud platform company that offers secure and compliant cloud services for AI applications, ensuring data sovereignty. Its services are available across Japan, Singapore, and India.

The acquisition will enable Sirius Digitech to provide cloud services that empower organizations to leverage sovereign cloud innovations while retaining sensitive data within national borders. Coredge.io's expertise positions it as a leader in the field of sovereign cloud technology, and the move aligns with the growing demand, driven by artificial intelligence, for computation and sovereign data stacks.

Founded as a bootstrapped company in 2020 by Arif Khan, a JNTU graduate, Coredge.io has quickly expanded its client base across geographies including Japan, Singapore and India. Coredge aims to capitalize on the trillion-dollar global opportunity for sovereign cloud. Its expertise in accelerating hyper-local cloud service providers with stringent data sovereignty and compliance measures has positioned it as a leader in the field.

“Partnering with Sirius marks an exciting new chapter for our sovereign AI and cloud platform business, both in India and globally,” said Arif Khan, CEO of Coredge.io. “Together, we can accelerate the development and delivery of advanced AI services while upholding security, privacy and digital sovereignty principles, helping customers across the globe drive technological transformation while complying with their data ethics principles.”

Arif Khan, Founder & CEO of Coredge.io, is also the founder of ParserLab and a co-founder of VoerEir. He previously served as Chief Enterprise Architect at Ericsson.

Coredge aims to build the complete solution stack for sovereign data centers, spanning everything from bare-metal servers to services such as Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) built on open-source technologies. This will enable Sirius Digitech to offer Machine Learning as a Service (MLaaS) as applications are built on its infrastructure.
