
SoftBank’s Graphcore Picks Bengaluru for £1bn AI Campus, 500 Jobs Incoming

Strategic Expansion to Power Next-Gen AI Talent and Semiconductor Innovation in India

Graphcore, now a wholly owned subsidiary of SoftBank Group, is making a major move in India with the launch of a new AI Engineering Campus in Bengaluru. Here's a quick breakdown of the announcement:

Graphcore's £1bn India Expansion
  • Location: Bengaluru, India
  • Investment: Up to £1 billion over the next 10 years
  • Jobs Created: 500 new semiconductor roles
  • Immediate Hiring: First 100 roles already open, spanning:
    • Silicon Logical Design
    • Physical Design
    • Verification
    • Characterization
    • Bring-up

This campus will be central to Graphcore’s work on next-generation AI computing infrastructure and aligns with SoftBank’s broader ambition to lead in Artificial Super Intelligence (ASI) platforms.

Graphcore: AI Chip Innovator

  • Founded: 2016 in Bristol, UK by Nigel Toon and Simon Knowles
  • Industry: Semiconductors and AI hardware
  • Core Product: Intelligence Processing Unit (IPU) — a novel processor architecture designed specifically for machine learning workloads
  • Mission: To enable innovators to build next-generation AI applications and democratize access to machine intelligence
  • Ownership: Now a wholly owned subsidiary of SoftBank Group Corp, continuing to operate under the Graphcore name

Graphcore competes with companies such as Nvidia in the AI compute market, and its IPU architecture is notable for holding entire ML models in on-chip memory, a departure from traditional GPU-based systems that stream weights in from external DRAM.
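
To make the programming model concrete, here is a minimal sketch of how a PyTorch model is offloaded to an IPU with Graphcore's PopTorch library. The model, sizes, and options below are illustrative assumptions, not anything taken from the announcement:

```python
import torch
import poptorch  # Graphcore's PyTorch integration, shipped with the Poplar SDK

# A deliberately tiny model, purely for illustration.
class TinyClassifier(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(128, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, 10),
        )

    def forward(self, x):
        return self.net(x)

model = TinyClassifier()
opts = poptorch.Options()  # defaults to a single IPU

# PopTorch compiles the whole graph ahead of time for the IPU;
# weights and activations then live in the processor's on-chip SRAM.
ipu_model = poptorch.inferenceModel(model, opts)
logits = ipu_model(torch.randn(8, 128))  # first call compiles, then runs on-device
print(logits.shape)  # torch.Size([8, 10])
```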

AI Accelerator Comparison (2025) 

| Feature / Chip | Graphcore Bow IPU | Nvidia Blackwell B200 GPU | Google TPU v6e (Trillium) | AMD MI350 GPU |
| --- | --- | --- | --- | --- |
| Architecture | Massively parallel IPU tiles | GPU with Transformer Engine | Custom ASIC for ML workloads | GPU with unified memory |
| Memory | 900 MB on-chip SRAM per IPU | 180 GB HBM3e per GPU | 32 GB HBM per chip | 128 GB HBM3e |
| Bandwidth | ~1.5 TB/s (system level) | Up to 8 TB/s | 1.6 TB/s per chip | ~5.2 TB/s |
| Compute (FP16) | ~350 TFLOPS (system level) | 4.5 PFLOPS | 918 TFLOPS (BF16) | ~2.5 PFLOPS |
| Compute (INT8) | Not optimized | 9 PFLOPS | 1.836 PFLOPS | ~5 PFLOPS |
| Scalability | Bow Pods (3D wafer-on-wafer IPUs) | DGX B200 clusters | 256-chip TPU pods | MI350 clusters |
| Target workloads | Sparse ML, graph networks | Transformer-based LLMs | Large-scale ML training | HPC + AI inference |
| Power efficiency | High for sparse workloads | Improved over H100 | Optimized for datacenter | Competitive with Nvidia |
| Deployment | Graphcore IPU systems | Nvidia DGX platforms | Google Cloud TPU pods | Enterprise GPU servers |
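
One way to read the headline numbers together is the roofline "ridge point": peak compute divided by memory bandwidth gives the arithmetic intensity, in FLOPs per byte, a workload needs before a chip becomes compute-bound rather than bandwidth-bound. Here is a back-of-the-envelope sketch using only the figures from the table above; the calculation itself is a generic rule of thumb, not something from the announcement:

```python
# Roofline ridge point = peak FLOP/s / memory bandwidth (bytes/s).
# Since 1 TFLOPS / 1 TB/s = 1 FLOP/byte, the units cancel neatly.
chips = {
    # name: (peak FP16/BF16 compute in TFLOPS, bandwidth in TB/s) -- from the table
    "Graphcore Bow IPU (system)": (350, 1.5),
    "Nvidia B200":                (4500, 8.0),
    "Google TPU v6e":             (918, 1.6),
    "AMD MI350":                  (2500, 5.2),
}

for name, (tflops, tbps) in chips.items():
    ridge = tflops / tbps  # FLOPs/byte at which compute, not memory, becomes the limit
    print(f"{name:>28}: ~{ridge:.0f} FLOPs/byte to saturate compute")
```

This is only a first-order comparison: real utilization depends on kernel shapes, sparsity, precision, and interconnect, and the IPU's on-chip SRAM gives it far higher effective bandwidth than the system-level figure used here.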
