
Cisco Unveils Quantum Switch Prototype, Paving Way for Scalable Quantum Networks


Cisco has unveiled its Universal Quantum Switch prototype, a breakthrough in quantum networking that enables seamless connectivity between quantum systems from different vendors, operating at room temperature over standard telecom fiber. In proof-of-concept tests the switch preserved quantum information with less than 4% loss in fidelity, marking a critical step toward scalable quantum networks.

Key Highlights of Cisco’s Quantum Switch

  • Launch Date: April 23, 2026
  • Prototype Name: Cisco Universal Quantum Switch
  • Function: Routes quantum information between systems while preserving encoding and entanglement fidelity
  • Performance: Proof-of-concept experiments showed <4% degradation in quantum information fidelity
  • Compatibility: Works across all major encoding modalities (polarization, time-bin, frequency-bin, path)
  • Operating Conditions: Functions at room temperature using standard telecom fiber

Why This Matters

  • Scalability: Networking bridges the gap from hundreds to millions of qubits
  • Vendor Interoperability: Eliminates compatibility issues across manufacturers
  • Cost Efficiency: Room-temperature operation reduces infrastructure expense
  • Applications: Healthcare, finance, aerospace simulations

Comparative Context

Feature | Cisco Quantum Switch | Conventional Quantum Hardware
Encoding Compatibility | Universal (all major modalities) | Limited, vendor-specific
Fidelity Loss | <4% | Often >10%
Operating Temperature | Room temperature | Cryogenic cooling required
Infrastructure | Standard telecom fiber | Specialized quantum channels
Business Readiness | Prototype stage, scalable vision | Early-stage, fragmented systems
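
As a rough illustration (not part of Cisco's announcement), the quoted sub-4% figure can be read as a per-switch loss; assuming losses simply compound multiplicatively across hops, a short Python sketch shows how quickly end-to-end fidelity would fall on a multi-switch path:

```python
# Rough sketch: cumulative fidelity across a chain of quantum switches,
# assuming each hop independently preserves at least 96% of fidelity
# (the "<4% degradation" figure quoted for the prototype). The
# multiplicative model is an assumption, not Cisco's published method.

def end_to_end_fidelity(per_hop_loss: float, hops: int) -> float:
    """Fidelity remaining after `hops` switches, each losing `per_hop_loss`."""
    return (1.0 - per_hop_loss) ** hops

if __name__ == "__main__":
    for hops in (1, 2, 4, 8):
        f = end_to_end_fidelity(0.04, hops)
        print(f"{hops} hop(s): ~{f:.1%} of original fidelity retained")
```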

Risks & Challenges

  • Prototype Stage: Commercial deployment may take years
  • Integration Complexity: Businesses must adapt infrastructure
  • Regulatory Framework: Standards still evolving globally
  • Security Concerns: Quantum encryption requires rigorous testing

Outlook

Cisco’s Universal Quantum Switch is a milestone in building the “network layer” for the quantum era, enabling interoperability and scalability. For India, such innovations could accelerate adoption in defense, healthcare, and fintech sectors.



Understanding Cisco’s Universal Quantum Switch

Simple Explanation for Everyone

  • Like electricity adapters: Just as you need a plug adapter when traveling, quantum computers need a “switch” to connect when they use different methods.
  • Different languages: Quantum computers speak in different formats (polarization, timing, frequency). Until now, they couldn’t easily connect.
  • Universal connector: Cisco’s switch safely converts quantum information between formats without breaking its delicate properties.
  • Room temperature: Works on standard telecom fiber without expensive cooling systems.

Everyday Analogy

Imagine friends speaking English, Hindi, and Japanese. Normally, they can’t understand each other. The Universal Quantum Switch is like a live translator that instantly converts their words so everyone can communicate smoothly — but instead of languages, it’s converting quantum information.

For a common person: Cisco’s Universal Quantum Switch is the “router” of the quantum era — it connects different quantum computers together, just like today’s internet connects laptops and phones. It’s still a prototype, but it’s a big step toward building a global quantum internet.

NetApp Unveils New High-Performance EF-Series Models


NetApp® (NASDAQ: NTAP), the Intelligent Data Infrastructure company, today announced the release of the next generation of NetApp EF-Series storage systems, built to power the most performance‑intensive workloads at scale. The introduction of EF50 and EF80 helps enterprises and neoclouds meet the growing demands of AI, high-performance computing (HPC), and transactional databases, including in emerging use cases like sovereign AI clouds and AI-powered manufacturing.

“Data is the key component to delivering business value for enterprises, underpinning performance-hungry workloads like AI or databases,” said Sandeep Singh, Senior Vice President and General Manager of Enterprise Storage at NetApp. “As businesses contend with ever-increasing data volumes and performance-intensive applications such as AI model training, AI inferencing and high-performance computing, they need infrastructure that delivers speed, scalability and efficiency without added complexity. NetApp delivers a comprehensive portfolio that addresses every stage of the AI data pipeline, from collecting and preparing data to feeding it to GenAI models that produce business insights. With the new EF-Series systems, purpose-built for extreme performance, we’re enabling customers to deploy and scale high-throughput, low-latency workloads quickly and efficiently, while reducing data center footprint and operational overhead.”

Coupled with high-performance parallel file systems like Lustre or BeeGFS, the new EF50 and EF80 systems accelerate HPC simulations and keep GPUs fully utilized with high-performance scratch space, helping organizations unlock new value and competitive advantage at the right price. Customers ranging from neocloud providers driving AI innovation to movie studios managing massive media libraries will benefit from not only performance and scalability but also robust security measures to safeguard sensitive information and prevent data loss.

The new EF-Series storage systems deliver over 110GBps of read throughput and 55GBps of write throughput, a 250 percent improvement over previous generations. With a power efficiency of 63.7GBps per kW and 1.5PB of storage in 2U, the new systems deliver reliable, high performance with efficient rack density at an affordable cost. With EF-Series, organizations can:
  • Achieve high performance for data-intensive workloads, scaling capacity without sacrificing efficiency or latency
  • Balance budget requirements with high-performance needs to make better decisions faster
  • Simplify management and operational complexity with affordable block storage, streamlined deployment and support from NetApp’s technical experts.
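
For a sense of scale, the quoted figures can be combined into a rough rack-level estimate. The sketch below assumes a standard 42U rack filled only with 2U EF-Series systems and treats the 63.7GBps-per-kW figure as applying at full read throughput; it is back-of-the-envelope arithmetic, not NetApp sizing guidance.

```python
# Back-of-the-envelope rack sizing using the figures quoted above:
# 110 GBps read throughput, 63.7 GBps per kW, 1.5 PB per 2U system.
# The 42U rack height and "fill the rack with EF-Series only" layout
# are illustrative assumptions, not NetApp guidance.

READ_GBPS_PER_SYSTEM = 110.0
EFFICIENCY_GBPS_PER_KW = 63.7
CAPACITY_PB_PER_SYSTEM = 1.5
RACK_UNITS = 42          # assumed standard rack
UNITS_PER_SYSTEM = 2     # each EF-Series system is 2U

systems_per_rack = RACK_UNITS // UNITS_PER_SYSTEM
power_kw_per_system = READ_GBPS_PER_SYSTEM / EFFICIENCY_GBPS_PER_KW

print(f"Systems per rack:     {systems_per_rack}")
print(f"Rack capacity:        {systems_per_rack * CAPACITY_PB_PER_SYSTEM:.1f} PB")
print(f"Rack read throughput: {systems_per_rack * READ_GBPS_PER_SYSTEM:.0f} GBps")
print(f"Implied power (read): ~{systems_per_rack * power_kw_per_system:.1f} kW")
```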
“As we navigate the AI era, many enterprises are finding that they need to maximize their raw performance to extract the most value from their data,” said Clayton Vipond, Senior Solution Architect at CDW. “The refreshed NetApp EF-Series delivers the throughput and capacity businesses need to scale high-powered workloads that transform data into insights and outcomes.”

"NetApp's EF-Series systems give Teradata the storage performance needed to support our most demanding workloads," said Sumeet Arora, Chief Product Officer, Teradata. "We appreciate that NetApp continues to invest in this technology, and with the enhanced performance of the new models, we look forward to exploring opportunities to reduce infrastructure complexity and support the AI and data modernization initiatives our customers care about."

“By delivering a high-performance storage system that supports parallel file systems like Lustre and BeeGFS, NetApp is making its mark as emerging industries, such as neocloud, emerge to support the AI era,” said Simon Robinson, Principal Analyst at Omdia. “Our research validates that AI workloads require a level of raw performance unmatched by any mainstream business workload to date. With the new EF-series systems, NetApp is delivering a solution that addresses the performance needs of large-scale AI projects, whether model training or inference.”

The updated EF-Series builds on decades of durability and reliability from NetApp. With more than 1 million installations, it has a proven track record that customers can rely on.


IISc Unveils Quantum-Safe Crypto Chip Powering IoT Security


Researchers at the Indian Institute of Science (IISc), Bengaluru, have announced that they have developed the first compact, low-power quantum-safe crypto chip for IoT devices, using the SQIsign digital signature scheme. This breakthrough addresses the looming quantum threat while keeping energy consumption and hardware size suitable for constrained IoT systems.

Why This Matters

  • Quantum threat: Future quantum computers could break widely used cryptographic systems like RSA and ECC.
  • IoT vulnerability: Billions of IoT devices rely on lightweight cryptography, making them especially exposed.
  • Quantum-safe solution: Post-quantum cryptography (PQC) algorithms like SQIsign are designed to resist quantum attacks.

Key Features of the IISc Chip

  • Custom ASIC design: First hardware-accelerated implementation of SQIsign signatures.
  • Low power consumption: Optimized for IoT devices with limited battery and processing capacity.
  • Compact size: Suitable for embedding in sensors, wearables, and industrial IoT nodes.
  • Efficient signature verification: Hardware acceleration reduces computational overhead compared to software-only PQC.
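
To make the hardware-offload idea concrete, here is a purely conceptual Python sketch of the control flow an IoT node might use: route signature verification to an on-chip accelerator when one is present, otherwise fall back to a slower software path. All names are hypothetical and the toy verification backend is a stand-in so the example runs; this is not IISc's chip interface and not the SQIsign algorithm.

```python
# Conceptual control-flow sketch only: how an IoT node might route
# firmware-signature verification to an on-chip PQC accelerator when one
# is present, and fall back to software otherwise. Every name below is
# hypothetical -- this is NOT IISc's chip interface, NOT the SQIsign
# algorithm, and the toy "verify" backend exists only so the flow runs.

from dataclasses import dataclass
import hashlib

@dataclass
class SignedUpdate:
    payload: bytes
    signature: bytes   # in reality: a SQIsign signature
    public_key: bytes  # in reality: a SQIsign public key

def _toy_check(msg: bytes, sig: bytes, pk: bytes) -> bool:
    # Stand-in check (a keyed hash), purely so the example executes.
    return hashlib.sha256(pk + msg).digest() == sig

class HypotheticalAccelerator:
    """Stand-in for a memory-mapped crypto block with a verify command."""
    def verify(self, msg: bytes, sig: bytes, pk: bytes) -> bool:
        return _toy_check(msg, sig, pk)   # real hardware: isogeny arithmetic

def software_verify(msg: bytes, sig: bytes, pk: bytes) -> bool:
    return _toy_check(msg, sig, pk)       # real software path: much slower on an MCU

def accept_firmware(update: SignedUpdate, hw=None) -> bool:
    """Accept the update only if its signature verifies (offloaded if possible)."""
    verify = hw.verify if hw is not None else software_verify
    return verify(update.payload, update.signature, update.public_key)

# Toy usage
pk = b"device-public-key"
payload = b"firmware v1.2"
update = SignedUpdate(payload, hashlib.sha256(pk + payload).digest(), pk)
print(accept_firmware(update, hw=HypotheticalAccelerator()))  # True
```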

SQIsign Digital Signature Scheme

  • Isogeny-based cryptography: Relies on mathematical structures resistant to quantum algorithms.
  • Candidate for PQC standards: Being evaluated by global cryptographic bodies for standardization.
  • Advantages: Smaller key sizes compared to lattice-based PQC, making it more efficient for constrained devices.

Comparison: Traditional vs Quantum-Safe IoT Security

Feature | Traditional Crypto (RSA/ECC) | Quantum-Safe (SQIsign ASIC)
Security vs Quantum | Vulnerable | Resistant
Key Size | Small | Moderate (but optimized)
Power Consumption | Low | Low (hardware-accelerated)
Suitability for IoT | High (today) | High (future-proof)
Standardization Status | Mature | Under evaluation

Challenges & Risks

  • Standardization pending: PQC algorithms are still under review; widespread adoption will take time.
  • Integration hurdles: IoT manufacturers must redesign hardware/software stacks to support PQC.
  • Performance trade-offs: Even with hardware acceleration, PQC can be heavier than legacy crypto.

Outlook

  • Near-term: Pilot deployments in IoT devices, especially in critical infrastructure and healthcare.
  • Medium-term (5–10 years): Gradual replacement of RSA/ECC in IoT ecosystems.
  • Long-term: Quantum-safe chips become standard in all connected devices.

Motivair by Schneider Electric Introduces Scalable 2.5MW Cooling Solution for GPUs & AI Workloads


Motivair by Schneider Electric, a leading innovator in liquid cooling technology for digital infrastructure, today introduced a new, industry-leading 2.5MW Coolant Distribution Unit (CDU) designed to cool high-density data centers reliably, at scale.

The MCDU-70 is the highest-capacity CDU available from Motivair, presenting a breakthrough flexible, scalable solution for meeting the rigorous demands of next-generation graphics processing units (GPUs) and gigawatt-scale AI Factories.

Utilizing Schneider Electric’s EcoStruxure software, Motivair’s CDUs operate as a centralized system—meeting today’s cooling requirements with the ability to scale to 10MW+ for next-gen HPC, AI and accelerated computing workloads.


Compact and efficient, the MCDU-70 is the newest addition to Motivair’s CDU line, providing cooling power without compromise by preserving full flow performance and facility pressure at gigawatt scale. Its capacity aligns perfectly with the needs of large-scale facilities, such as NVIDIA Omniverse DSX Blueprint, where deployments target 10MW to reach gigawatt scale. At 2.5 MW each, six MCDU-70s can provide a 4+2 redundancy for these designs, and the unit’s capacity is fit to service NVIDIA’s GPU roadmap for the foreseeable future.

“AI isn’t slowing down. Our solutions are designed to keep pace with chip and silicon evolution—delivering next-gen performance when it matters most,” said Rich Whitmore, CEO & President of Motivair by Schneider Electric. “Data center success now hinges on delivering scalable, reliable, efficient infrastructure solutions that match the next generation of AI Factory deployments. We’re meeting that moment with proven liquid cooling solutions that scale with our customers’ needs.”

Commenting on the broader impact of next-generation cooling infrastructure, Mr. Venkataraman Swaminathan, Vice President – Secure Power, Greater India, Schneider Electric, said, “As AI-driven digital infrastructure continues to scale in both capacity and complexity, resilient and energy-efficient power and cooling solutions are becoming foundational to data center design. Schneider Electric is focused on enabling future-ready digital infrastructure that delivers reliability and performance at scale, while supporting global sustainability and efficiency goals.”

With the addition of the MCDU-70, Schneider Electric’s end-to-end liquid cooling portfolio now offers CDUs ranging from 105kW to 2.5MW, meeting current and future performance requirements. Each CDU is scalable and integrates seamlessly with other units and Schneider Electric’s software to deliver precise and reliable cooling capacity for data center operators. The MCDU-70 is now available to order globally via Schneider Electric’s advanced manufacturing hubs in North America, Europe and Asia. For more information, visit the website.

Altos India Unveils Next-Gen AI Server to Accelerate Enterprise Transformation

Altos India, a subsidiary of Acer Inc., today announced the launch of its latest AI server, Altos BrainSphere™ R680 F7, designed to power the next wave of enterprise AI deployments across India. The new server will be officially available for shipment from September and is engineered to meet the surging demand from enterprises, government, education, and healthcare sectors for high-performance, scalable, and future-ready AI infrastructure.


The Altos BrainSphere™ R680 F7 supports up to 8 NVIDIA GPUs, including RTX PRO 6000 Blackwell Server Edition, H200 NVL, and L40S, along with 6th-generation Intel Xeon processors and DDR5 memory. Its flexible PCIe architecture allows seamless integration of diverse NVIDIA accelerated computing technologies, making it ideal for AI inference, AI model optimization, data analytics, industrial AI, virtualization, visual compute, as well as specialized applications such as AI medical imaging, smart security, and personalized recommendations. This makes it a powerful platform for rapidly building diversified AI workload environments.

The server can also be paired with Altos aiWorks, an AI computing platform that integrates hardware and software resources to simplify deployment, accelerate time-to-market, and support hybrid workload capabilities.

Altos aiWorks now includes advanced functionalities such as model deployment, job scheduling, resource monitoring, inference process management, and comes preloaded with multiple mainstream AI frameworks and models. It also integrates NVIDIA AI Enterprise software, including NIM microservices, enabling enterprises to build scalable, visualized, open, and highly available AI environments tailored to their business needs.

“The Altos BrainSphere™ R680 F7 represents a significant milestone in our mission to bring future-ready AI solutions to Indian enterprises,” said Harish Kohli, President and Managing Director, Acer India. “As AI adoption accelerates across industries, organizations are seeking infrastructure that combines scalability, performance, and ease of deployment. With this launch, Altos India is well-positioned to empower businesses, government, and academia with computing solutions that will shape the next phase of India’s digital transformation.”

This launch further strengthens Altos India’s positioning as a trusted partner for enterprise computing, delivering solutions that align with the country’s growing focus on AI-driven innovation and digital-first growth.

Seagate Introduces Hard Drive Capacities of Up to 36TB, Extending Its HAMR-Based Mozaic 3+ Technology Platform


Seagate Technology Holdings plc (NASDAQ: STX), an innovator of mass capacity data storage, today announced shipments of Exos M hard drive samples to select customers in industry-leading capacities up to 36 terabytes (TB). Based on Mozaic 3+, the company’s breakthrough heat-assisted magnetic recording (HAMR) technology platform, Exos M delivers unprecedented storage scale for large-scale data center deployments.

Key Takeaways

Adopted by Cloud Service Providers: Seagate is currently ramping Exos M to volume shipments on capacity points up to 32TB with a leading cloud service provider. Separately, Seagate is also sampling drives on the Exos M platform of up to 36TB.

Mozaic 3+ and HAMR Innovation: Based on Seagate’s Mozaic 3+ technology platform, the industry’s first implementation of heat-assisted magnetic recording (HAMR), Exos M offers data center operators significant scale, total cost of ownership (TCO), and sustainability advantages, including 300% more storage capacity within the same data center footprint, a 25% cost reduction per terabyte and 60% reduction in power consumption per terabyte.¹

Unrivalled Areal Density: Exos M, powered by the HAMR-based Mozaic 3+ platform, now delivers capacity points up to 36TB through a highly efficient 10-platter product design. Seagate is the only data storage company that can achieve areal densities of 3.6TB per hard drive platter today, with a pathway to increasing per-platter capacity to 10TB.
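
The headline capacities follow directly from the per-platter figures. The short sketch below redoes that arithmetic, including what a 10-platter drive would hold if the stated 10TB-per-platter pathway is reached (the 10-platter assumption for that future drive is ours, not Seagate's):

```python
# Simple arithmetic behind the capacities quoted above: 10 platters at
# 3.6 TB per platter gives today's 36 TB Exos M sample, and the same
# platter count at the stated 10 TB-per-platter pathway would give a
# 100 TB drive. The 10-platter assumption for the future drive is ours.

PLATTERS = 10

def drive_capacity_tb(tb_per_platter: float, platters: int = PLATTERS) -> float:
    return tb_per_platter * platters

print(drive_capacity_tb(3.6))   # 36.0 TB -- today's sampled Exos M
print(drive_capacity_tb(10.0))  # 100.0 TB -- if the 10 TB/platter pathway holds
```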

Infrastructure solutions provider Dell Technologies is among the first customers to adopt Mozaic 3+ and will soon integrate the Exos M 32TB into its high-density storage systems.

“As customers build out their AI factories, they need cost-efficient, scalable and flexible storage engineered to reliably handle the most demanding AI workloads,” said Travis Vigil, SVP, ISG Product Management. “Dell PowerScale with Seagate’s HAMR-enabled Mozaic 3+ technology plays a crucial role in supporting AI use cases like retrieval augmented generation (RAG), inferencing and agentic workflows. Together, Dell Technologies and Seagate are setting the standard for industry-leading AI storage innovation.”

Dave Mosley, Seagate CEO, said:
“We’re in the midst of a seismic shift in the way data is stored and managed. Unprecedented levels of data creation – due to continued cloud expansion and early AI adoption – demand long-term data retention and access to ensure trustworthy data-driven outcomes. From capturing training checkpoints to archiving source-data sets, the more data organizations retain, the more they can validate that their applications are acting as they expect them to – and adjust course as needed.

"Seagate continues to lead in areal density, sampling drives on the Exos M platform of up to 36TB today. Also, we’re executing on our innovation roadmap, having now successfully demonstrated capacities of over 6TB per disk within our test lab environments."

“As the world’s leading producer of exabytes, and the only manufacturer capable of manufacturing 3.6TB per platter hard drives at scale, Seagate is laser-focused on delivering the storage scale required for the applications of the future,” Mosley added.

“As businesses and people everywhere continue to use AI applications, more widespread adoption of AI is creating unprecedented amounts of data. All this data needs to be replicated and retained for the long term,” said Kuba Stolarski, Research Vice President for service provider infrastructure with analyst firm IDC. “Our research shows that hard drives continue to be a critical technology for delivering this scale, with 89% of data stored in the data centers of leading cloud service providers stored via hard drive. We believe Seagate’s progress in areal density innovation positions them well to address increased demand for data storage.”

To learn more about Exos M and the Mozaic 3+ platform, visit Seagate.com.

Seagate is a leader in mass-capacity data storage. We have delivered more than four and a half billion terabytes of capacity over the past four decades. We make storage that scales, bringing trust and integrity to innovations that depend on data. In an era of unprecedented creation, Seagate stores infinite potential. To learn more about how Seagate leads storage innovation, visit www.seagate.com and our blog, or follow us on X, Facebook, LinkedIn, and YouTube.

Footnote: ¹Method: 10TB to 30TB capacity upgrade when comparing Exos X10 to Exos M 30TB Mozaic drive, a common drive capacity needing upgrade at data centers today.

©2025 Seagate Technology LLC. All rights reserved. Seagate, Seagate Technology, Mozaic 3+, Exos, and the Spiral logo are trademarks or registered trademarks of Seagate Technology LLC in the United States and/or other countries. All other trademarks or registered trademarks are the property of their respective owners. When referring to drive capacity, one gigabyte, or GB, equals one billion bytes, one terabyte, or TB, equals one trillion bytes, and one exabyte, or EB, equals one quintillion bytes.

NVIDIA Launches Generative AI Computer at Affordable Price


NVIDIA recently unveiled its most affordable generative AI supercomputer, the Jetson Orin Nano Super Developer Kit. Priced at $249 (approximately ₹21,146), it is accessible to students, enthusiasts, and developers.

The Jetson Orin Nano Super Developer Kit by NVIDIA is a game-changer for those looking to dive into generative AI on a budget. It's designed to support popular generative AI models, which means you can experiment with cutting-edge AI right out of the box.

Jetson Orin Nano Super is an ideal solution for creating LLM chatbots based on retrieval-augmented generation, building a visual AI agent, or deploying AI-based robots.

It offers up to 67 trillion operations per second (TOPS) of AI performance, a significant improvement over its predecessor. It supports popular generative AI models and is ideal for advanced robotics applications, vision AI, and various Internet of Things (IoT) devices.
 

The Jetson Orin Nano Super is compact enough to fit in the palm of your hand. A new software update has boosted its performance from 40 TOPS to 67 TOPS.


The predecessor to the Jetson Orin Nano Super Developer Kit is the Jetson Orin Nano Developer Kit, launched in 2022. It offered 40 trillion operations per second (TOPS) of AI performance. The new version offers 67 TOPS, which is a 1.7x improvement over its predecessor.

The new version, Jetson Orin Nano Super Developer Kit, also benefits from software updates that enhance generative AI performance, making it a more powerful and cost-effective option for developers, students, and hobbyists.

NVIDIA CEO Jensen Huang mentioned that this supercomputer can run everything that their Hyperscale Graphics Extension (HGX) does, including large language models.

The developer kit consists of a Jetson Orin Nano 8GB system-on-module (SoM) and a reference carrier board, providing an ideal platform for prototyping edge AI applications.
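
A quick way to see what the 8GB module can and cannot hold is to estimate model weight footprints at different quantization levels. The parameter counts and bit widths below are illustrative assumptions, and real deployments also need headroom for activations, KV cache and the operating system:

```python
# Rough sizing sketch: will a given LLM's weights fit in the Jetson Orin
# Nano's 8 GB module? The parameter counts and quantization levels below
# are illustrative assumptions, and real deployments also need memory
# for activations, KV cache and the OS, so treat this as an upper bound.

GIB = 1024 ** 3
JETSON_MEMORY_GIB = 8  # shared between CPU, GPU and OS on the 8GB SoM

def weight_footprint_gib(params_billions: float, bits_per_weight: int) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / GIB

for params, bits in [(7, 16), (7, 4), (3, 8), (1.5, 16)]:
    gib = weight_footprint_gib(params, bits)
    fits = "fits" if gib < JETSON_MEMORY_GIB else "does not fit"
    print(f"{params}B params @ {bits}-bit: ~{gib:.1f} GiB of weights ({fits})")
```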

Jetson runs NVIDIA AI software including NVIDIA Isaac for robotics, NVIDIA Metropolis for vision AI and NVIDIA Holoscan for sensor processing. Development time can be reduced with NVIDIA Omniverse Replicator for synthetic data generation and NVIDIA TAO Toolkit for fine-tuning pretrained AI models from the NGC catalog.


NVIDIA’s focus on affordability and performance with the Jetson Orin Nano Super opens up generative AI development to a broader audience, from hobbyists to students and professionals. This democratization of AI technology can lead to rapid innovation and development in various fields.

Apple Developing Its Own In-House Modem for iPhone, To Reduce Dependence on Qualcomm


In a significant move towards reducing its dependence on third-party suppliers, Apple has embarked on an ambitious project to design and develop its own in-house cellular modem. This strategic decision is expected to have far-reaching implications for the tech giant's future products, particularly the iPhone.

For years, Apple has relied on Qualcomm, a leading modem manufacturer, to provide the crucial component that enables iPhones to connect to cellular networks. For example, the iPhone 13 Pro features the Qualcomm X60 modem.

However, the Apple–Qualcomm partnership has been marred by legal disputes and supply chain constraints.

By developing its own modem, Apple aims to:

1. Enhance Performance: An in-house modem will allow Apple to optimize the component's performance, leading to faster data speeds, better battery life, and improved overall user experience.

2. Reduce Dependence: By controlling its own modem design, Apple will no longer be subject to Qualcomm's pricing and supply whims, ensuring a more stable and predictable component pipeline.

3. Foster Innovation: An in-house modem will give Apple the freedom to experiment with new technologies and features, driving innovation and differentiation in its products.

Challenges

While the benefits are clear, developing a cutting-edge modem is a complex task. Apple faces significant technical challenges, including:

1. Engineering Expertise: Modem design requires specialized knowledge and expertise, which Apple will need to acquire or develop in-house.

2. Testing and Validation: Ensuring the modem meets stringent industry standards and works seamlessly across various networks and regions will be a daunting task.

3. Supply Chain Management: Apple will need to establish a reliable supply chain to manufacture and deliver the modems in large quantities.

Timeline and Implications

Apple's in-house modem is expected to debut in 2025, starting with niche iPhone models. However, widespread adoption across all iPhone models may take several years. This development will have significant implications for:

1. Qualcomm: The loss of Apple's business will likely impact Qualcomm's revenue and market share.

2. Competitors: Other smartphone manufacturers may follow Apple's lead, leading to a shift in the modem market landscape.

3. 5G Advancements: Apple's in-house modem could accelerate the development and adoption of 5G technology, driving innovation in areas like IoT, AR, and more.

Overall, Apple's decision to develop its own in-house modem marks a significant milestone in the company's pursuit of self-reliance. While challenges lie ahead, the potential benefits of enhanced performance, reduced dependence, and fostered innovation make this a strategic move worth undertaking.

On the other side, Qualcomm might face increased pressure to innovate and offer more competitive products to retain its market position.


Intel Launches 1st Integrated Optical I/O Chiplet for Future Computing


Intel Corporation has achieved a significant milestone in integrated photonics technology. At the Optical Fiber Communication Conference (OFC) 2024, Intel's Integrated Photonics Solutions (IPS) Group demonstrated the industry's first fully integrated optical compute interconnect (OCI) chiplet co-packaged with an Intel CPU and running live data.

This OCI chiplet represents a leap forward in high-bandwidth interconnect, enabling co-packaged optical input/output (I/O) in emerging AI infrastructure for data centers and high-performance computing applications.

The OCI chiplet is designed to support 64 channels of 32 gigabits per second (Gbps) data transmission in each direction on up to 100 meters of fiber optics. The OCI chiplet addresses AI infrastructure's growing demands for higher bandwidth, lower power consumption, and longer reach, making it a crucial advancement for future computing platforms.
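
Those channel figures translate directly into aggregate bandwidth. A small sketch of the arithmetic (assuming 8 bits per byte and ignoring line-coding and protocol overhead):

```python
# Aggregate bandwidth implied by the figures above: 64 channels at
# 32 Gbps in each direction. The conversion to gigabytes per second
# assumes 8 bits per byte and ignores line coding/protocol overhead.

channels = 64
gbps_per_channel = 32

per_direction_gbps = channels * gbps_per_channel
print(f"Per direction: {per_direction_gbps} Gbps (~{per_direction_gbps / 8:.0f} GB/s)")
print(f"Both directions combined: {2 * per_direction_gbps} Gbps")
```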

OCI Chiplets Vs Traditional Electrical Interconnects


OCI chiplets offer significantly higher bandwidth compared to electrical interconnects. With 64 channels of 32 Gbps data transmission in each direction, they provide ample capacity for data-intensive workloads. Traditional electrical interconnects, such as copper-based traces on circuit boards, have limited bandwidth and can become bottlenecks in high-performance computing systems.

Moreover, OCI chiplets consume less power per bit transmitted. Optical signals experience minimal resistance and don't generate heat like electrical currents do. Electrical interconnects suffer from power losses due to resistance, leading to higher energy consumption.

OCI chiplets support longer reach—up to 100 meters of fiber optics—making them suitable for large-scale data centers. Electrical interconnects are limited by signal degradation over distance, especially at high speeds.

Optical signals travel at the speed of light, resulting in lower latency compared to electrical signals. Electrical interconnects introduce additional latency due to signal propagation delays.

Above all, OCI chiplets are immune to electromagnetic interference (EMI), making them ideal for noisy environments, whereas electrical interconnects can suffer from EMI-induced signal degradation.

In summary, OCI chiplets offer superior performance in terms of bandwidth, power efficiency, reach, and latency, making them a promising solution for future computing systems.

OCI chiplets for AI-based Applications

Recent developments in large language models (LLMs) and generative AI are accelerating the adoption of AI applications. Larger and more efficient machine learning (ML) models will play a key role in addressing the emerging requirements of AI acceleration workloads. The need to scale future computing platforms for AI is driving exponential growth in I/O bandwidth and longer reach to support larger processing unit (CPU/GPU/IPU) clusters and architectures with more efficient resource utilization, such as xPU disaggregation and memory pooling.

Electrical I/O (i.e., copper trace connectivity) supports high bandwidth density and low power, but only offers short reaches of about one meter or less. Pluggable optical transceiver modules used in data centers and early AI clusters can increase reach at cost and power levels that are not sustainable with the scaling requirements of AI workloads. A co-packaged xPU optical I/O solution can support higher bandwidths with improved power efficiency, low latency and longer reach – exactly what AI/ML infrastructure scaling requires.

Cisco and AT&T Announce New Hassle-Free Digital Buying Experience for Cisco’s Newest FWA Devices


AT&T and Cisco have announced a new initiative to streamline the deployment of 5G Fixed Wireless Access (FWA) for businesses. This collaboration introduces a digital buying experience that simplifies the process for businesses to quickly extend connectivity across diverse campus and branch office environments.

The initiative features Cisco’s newest line of cellular gateways, the Meraki MG52 and MG52E, which are the first true-5G, Standalone (SA) capable, discreet FWA devices to offer cloud-managed eSIM technology powered by Cisco IoT Control Center¹. These devices are designed to be paired exclusively with an integrated wireless WAN service experience and zero-touch, instant-on provisioning from AT&T.

Businesses can benefit from:
  • Increased operational efficiency with scalable and flexible management.
  • Minimized downtime and disruptions with seamless, resilient network performance.
  • Greater return on investment with long-lasting, durably designed FWA devices.
  • The flexibility to deploy in difficult-to-reach locations.
  • The ability to deploy branch sites within minutes with instant-on, built-in AT&T 5G connectivity.
With a complimentary 30-day introductory period, businesses can reduce deployment times for branch connectivity through self-service purchasing of AT&T data plans directly from the Cisco Meraki dashboard¹. This offer aims to help businesses lower operational costs, increase scalability, and accelerate provisioning of full-stack Cisco networks to "day-zero" operations. 

IBM Introduces New Power® S1012 Server to Run AI Inferencing Workloads in ROBO Locations Outside Mainstream Datacenter Facilities


IBM has recently announced the expansion of its server portfolio with the introduction of the IBM Power S1012. This new server is designed to enhance AI workloads from the core to the cloud and even to the edge, providing added business value across various industries. The IBM Power S1012 is a 1-socket, half-wide system based on the Power10 processor, and it delivers up to 3X more performance per core compared to the previous Power S812 model.

The server is available in both a 2U rack-mounted and a tower deskside form factor, optimized for edge-level computing. It also offers the lowest entry price point in the Power portfolio to run core workloads for small and medium-sized organizations¹. With the ability to run AI inferencing workloads in remote office/branch office (ROBO) locations, the IBM Power S1012 provides flexibility and a direct connection to cloud services like IBM Power Virtual Server for backup and disaster recovery.

This advancement is particularly significant for industries such as retail, manufacturing, healthcare, and transportation, where deploying workloads at the edge can capitalize on data where it originates. Real-time insights gained through edge computing can offer a competitive advantage, with applications ranging from analyzing customer behavior to monitoring production processes.

IBM Power S1012 will be generally available from IBM and certified Business Partners on June 14, 2024.

Use Case

Let's consider a retail industry scenario as an example use case for edge computing with the IBM Power S1012:

Scenario: Real-Time Inventory Management in Retail

Background:

A large retail chain is looking to improve its inventory management and customer experience by implementing AI at the edge.

Challenge:

The retailer needs to process vast amounts of data from various sources, including point-of-sale systems, online transactions, and IoT sensors on shelves. They require real-time analytics to manage stock levels efficiently and predict future demand.

Solution with IBM Power S1012:

Local Processing: The IBM Power S1012 can be deployed in individual stores to process data locally. This reduces latency, as data doesn't need to be sent to a central data center or cloud for processing.

AI Workloads: The server's AI capabilities enable it to run sophisticated algorithms that analyze purchasing patterns and inventory levels.

Predictive Analytics: By leveraging machine learning models, the Power S1012 can forecast demand and suggest optimal stock replenishment schedules.

Integration with IoT: IoT sensors on shelves send data directly to the local Power S1012 server, which can trigger alerts when items are running low or when there's a mismatch in inventory.
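
As a simple illustration of the kind of logic that would run locally on the store's server, the sketch below flags low stock and sensor/POS mismatches from shelf-sensor counts. The function name and thresholds are hypothetical, not an IBM reference implementation:

```python
# Illustrative sketch of the edge-side shelf logic described above:
# compare IoT shelf-sensor counts against the point-of-sale system's
# expected stock, flagging low stock and inventory mismatches locally
# (no round trip to a central data center). Names and thresholds are
# hypothetical, not an IBM reference implementation.

LOW_STOCK_THRESHOLD = 5      # assumed reorder point
MISMATCH_TOLERANCE = 2       # assumed allowed sensor/POS discrepancy

def shelf_alerts(sensor_counts: dict, pos_counts: dict) -> list[str]:
    alerts = []
    for sku, on_shelf in sensor_counts.items():
        if on_shelf <= LOW_STOCK_THRESHOLD:
            alerts.append(f"LOW STOCK: {sku} ({on_shelf} left)")
        expected = pos_counts.get(sku, on_shelf)
        if abs(on_shelf - expected) > MISMATCH_TOLERANCE:
            alerts.append(f"MISMATCH: {sku} shelf={on_shelf} expected={expected}")
    return alerts

print(shelf_alerts({"milk-1l": 3, "rice-5kg": 40}, {"milk-1l": 4, "rice-5kg": 31}))
```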

Benefits:

Reduced Latency: Real-time processing at the edge ensures immediate insights into inventory levels, leading to quicker decision-making.

Increased Efficiency: Automated stock management reduces overstocking and stockouts, saving costs and improving customer satisfaction.

Enhanced Customer Experience: The system can also provide personalized recommendations to customers based on their shopping history and current in-store promotions.

By utilizing the IBM Power S1012 for edge computing, the retailer can achieve a more responsive and intelligent inventory management system that adapts to changing demands and enhances the overall shopping experience for customers.

Intel Creates Two New AI Initiatives for AI PC Software Developers and Hardware Vendors

Intel Creates Two New AI Initiatives for AI PC Software Developers and Hardware Vendors

Intel has recently launched two new initiatives as part of their AI PC Acceleration Program: the AI PC Developer Program and the inclusion of independent hardware vendors (IHVs) in the program. These initiatives aim to optimize and maximize AI on Intel-based AI PCs, targeting over 100 million devices by 2025.

The AI PC Developer Program is tailored for software developers and ISVs to simplify the adoption of AI technologies. It offers access to tools, workflows, AI-deployment frameworks, and developer kits featuring the latest Intel hardware, including the Intel® Core™ Ultra processor.

For IHVs, the program provides opportunities to prepare and optimize their hardware for Intel AI PCs. They gain access to Intel's Open Labs for technical support and co-engineering during the development phase, ensuring their technology is efficient at launch.

This expansion signifies Intel's commitment to fostering a broad ecosystem for AI development and performance enhancement on PCs.

Developers can join Intel’s AI PC Acceleration Program and learn more about Intel’s global partner network that is working to maximize AI performance in the PC industry.

IHVs and developers can register to join the AI Acceleration Program for IHVs. Intel is working with its hardware partners to innovate and lift the AI PC experience to new heights. Join Intel on the journey to accelerate innovation.

Intel offers a wide variety of toolkits for AI developers to leverage and is bringing over 300 AI-accelerated features to market through 2024 with Intel Core Ultra processors across 230 designs from 12 global original equipment manufacturers.

The AI PC Acceleration Program, announced in October 2023, aims to connect independent hardware vendors and independent software vendors with Intel resources including artificial intelligence toolchains, training, co-engineering, software optimization, hardware, design resources, technical expertise, co-marketing and sales opportunities.

For additional information on Intel’s AI PC, go to Intel's AI PC page.


NVIDIA’s New Ethernet Networking Platform for AI Available Soon


Dell Technologies, Hewlett Packard Enterprise and Lenovo First To Integrate NVIDIA’s New Ethernet Networking Tech for AI

End-to-End Platform Features Latest NVIDIA Spectrum-X Networking, Provides Foundation for Customers to Transform Business With AI

NVIDIA announced that Dell Technologies, Hewlett Packard Enterprise and Lenovo will be the first to integrate NVIDIA Spectrum-X™ Ethernet networking technologies for AI into their server lineups to help enterprise customers speed up generative AI workloads.

Purpose-built for generative AI, Spectrum-X offers enterprises a new class of Ethernet networking that can achieve 1.6x higher networking performance for AI communication versus traditional Ethernet offerings.

The new systems coming from three of the top system makers bring together Spectrum-X with NVIDIA Tensor Core GPUs, NVIDIA AI Enterprise software and NVIDIA AI Workbench software to provide enterprises the building blocks to transform their businesses with generative AI.


“Generative AI and accelerated computing are driving a generational transition as enterprises upgrade their data centers to serve these workloads,” said Jensen Huang, founder and CEO of NVIDIA. “Accelerated networking is the catalyst for a new wave of systems from NVIDIA’s leading server manufacturer partners to speed the shift to the era of generative AI.”

“Accelerated computing and networking are key to building systems to meet the demands of large language models and generative AI applications,” said Michael Dell, chairman and CEO of Dell Technologies. “Through our collaboration, Dell Technologies and NVIDIA are providing customers with the infrastructure and software needed to quickly and securely extract intelligence from their data.”

“Generative AI will undoubtedly drive innovation across multiple industries,” said Antonio Neri, president and CEO of HPE. “These powerful new applications will require a fundamentally different architecture to support a variety of dynamic workloads. To enable customers to realize the full potential of generative AI, HPE is partnering with NVIDIA to build systems with the required power, efficiency and scalability to support these applications.”

“Generative AI can power unprecedented transformation but places unprecedented demands on enterprise infrastructure,” said Yuanqing Yang, chairman and CEO of Lenovo. “Working closely with NVIDIA, Lenovo is building efficient, accelerated systems with the networking, computing and software needed to power modern AI applications.”

Networking Purpose-Built to Accelerate AI

For peak AI workload efficiency, Spectrum-X combines the extreme performance of the Spectrum-4 Ethernet switch; the NVIDIA BlueField®-3 SuperNIC, a new class of network accelerators for supercharging hyperscale AI workloads; as well as acceleration software. Spectrum-X complements BlueField-3 DPUs, the world’s most advanced infrastructure computing platform.

Spectrum-4 is the world’s first 51Tb/sec Ethernet switch for AI, providing highly effective data throughput at scale and under load while minimizing network congestion for multi-tenant, AI cloud workloads. Its intelligent, fine-tuned routing technology enables maximum utilization of network infrastructure at all times.

BlueField-3 SuperNICs are designed for network-intensive, massively parallel computing, offering up to 400Gb/s RDMA over Converged Ethernet (RoCE) network connectivity between GPU servers and boosting performance for AI training and inference traffic on the east-west network inside the cluster. They also enable secure, multi-tenant data center environments, ensuring deterministic and isolated performance between tenant jobs. Boasting a power-efficient, half-height, half-length PCIe form factor, BlueField-3 SuperNICs are ideal for enterprise-class servers.
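
Putting the two headline numbers together gives a feel for the scale of a Spectrum-X fabric: dividing the quoted switch capacity into 400Gb/s SuperNIC-facing links. This assumes the entire switching capacity is spent on such ports:

```python
# Quick port-count arithmetic from the figures above: a ~51 Tb/s
# Spectrum-4 switch divided into 400 Gb/s SuperNIC-facing links.
# Assumes the full switching capacity is spent on 400 Gb/s ports and
# ignores any capacity reserved for uplinks.

switch_capacity_gbps = 51_000   # "51Tb/sec" as quoted above
port_speed_gbps = 400           # BlueField-3 SuperNIC RoCE link

# The nominal 51.2 Tb/s figure for Spectrum-4 works out to exactly 128 ports.
print(f"~{switch_capacity_gbps // port_speed_gbps} x 400Gb/s ports per switch")
```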

Acceleration software powering Spectrum-X features NVIDIA software development kits such as Cumulus Linux, Pure SONiC and NetQ — which together drive the platform’s breakthrough performance — and the NVIDIA DOCA™ software framework, which is at the heart of BlueField.

NVIDIA AI Enterprise provides frameworks, pretrained models and development tools for secure, stable and supported production AI. NVIDIA AI Workbench allows developers to quickly create, test and customize pretrained generative AI models on a PC or workstation — then scale them to virtually any data center or cloud.

NVIDIA Israel-1 Supercomputer Powered by Spectrum-X

Spectrum-X also enables the NVIDIA Israel-1 supercomputer, a reference architecture for next-generation AI systems. Israel-1 is a collaboration with Dell Technologies, using Dell PowerEdge XE9680 servers powered by the NVIDIA HGX™ H100 eight-GPU platform and BlueField-3 DPUs and SuperNICs with Spectrum-4 switches.

Availability

New systems from Dell, HPE and Lenovo featuring the complete NVIDIA AI stack are expected in the first quarter of next year.

Intel Unveils Arc Pro GPU Products

Today Intel introduced the Intel® Arc™ Pro A-series professional range of graphics processing units (GPUs). The first products are the Intel Arc Pro A30M GPU for mobile form factors and the Intel Arc Pro A40 (single slot) and A50 (dual slot) GPUs for small form factor desktops. They all feature built-in ray tracing hardware, machine learning capabilities and industry-first AV1 hardware encoding acceleration.


Intel Arc Pro A-series graphics are targeting certifications with leading professional software applications within the architecture, engineering and construction, and design and manufacturing industries. Intel Arc Pro GPUs are also optimized for media and entertainment applications like Blender, and run the open source libraries in the Intel® oneAPI Rendering Toolkit, which are widely adopted and integrated in industry-leading rendering tools.

Intel Arc Pro GPUs will be available starting later this year from leading mobile and desktop ecosystem partners.

For developers and content creators attending SIGGRAPH on Aug. 8-11, demos using Intel Arc Pro systems and the Intel oneAPI Rendering Toolkit can be seen at the Intel booth, #427.

Specifications

Feature | Intel Arc Pro A40 GPU | Intel Arc Pro A50 GPU | Intel Arc Pro A30M GPU (Mobile)
Peak Performance | 3.50 TFLOPs at single precision | 4.80 TFLOPs at single precision | 3.50 TFLOPs at single precision
Xe-core | 8x ray trace cores | 8x ray trace cores | 8x ray trace cores
Memory | 6GB GDDR6 | 6GB GDDR6 | 4GB GDDR6
Display Outputs | 4x mini-DP 1.4 with audio support | 4x mini-DP 1.4 with audio support | Laptop-specific, with support for up to 4x
General | 50W peak power in a single-slot form factor | 75W peak power in a dual-slot form factor | 35-50W peak power, ISV software certified

SPECTRUM Digitizers and AWGs now support NVIDIA Clara™


Spectrum Instrumentation now offers driver support for the NVIDIA Clara AGX™, a universal computing architecture for the next generation of AI medical instruments. The new drivers enable scientists and developers to choose from 64 different Spectrum Digitizers, Arbitrary Waveform Generators (AWGs) and Digital I/O cards, letting the NVIDIA Clara AGX kit perform high-speed electronic signal acquisition and generation for analog and digital signals. With such a wide variety of cards to choose from, users can precisely match their electronic signal requirements.

For example, the digitizer cards can be used to acquire signals in the DC to GHz frequency ranges by sampling them at rates from 5 MS/s up to a maximum of 5 GS/s. Similarly, the AWG cards can be used to produce signals with almost any wave shape and frequency content, from DC to 400 MHz, by outputting samples at speeds from 40 MS/s up to 1.25 GS/s. Individual analog cards offer one, two, four or eight channel capability. Digital I/O cards and Digital Data Acquisition cards allow the acquisition of digital data at rates up to 720 MS/s and can generate digital patterns at up to 125 MS/s. There are different interface options for TTL and LVDS available.
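
When pairing a card with the Clara AGX, the main sizing question is the sustained data rate into the GPU. A small helper makes that arithmetic explicit; the sample widths used in the examples are assumptions, since resolution varies by card model:

```python
# Rough data-rate sketch for sizing the streaming path between a
# digitizer card and the Clara AGX GPU: bytes/s = channels x sample rate
# x bytes per sample. The sample widths used below are assumed examples;
# actual resolution depends on the specific Spectrum card.

def stream_rate_gbytes_per_s(channels: int, samples_per_s: float, bits: int) -> float:
    return channels * samples_per_s * (bits / 8) / 1e9

# e.g. four channels sampled at 125 MS/s with an assumed 16-bit resolution
print(f"{stream_rate_gbytes_per_s(4, 125e6, 16):.2f} GB/s")   # 1.00 GB/s
# e.g. one channel at 5 GS/s with an assumed 8-bit resolution
print(f"{stream_rate_gbytes_per_s(1, 5e9, 8):.2f} GB/s")      # 5.00 GB/s
```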

The NVIDIA Clara AGX developer kit provides an easy-to-use platform for developing software-defined, AI-enabled, real-time, point-of-care medical devices. It delivers real-time streaming connectivity and AI inference by combining the flexibility of the NVIDIA® Jetson AGX Xavier™ embedded Arm® system on a chip (SoC), the performance of the integrated NVIDIA RTX™ 6000 GPU, and the 100 GbE connectivity of the NVIDIA ConnectX® SmartNIC. The kit also includes full-stack GPU-accelerated libraries, SDKs, and reference applications for developers, data scientists, and researchers to create real-time, secure, and scalable solutions.

Adding a Spectrum card to the Clara system allows sensor signals to be acquired, generated, stored and processed. Data can be streamed between the cards, the processor and the GPU. In fact, the high-speed parallel processing capabilities of the GPU make it the perfect platform for handling the large volumes of data that can be acquired and generated by the Spectrum products. Spectrum already offers SCAPP (Spectrum's CUDA Access for Parallel Processing) to make GPU-based data processing easily achievable, even at the fastest streaming rates.

Looking at some examples: one or two cards of the M2p-series (5 MS/s to 125 MS/s) can be installed into the kit. The AWG signal generator card with 4 channels and the digitizer with 8 channels run fully synchronized at a speed of 125 MS/s on all channels. Choosing one card of the ultra-fast M4i-series, such as the popular M4i.6631-x8, turns the kit into an extremely low-noise AWG with signal generation up to 1.25 GS/s output speed on two channels. Another example uses the M4i.2212-x8 digitizer card, already a core part of the world's first high-throughput cell sorter created by the University of Tokyo in 2019. It allows signal acquisition up to 1.25 GS/s on four channels, with up to 3.4 GBytes per second streaming and high-speed processing via Spectrum's SCAPP drivers and the internal NVIDIA RTX 6000 GPU of the Clara kit.

The NVIDIA Clara is already being used in a number of biomedical research programs and in the next generation of medical devices. Applications include imaging, genomics, patient monitoring, and drug discovery. It can be found wherever the healthcare industry is innovating and accelerating the journey to precision medicine. Now, by installing a Spectrum card or cards, this powerful platform has an easy way to acquire and generate the fast electronic sensor signals that are often found in this cutting-edge field.

Oliver Rovini, Chief Technical Officer at Spectrum, said: "We were delighted to be approached by NVIDIA, and partner with them to develop drivers for their NVIDIA Clara platform. Spectrum already has many customers using our technology in medical science. Now, together with the NVIDIA Clara, they have an easy way to create very small, low power systems, that offer some of the most advanced data processing tools available today."

Like all products by Spectrum Instrumentation, the 64 different M2p- and M4i-series cards carry a 5-year product warranty, with free software and firmware updates, as well as customer support directly from the engineering team, for the whole lifetime of the product. For more information, please visit:

www.spectrum-instrumentation.com

All trademarks are the property of their respective owners.




Best Practices for Digital Multimeters



A digital multimeter works as a sort of electronic tape measure in order to make electrical measurements. It mainly measures amperes, ohms and volts, though some types may come with other special features. There are a number of best practices that should be followed in order to use multimeters correctly and safely.

You can find a great range of multimeters at RS Components.

Selection

To choose the right digital multimeter for a particular job, look not just at basic specifications but also at functions, features, and the overall quality of design and production.

Reliability is of even greater importance in the current age, and dependable digital multimeters will have been subjected to a rigorous program of evaluation and testing.

Another of the biggest concerns in the use of a digital multimeter is safety. A multimeter that offers sufficient component spacing, input protection and double insulation can help prevent damage to the meter and personal injury if it is used incorrectly. A good-quality digital multimeter needs to be designed to meet the latest and most rigorous safety standards.

Safety

Safely making measurements begins by selecting the right multimeter for the application and the environment it is in. Instrument user manuals should be read carefully prior to use, with particular attention paid to the Caution and Warning sections.

Safety standards for working on electrical systems have been established by the International Electrotechnical Commission. Meters should meet the IEC measurement category and voltage rating approved for the environment in which they will be used.

Accurate results

Accuracy refers to the largest allowable error under particular operating conditions. For a digital multimeter it is normally expressed as a percentage of the reading, with specifications sometimes adding a number of digits (counts) to the basic accuracy specification.

This indicates how many counts the rightmost digit of the display may vary.
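
A worked example makes the "percent of reading plus digits" format concrete. The ±(0.5% + 2 digits) spec and the 1mV resolution below are illustrative assumptions, not the figures for any particular meter:

```python
# Worked example of a "percent of reading + digits" accuracy spec.
# The +/-(0.5% + 2 digits) figure and the 0.001 V resolution are
# illustrative assumptions, not a specification for any particular meter.

reading_v = 3.300          # displayed value
pct_of_reading = 0.005     # assumed 0.5% of reading
digits = 2                 # assumed 2 counts of the last digit
resolution_v = 0.001       # assumed value of one count on this range

uncertainty = reading_v * pct_of_reading + digits * resolution_v
print(f"3.300 V +/- {uncertainty:.4f} V "
      f"(true value between {reading_v - uncertainty:.4f} and {reading_v + uncertainty:.4f} V)")
```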

Measuring

The measurement of voltage is one of the simplest tasks that can be performed by a digital multimeter. The troubleshooting of a circuit normally begins with a test for the proper supply voltage, and if there is an issue with the voltage, it will need to be corrected before any further investigation can take place.

Sinusoidal or nonsinusoidal waveforms are associated with AC voltages, with the latter normally caused by harmonics like those that some adjustable speed drives generate. A quality digital multimeter will show the root-mean-square value of such voltage waveforms.

The RMS value is the equivalent or effective AC voltage expressed as a DC value. The majority of multimeters provide an accurate RMS reading for pure sine wave AC voltage signals but cannot accurately measure nonsinusoidal signals. Those signals can only be measured accurately by a digital multimeter with a true RMS designation.

Being able to take true RMS measurements is growing in importance in modern electrical work because of the high amount of harmonic-generating nonlinear loads that are present in electrical circuits.
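
The difference is easy to demonstrate numerically. The sketch below compares a true RMS calculation with the "average-responding" method (rectified mean scaled by 1.11, the form factor of a pure sine wave) on a clean sine and on a waveform with an added third harmonic; the specific distorted waveform is just an illustration, not a measured drive output:

```python
# Numeric illustration of why true-RMS metering matters. An
# "average-responding" meter effectively scales the rectified mean by
# 1.11 (the form factor of a pure sine wave), which is only correct for
# sinusoids. The distorted waveform below (fundamental plus a third
# harmonic) is an illustrative example, not a measured drive waveform.

import math

N = 10_000
t = [i / N for i in range(N)]

def rms(x):             # true RMS
    return math.sqrt(sum(v * v for v in x) / len(x))

def avg_responding(x):  # rectified mean scaled by the sine form factor
    return 1.11 * sum(abs(v) for v in x) / len(x)

sine = [230 * math.sqrt(2) * math.sin(2 * math.pi * u) for u in t]
distorted = [v + 0.3 * 230 * math.sqrt(2) * math.sin(3 * 2 * math.pi * u)
             for u, v in zip(t, sine)]

for name, wave in [("sine", sine), ("distorted", distorted)]:
    print(f"{name:9s}  true RMS = {rms(wave):6.1f} V   "
          f"avg-responding = {avg_responding(wave):6.1f} V")
```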

A digital multimeter is an incredibly versatile electrical tool, with a single device able to perform many different measurements when chosen correctly and used safely.
