Tech Giants Halt Hiring in Key Divisions as AI Costs Surge

Microsoft has reportedly paused hiring across major cloud and sales divisions as part of a broader effort to rein in labor costs, even while continuing to pour resources into artificial intelligence.

Microsoft’s decision to freeze hiring in its cloud and North American sales divisions is a clear signal of shifting priorities. The company is pulling back on workforce expansion in areas that have traditionally been its growth engines, citing the need to cut costs and improve margins.

This freeze applies to candidates who don’t yet have offers in hand, while those already extended offers remain unaffected. For employees, it means heavier workloads as teams operate without the reinforcements they were expecting. For job seekers, it’s a sudden pause in opportunities at one of the most influential tech employers.

Strategically, the move reflects the broader post-2025 trend in tech: profitability is being prioritized over aggressive expansion. Microsoft Azure, despite being a leader in the cloud market, faces intense competition from AWS and Google Cloud. Tightening hiring in sales and cloud suggests the company is focusing on efficiency and margin discipline rather than chasing growth at all costs.

This move from Microsoft mirrors a wider tech industry trend: companies are cutting traditional roles while channeling billions into AI infrastructure. Firms like Block, Close Brothers, and even media giants have announced layoffs or freezes, citing AI-driven efficiencies.

Microsoft’s Move

  • Affected divisions: Azure cloud and North American sales
  • Reason: Cost discipline amid rising GPU and AI infrastructure expenses
  • Exception: AI-focused teams (e.g., Copilot) continue to hire
  • Risk: Heavy reliance on OpenAI, which accounts for ~45% of Azure’s revenue backlog

Industry-Wide Scenario

  • Global trend: Two-thirds of CEOs froze or cut hiring in Q1 2026, while global AI capital spending surged to $2.5 trillion
  • Layoffs: Over 150,000 tech jobs have been cut in 2026, with at least 20% explicitly attributed to AI adoption
  • Examples:
    • Block (Jack Dorsey’s firm): Cut 4,000 jobs (~40% of workforce), citing AI tools replacing human tasks
    • Close Brothers (UK banking group): Cutting 600 jobs while rolling out AI “at pace”
    • CBS News: Announced 6% workforce reduction
    • IKEA’s parent company: Cutting 800 office-based roles, citing efficiency gains

Comparative Snapshot

Company | Action Taken | AI Investment/Driver
Microsoft | Hiring freeze in cloud & sales; AI teams exempt | Rising GPU costs, OpenAI dependency
Block | 4,000 jobs cut (~40% of workforce) | AI tools replacing human tasks
Close Brothers | 600 jobs cut | AI rollout in banking
CBS News | 6% layoffs | AI-driven newsroom efficiencies
IKEA (parent) | 800 office roles cut | AI + automation in operations
Global CEOs (survey) | 66% froze/cut hiring | $2.5T capital spending on AI

Risks & Challenges

  • Margin squeeze: AI infrastructure costs (notably GPUs) eroding profitability
  • Workforce disruption: Tens of thousands of jobs eliminated or frozen, especially in non-AI divisions
  • Concentration risk: Heavy reliance on single AI partnerships (e.g., Microsoft–OpenAI)
  • Investor pressure: Balancing cost discipline with AI growth promises

This move reflects the broader post-2025 tech industry shift: companies are tightening hiring even in high-growth divisions like cloud, balancing expansion with profitability. For Gurugram’s tech and editorial ecosystem, where Microsoft’s cloud services are widely used, the freeze underscores the importance of cost efficiency and strategic scaling in global tech operations.

Microsoft’s hiring freeze in cloud and sales is more than a U.S. story—it’s a cautionary signal for India’s enterprise ecosystem. As AWS and Google double down locally, Microsoft’s pause may reshape competitive dynamics in one of the fastest-growing cloud markets.

India’s Cloud Market Ripple Effect

Microsoft’s hiring freeze in U.S. cloud and sales divisions could slow global expansion plans, indirectly affecting India’s enterprise adoption of Azure. With AWS and Google Cloud aggressively scaling their India operations, Microsoft’s pause may create an opening for rivals to capture market share in sectors like BFSI, healthcare, and government digitization.

Talent Pipeline Disruption

India has been a major beneficiary of Microsoft’s global hiring cycles, especially in engineering and sales support. A freeze in North America often signals tighter controls worldwide. This could mean fewer lateral hires in India’s cloud sales teams, slowing Azure’s ability to win new enterprise contracts.

Cost Discipline vs. Growth in India

Microsoft’s pivot toward margin discipline mirrors a broader industry trend. For India, where cloud adoption is still accelerating, this raises a key question: will Microsoft prioritize profitability over aggressive customer acquisition? If so, AWS and Google Cloud may gain ground by offering more flexible pricing and localized services.

Enterprise Customer Impact

Large Indian enterprises—banks, IT services firms, and government agencies—depend on Microsoft’s cloud stack. A slowdown in sales hiring could affect deal velocity, customer onboarding, and support responsiveness. This is particularly critical as India pushes digital public infrastructure and AI adoption at scale.

Investor & Policy Angle

For India’s policy and investment community, Microsoft’s freeze is a reminder that global tech majors are recalibrating. It underscores the importance of nurturing domestic cloud players and ensuring resilience against global hiring cycles that may affect service delivery.

IBM Replaces Hundreds of HR Professionals With AI Agents

IBM has replaced about 200 human resources professionals with AI agents. The company’s AskHR AI agent now automates 94% of routine HR tasks, including pay statements, reported The Wall Street Journal.

However, instead of reducing overall employment, IBM has redirected resources to hire more programmers, salespeople, and marketing professionals—roles that require critical thinking and human interaction.

IBM CEO Arvind Krishna emphasized that this shift has actually led to more hiring in other areas, such as software engineering, sales, and marketing.

“While we have done a huge amount of work inside IBM on leveraging AI and automation on certain enterprise workflows, our total employment has actually gone up, because what it does is it gives you more investment to put into other areas,” Krishna told The Wall Street Journal.

IBM’s approach reflects a broader trend of using AI to handle repetitive tasks while reallocating resources to roles that require critical thinking and human interaction.

IBM has also expanded its generative AI division, which has grown into a $6 billion business, offering tools that allow customers to build AI agents capable of autonomously carrying out complex tasks.

This shift reflects a broader trend where AI is reshaping job roles rather than simply eliminating them.

Several tech companies have started replacing HR professionals with AI agents to streamline operations and improve efficiency. Klarna, the fintech company, replaced 700 customer service agents with AI chatbots to handle inquiries and transactions. Cisco, UPS, Duolingo, and Intuit have also integrated AI into their workforce, leading to job reductions in certain areas.

The trend is growing as businesses seek to optimize costs and enhance productivity.

OpenAI and DeepMind Employees Warn of AI Dangers, Including Human Extinction, That Companies Are Hiding

There has been a significant and serious development regarding AI safety. A group of current and former employees from OpenAI and Google DeepMind have come forward with an open letter, published at righttowarn.ai, warning about the potential dangers of advanced AI technologies, including human extinction. They allege that these companies are prioritizing financial gains over safety and are not being transparent about the risks involved.

The letter emphasizes the need for better oversight and regulation to prevent serious harms, such as the further entrenchment of existing inequalities, manipulation, misinformation, and even the loss of control over autonomous AI systems. The employees are advocating for a culture of open criticism and are calling for solid whistleblower protections to enable the discussion of these risks without fear of retaliation.
 
This is a developing story, and it highlights the importance of ethical considerations and transparency in the field of AI development. It's crucial for AI companies to engage with governments, civil society, and other stakeholders to ensure that AI technologies are developed responsibly and safely.

Specific Risks the Employees Are Concerned About

The employees from OpenAI and Google DeepMind have raised concerns about several specific risks associated with the development and deployment of advanced AI systems. These include:

Entrenchment of Existing Inequalities: Advanced AI could exacerbate social and economic disparities if its benefits are not distributed equitably.

Manipulation and Misinformation: AI systems could be used to create and spread false information, potentially influencing public opinion and undermining trust in institutions.

Loss of Control: There is a risk that autonomous AI systems could become uncontrollable, leading to unintended consequences.

Human Extinction: The letter mentions the extreme risk that unregulated AI poses, including scenarios that could lead to human extinction.

The group behind the open letter has urged AI firms to facilitate a process for current and former employees to raise risk-related concerns and not enforce confidentiality agreements that prohibit criticism. They emphasize the need for transparency and oversight to ensure that AI development does not compromise safety or ethical standards.

Humanity May Need to Pause AI in the Next 5 Years, Said New CEO of Microsoft AI

The new CEO of Microsoft AI, Mustafa Suleyman, while speaking at the global AI safety summit last year, mentioned the possibility of a pause in AI development towards the end of the decade.

According to an article in The Guardian, Mustafa stated, "I don’t rule it out. And I think that at some point over the next five years or so, we’re going to have to consider that question very seriously." As the new head of Microsoft AI, he will be balancing his caution with the drive to innovate and commercialize AI technologies.

Previously, Mustafa had said, "The world is still struggling to appreciate how big a deal [AI's] arrival really is."

"We are in the process of seeing a new species grow up around us", Mustafa had said. He also thinks this new species may be capable of becoming self-made millionaires in as little as 2 years."

Mustafa is not alone in his cautionary views on AI. Google DeepMind's Chief AGI Scientist Shane Legg has also said, "If I had a magic wand, I would slow down. Artificial General Intelligence is like the arrival of human intelligence in the world. This is another intelligence arriving in the world."

Notably, Mustafa Suleyman is a prominent figure in the field of artificial intelligence. He co-founded DeepMind Technologies, which became a leading AI company and was later acquired by Google. After his tenure at DeepMind, Suleyman went on to co-found Inflection AI, focusing on machine learning and generative AI.

Mustafa has recently been appointed as the CEO of Microsoft AI, where he is expected to lead the development of consumer AI products and research.

Mustafa's career has been marked by his contributions to AI and his advocacy for ethical AI practices. His leadership at Microsoft AI is anticipated to further the company's AI initiatives while navigating the complex landscape of AI ethics and societal impact.

Interestingly, Microsoft's chief scientific officer, Eric Horvitz, has expressed the opposite view, stating that an "acceleration" in AI development is necessary, rather than a pause. It's important to note that discussions about the pace of AI development are ongoing in the tech community, with various experts having different opinions on the matter.

The debate around pausing AI development stems from various concerns raised by experts and the public. Here are some reasons why some think a pause might be necessary:

1. Rapid Advancement: AI is advancing at a pace that may outstrip our ability to understand its implications and establish adequate safeguards.

2. Safety and Ethics: There are fears that without proper oversight, AI could be used in ways that are harmful or unethical. This includes concerns about privacy, security, and the potential for AI to perpetuate biases.

3. Regulatory Catch-Up: A pause could provide time for policymakers to catch up with the technology and create regulations that ensure AI is developed and used responsibly.

4. Unintended Consequences: As AI systems become more complex, the risk of unintended consequences increases. This could include the misuse of AI by malicious actors or the AI acting in unpredictable ways.

5. Societal Impact: There's a concern about the impact of AI on jobs, social structures, and the economy. A pause could allow for a more thoughtful consideration of how to integrate AI into society in a way that benefits everyone.

These concerns highlight the need for a balanced approach to AI development, one that promotes innovation while also ensuring safety, ethical use, and societal well-being. It's a complex issue with no easy answers, but the conversation is crucial as we navigate the future of AI.

OpenAI Researchers Warned Board of AI Discovery That Could Threaten Humanity

Before the series of events that played out at ChatGPT maker OpenAI's leadership, including the firing of Sam Altman and his subsequent re-hiring, several staff researchers wrote a letter to the company's board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, according to a report by news agency Reuters.

Citing two people privy to this previously unreported letter from researchers, the Reuters report said that the letter and the 'dangerous' AI algorithm were key developments before the board's ouster of Sam Altman.

Reuters' sources described the letter as one item in a longer list of grievances that led to Altman's firing last week, among them the board's concern about commercializing the AI advance (described as dangerous in the letter) before understanding its consequences.

A project called Q* (pronounced Q-Star) could be a breakthrough in OpenAI's search for what is known as artificial general intelligence (AGI), the report said, citing one of the people at the AI company.

Unlike classical AI, AGI can generalize, learn, and comprehend; a calculator, by contrast, can perform a limited set of operations but cannot learn.

In their letter to the board, the researchers flagged the AI's great aptitude and potential danger, the report said, citing sources, though it did not specify the exact safety concerns noted in the letter. Notably, computer scientists have long debated the danger posed by highly intelligent machines, for instance whether they might decide that the destruction of humanity is in their interest.

AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks.

Under Altman, ChatGPT became one of the fastest-growing software applications in history, and the company attracted from Microsoft the investment and computing resources required to get closer to this AGI, which the letter labelled dangerous.


64% of Indians Confident That AI Will Make Lives Easier and Society Smarter - Salesforce Research

Salesforce, the Customer Success Platform and world's #1 CRM company, today unveiled findings from new research titled, Artificial Intelligence in Asia: Trust, Understanding and the Opportunity to Re-Skill.

Commissioned across seven Asian markets including Singapore, India, Hong Kong, Malaysia, Thailand, Philippines and Indonesia, the study explores consumer trust and understanding of Artificial Intelligence (AI) and the opportunities for business and government to grow knowledge, re-skill and prepare consumers for the future.

Amongst the markets surveyed, the results reveal that Indian consumers are well aware of AI (78%), and India emerged with the second-highest positive outlook towards AI (65%), behind only Indonesia (68%). The majority also believe that AI has the ability to help make society smarter (60%) in the future. Most Indians surveyed (64%) also feel AI will transform the employee space by providing freedom to work and creating interesting jobs, and many see AI as a way to make society smarter (60%) and life more convenient (58%).

There is a clear correlation between understanding and outlook. As AI permeates the enterprise and alters consumer spaces and preferences, respondents who reported a higher level of understanding of AI (78%) were statistically more likely to have a positive outlook towards AI products and services. 54% think the more they understand and are made aware of AI, the more they will trust it. And, with 65% of respondents feeling positive about AI, it is now the role of businesses to demystify AI for consumers, encouraging adoption and educating customers on its benefits.

“Artificial Intelligence (AI) is coming of age. As businesses continue to find new ways to innovate with AI technologies, awareness and education on its benefits for consumers will become increasingly important. By leveraging AI across businesses, we have the opportunity to amplify our human intelligence, connect to our consumers and impact societies like never before,” says Sunil Jose, senior area Vice-President and Country Leader, Salesforce India.

64% of Indians are confident that new jobs will emerge from AI, giving them the opportunity to do better and more interesting jobs. Upskilling will be crucial and 56% of Indians are ready to upskill and make themselves relevant for jobs of the future.

“AI will redefine the skills needed for new jobs, forcing businesses and individuals to re-evaluate what skills are needed to be successful. At Salesforce, we recognise this shift, and are preparing future workers through our online learning platform, Trailhead. Trailhead is empowering everyone to learn the skills needed to be successful in the fourth industrial revolution. The future of businesses is an active partnership between human and machine, and with Trailhead we are ensuring humans are ready for the opportunities to come,” says Sunil Jose, senior area Vice-President and Country Leader, Salesforce India.

In addition to overall AI awareness, the study explores the real-world applications of AI including robo advisors, content recommendations, chatbots, product recommendations, and voice assistants. Consumer trust and awareness vary dramatically across different AI applications, with 86% of Indians opting for AI over humans to recommend them content. The study also reveals that Indians are open to AI managing and optimizing their finances (59%). Additionally, the study found that there is a general openness towards interacting with chatbots (63%) and awareness of voice assistants is the highest amongst all AI applications (83%).

This research was conducted by YouGov, commissioned by Salesforce and covers seven Asian markets: Singapore, Hong Kong, India, Thailand, Malaysia, Indonesia and Philippines with a sample size of 1000 consumers in each market. The survey measures Asian consumers’ awareness, understanding and usage of AI products and services as well as examines their outlook of AI to understand how they feel about AI impacting their lives and the world at large. The survey also explores trust levels of consumers towards six real-world applications of AI – including smart chatbots, self-driving cars and robo-advisors.

Unedited ~ Business Wire India

[Top Image - Andy Kelly on Unsplash]

Bias, Not Robots, Is The Real AI Danger

Tech magnate Elon Musk recently termed Artificial Intelligence (AI) “the greatest risk we face as a civilisation.” But Google’s AI chief, John Giannandrea, has a different point of view to offer on the issue.

According to Giannandrea, it isn’t the robots that we have to worry about, it is the danger that might be lurking inside these machine-learning algorithms used to make millions of decisions every minute.

Speaking before a recent Google conference on the relationship between humans and AI systems, Giannandrea said, “The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased.”

With AI spreading like wildfire, the problem of bias in machine learning has the potential to become more significant as the technology enters more critical spheres of life, such as law and medicine, and as people without a deeper technical understanding of it are given the task of deploying it.

Some AI experts have even warned that algorithmic bias is not merely lurking in the vicinity; it is already pervasive in many industries, yet almost no one is making an effort to identify it, let alone correct it.

Giannandrea believes that one way to counter this bias is to be transparent about the training data being used and to keep an eye out for hidden biases in it; otherwise, we will be developing biased systems without even knowing it.
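
As an illustration of what auditing training data for hidden bias can look like in practice, here is a minimal sketch in Python. It is not from the article: it assumes a hypothetical tabular training set, training_data.csv, with a sensitive attribute column ("gender") and a binary outcome column ("approved"); the column names and the threshold are placeholders.

```python
# Minimal sketch: auditing a tabular training set for hidden label bias.
# The file path, column names, and the 10-point threshold are hypothetical.
import pandas as pd

df = pd.read_csv("training_data.csv")  # assumed: one labelled example per row

# Positive-label rate per group. Large gaps are a signal worth investigating
# before a model is trained, not proof of bias on their own.
rates = df.groupby("gender")["approved"].mean()
overall = df["approved"].mean()
print(rates)

# Flag groups whose positive rate deviates sharply from the overall rate.
for group, rate in rates.items():
    if abs(rate - overall) > 0.10:
        print(f"Check group '{group}': rate {rate:.2f} vs overall {overall:.2f}")
```

Making checks like this a routine step is one concrete form of the transparency about training data that Giannandrea describes; a flagged gap is a prompt to investigate how the data was collected, not a verdict.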

Giving an example of the situation above, he says, “If someone is trying to sell you a black box system for medical decision support, and you don’t know how it works or what data was used to train it, then I wouldn’t trust it.”

For the uninitiated, black box machine-learning models are already having a major impact on some people's lives. For instance, Northpointe's COMPAS predicts defendants' likelihood of reoffending and is currently used by some judges to determine whether an inmate should be granted parole. However, how COMPAS works is still a well-kept secret. Recently, an investigation by ProPublica found evidence suggesting that the system may be biased against minority defendants.

However, experts believe it is not always enough to publish details of the data or the algorithm in use. Many emerging machine-learning techniques are so complex and opaque that they defy careful examination. To address this, AI researchers are working on ways to make these systems give some approximation of their workings, not only to engineers but also to end users.
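
One family of such approximations is model-agnostic feature attribution. The sketch below is illustrative rather than anything described in the article: it uses scikit-learn's permutation importance on a synthetic stand-in dataset to show the general shape of the technique, namely shuffling each input in turn and measuring how much the model's accuracy degrades.

```python
# Minimal sketch of one "approximation of a model's workings":
# permutation feature importance on a synthetic, stand-in dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy: a rough,
# model-agnostic view of what the otherwise opaque model relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```

Explanations of this kind are approximations, as noted above; they indicate which inputs a system leans on without fully opening the black box.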

It is important to highlight the bias creeping into AI as more and more tech giants explore the technology in different projects. Google is among several big tech firms pitching the AI capabilities of its cloud computing platforms to all sorts of businesses. Though these cloud-based machine-learning systems are a lot easier to use than the underlying algorithms, they could also make it easier for bias to get in. Hence, it is important to provide less experienced scientists and engineers with the knowledge and training to spot and remove bias from their training data.

Karrie Karahalios, a professor of computer science at the University of Illinois, also took the podium at the conference to show how tricky it is to spot bias in even the most commonplace algorithms. Using the example of how Facebook's news feed functions, Karahalios highlighted that users generally have no understanding of how Facebook filters the posts shown in their feed. While on the surface this might not seem that serious, it is just one example of how difficult it is to interrogate an algorithm.

Hence, one can surely say that killer robots are the least of our problems right now. The first and foremost task is to identify bias in training data and kill it then and there.

This development was first reported in MIT Technology Review.

[Image: AI Business]
