Microsoft Uncovers $4 Billion in Prevented Fraud Amidst Surge in AI-Driven Scams
Friday, Apr 25, 2025

The use of AI in scams is developing rapidly, with cybercriminals leveraging these technologies to find new victims, as detailed in Microsoft's latest Cyber Signals report.
Over the past year, Microsoft has thwarted $4 billion in fraud attempts and blocked around 1.6 million bot sign-up attempts every hour, underscoring the growing scale of the threat.
The most recent Cyber Signals report from Microsoft, titled "AI-powered deception: Emerging fraud threats and countermeasures," explains how AI has reduced the technical hurdles for scammers, enabling even less-skilled individuals to carry out advanced scams with ease.
Tasks that once required days or weeks for scammers can now be completed within minutes.
The broadening of fraudulent capabilities marks a change in the criminal landscape impacting consumers and businesses on a global scale.
The report illustrates how AI tools now efficiently scan and extract web-based company information, aiding criminals in creating detailed profiles of potential targets for persuasive social engineering frauds.
Cybercriminals can craft intricate fraud strategies using fake AI-boosted product reviews and AI-generated storefronts, complete with invented business histories and customer testimonials.
Kelly Bissell, Corporate Vice President of Anti-Fraud and Product Abuse at Microsoft Security, notes the growing scale of the threat. "Cybercrime has become a trillion-dollar issue and has been rising every year for the last three decades," he says in the report.
"I believe there is an opportunity today to quickly integrate AI, allowing us to detect and cover exposure gaps swiftly. Now AI can significantly scale and assist in embedding security and fraud defenses into our products more rapidly."
Microsoft's anti-fraud team identifies AI-driven fraud occurring globally, with notable activity from regions like China and Europe, especially Germany, due to its large e-commerce presence in the European Union.
The report underlines that the larger a digital marketplace grows, the greater the proportional volume of fraud attempts it faces.
Particularly troubling areas of AI-enhanced fraud involve e-commerce and job recruitment scams, where fraudulent sites can now be deployed within minutes using AI, requiring minimal technical skills.
These sites mirror legitimate businesses, utilizing AI-generated product descriptions, images, and reviews to deceive customers into thinking they are dealing with authentic merchants.
Adding further deception, AI-driven chatbots can interact convincingly with customers, stall chargebacks with scripted excuses, and respond to complaints with AI-generated replies, making scam sites appear professional and credible.
Job seekers also face risks. The report details how generative AI facilitates scam creation through fake listings on job platforms. Criminals fabricate fake profiles utilizing stolen credentials and use AI-generated job postings and email campaigns to target job seekers.
AI-empowered interviews and automated emails enhance the scam's convincing appearance, complicating identification. "Fraudsters may request personal details, like resumes or bank details, under the pretense of validating applicant info," according to the report.
Warning signs include unsolicited job offers, requests for payment, and invitations to move conversations to informal channels such as text messages or WhatsApp.
In response to these threats, Microsoft has adopted a comprehensive strategy across its products and services. Microsoft Defender for Cloud protects Azure resources, while Microsoft Edge includes protections against website typos and domain impersonation, utilizing deep learning to help users avoid fraudulent sites.
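Microsoft has not published the internals of Edge's deep-learning model, but the core idea behind catching domain impersonation can be illustrated with a far simpler heuristic: flag any domain that sits within a small edit distance of a known brand without matching it exactly. The brand list and distance threshold below are illustrative assumptions, not part of Edge:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Illustrative brand list; a real system would use a large curated set.
KNOWN_BRANDS = ["microsoft.com", "paypal.com", "amazon.com"]

def looks_like_typosquat(domain: str, threshold: int = 2) -> bool:
    """Flag domains close to (but not exactly matching) a known brand."""
    return any(0 < levenshtein(domain, brand) <= threshold
               for brand in KNOWN_BRANDS)
```

For example, `looks_like_typosquat("micros0ft.com")` returns `True` (one character substituted), while the legitimate `microsoft.com` is not flagged. Production systems layer far more signals on top of this, which is where the deep learning comes in.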
The company has also improved Windows Quick Assist with alerts warning users of potential tech support scams before they grant screen access to anyone posing as IT support. On average, Microsoft blocks around 4,415 suspicious Quick Assist connection attempts daily.
Microsoft has initiated a new fraud prevention policy under its Secure Future Initiative (SFI). Beginning January 2025, Microsoft product teams will have to conduct fraud prevention evaluations and integrate fraud controls into their design processes to ensure their products are "fraud-resistant by design."
With AI scams continually progressing, staying informed is essential. Microsoft advises consumers to be wary of urgent requests, verify website credibility before transactions, and avoid sharing personal or financial data with unverified sources.
For businesses, employing multi-factor authentication and deploying algorithms to detect deepfakes can help minimize risk.
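The multi-factor authentication recommendation can be illustrated with a minimal sketch of an RFC 6238 time-based one-time password (TOTP) check, the mechanism behind most authenticator apps. This is a generic standards-based example, not Microsoft's implementation:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(at if at is not None else time.time()) // step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

A server verifies a login by computing the same code from its stored copy of the secret and comparing it with what the user types in, typically allowing one time step of clock drift in either direction.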