With the 2025 festive season under way, online shopping, travel bookings and digital payments surge – and so does the sophistication of fraud. We explain how AI-enabled scams are targeting consumers, from deepfakes and phishing to quishing and voice-cloned calls, and offer practical guidance for staying safe during the high-risk holiday period.

As South Africans move into the peak of the 2025 festive season, the traditional surge in online shopping, travel bookings and digital payments is being matched by a parallel rise in criminal sophistication.
While year-end fraud is not new, the tools being deployed by syndicates have evolved sharply, with artificial intelligence increasingly reshaping how scams are designed, scaled and executed. Recent warnings from the South African Banking Risk Information Centre (Sabric) suggest that fraudsters are exploiting festive-season behaviour – urgency, distraction and higher transaction volumes – using AI-enabled techniques that blur the line between legitimate communication and deception. The result is a risk environment in which familiar safeguards are becoming less reliable, just as consumer exposure peaks.
Sabric’s Annual Crime Statistics Report for 2024 provides a stark baseline. Digital banking fraud accounted for 65.3% of all reported financial crime incidents during the year, with banking apps emerging as the primary attack surface. Total financial crime losses reached R2.72-billion, of which R1.89-billion was attributed specifically to digital banking fraud.
Sabric notes that most of these losses did not stem from system breaches or technical failures, but resulted from social engineering – scams that manipulate victims into authorising payments or handing over sensitive information themselves. This distinction matters, because it places the front line of risk not inside bank infrastructure, but at the point of interaction between consumers and increasingly convincing digital communications. The impact of AI lies less in inventing new scams than in removing traditional warning signs.
Errors, awkward phrasing and generic greetings – cues many consumers have learnt to distrust – are increasingly absent. A common festive-season scenario involves a call or message claiming to flag suspicious activity on an account. With AI-generated voice cloning, the interaction may sound convincingly official.
Victims are then prompted to “verify” transactions by sharing a one-time PIN or approving a payment – actions that banks explicitly warn against, but which feel plausible under pressure.

The combination of emotional spending, time scarcity and heightened digital traffic gives fraudsters a uniquely fertile operating environment. The shift from physical banking crime to app-based fraud signals a broader transformation in financial risk. Security is no longer confined to institutional systems; it increasingly depends on informed user behaviour.

In a digital economy, vigilance has become not just a precaution, but a necessary form of financial self-protection. As online payments and shopping continue to expand in 2026, these AI-driven risks are likely to remain a central concern for consumers and institutions alike. DM