AI against AI
Digital signatures, like seal certificates, ensure uniqueness and prevent forgery.
In the past, single-page scam e-commerce websites proliferated, yet in 2022, fewer than 3,000 such sites were taken down. Following the establishment of the Ministry of Digital Affairs, the number of such sites blocked in 2023 surged to over 30,000, averaging nearly 100 daily.
Traditionally, organized fraudsters relied on live, one-on-one calls for their scams, which could require training scammers to impersonate celebrities. The maturation of AI technology, however, has significantly “empowered” fraudulent activities.
When deepfake technology emerged two years ago, I proactively sought to inoculate the public by commissioning a deepfake video of myself, created with an actor. Because of our facial similarity, viewers without explicit clarification might simply assume that Audrey Tang had altered her appearance, unaware of the impersonation.
Creating the video demanded neither sophisticated equipment nor exorbitant cost; it used readily accessible technology. A laptop and twelve hours sufficed, hardly a major undertaking.
Organized fraudsters have now embraced AI for communications on messaging platforms, employing text, voice, and even video calls, crafting illusions of celebrity interactions to deceive individuals.
Given AI’s capacity to simultaneously converse with thousands, the scale of deception and resulting harm is considerable.
The Ministry of Digital Affairs has adopted an “AI against AI” strategy, developing tools such as “Fraud Pattern Analysis” and “Anti-Fraud Radar”. These tools identify thousands of dubious ads and e-commerce listings every day, which major platforms can then review and preemptively remove.
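The Ministry’s systems themselves are not public, so the following is only a minimal sketch, in Python, of what pattern-based ad flagging might look like: hypothetical keyword rules score each ad, and anything above a made-up threshold is queued for human review. The pattern list, scoring, and threshold are all assumptions for illustration, not the actual “Fraud Pattern Analysis” logic.

```python
import re

# Hypothetical patterns common in investment-scam ads; illustrative only.
SUSPICIOUS_PATTERNS = [
    r"guaranteed\s+returns?",
    r"limited[-\s]time\s+investment",
    r"add\s+(?:my\s+)?line\s+id",
    r"celebrity[-\s]recommended\s+stocks?",
]

def suspicion_score(ad_text: str) -> int:
    """Count how many suspicious patterns appear in an ad."""
    text = ad_text.lower()
    return sum(bool(re.search(pattern, text)) for pattern in SUSPICIOUS_PATTERNS)

def flag_for_review(ads: list[str], threshold: int = 2) -> list[str]:
    """Return ads whose score meets the threshold, to be reviewed by humans."""
    return [ad for ad in ads if suspicion_score(ad) >= threshold]

if __name__ == "__main__":
    sample_ads = [
        "Guaranteed returns! Limited-time investment, add my LINE ID today.",
        "Hand-thrown ceramic mugs, free shipping this week only.",
    ]
    print(flag_for_review(sample_ads))  # only the first ad is flagged
```

A real system would of course draw on far more signals (domains, payment channels, images, reporting history) and learned models rather than a short keyword list, but the workflow is the same: flag at scale, then hand candidates to platforms for review and removal.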
Additionally, last year’s amendments to the Securities Investment Trust and Consulting Act introduced joint and several liability for platforms. If a victim was deceived by a platform-hosted ad that had been reported but not taken down within 24 hours, the platform must bear financial liability for the fraud.
In our continuous struggle against organized fraud, a balance of offense and defense is essential. As audio-visual synthesis technology becomes more sophisticated and misleading, we need measures that are harder to falsify, raising the difficulty of posting scam ads.
“Authenticity” stands as the paramount fraud deterrent. What better represents an individual’s identity than a hard-to-fake “digital signature”?
Although Citizen Digital Certificates, corporate certificates, and organizational certificates have been used in Taiwan for years, the law previously did not presume they carried the same legal force as a personal signature on paper. The recent amendment to the Electronic Signature Act gives digital signatures the same standing as seal certificates, ensuring uniqueness and preventing forgery.
Looking ahead, investment ads on major platforms will have to prove, via digital signature, that the advertiser is personally involved or that a celebrity endorsement is genuine.
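To make the “hard-to-fake” property concrete, here is a minimal sketch assuming an Ed25519 keypair via the Python cryptography library, rather than any particular certificate format used in Taiwan: the endorser signs a statement with a private key, and the platform checks it against the registered public key. The statement wording and ad numbers are hypothetical.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The endorser holds a private key; the matching public key is registered
# with the platform as part of their verified identity.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Hypothetical endorsement statement attached to an ad.
statement = b"I personally endorse investment ad #12345."
signature = private_key.sign(statement)

# The platform verifies the signature before the ad is shown.
try:
    public_key.verify(signature, statement)
    print("Endorsement verified against the registered identity.")
except InvalidSignature:
    print("Verification failed: endorsement cannot be proven.")

# Reusing the signature on a different statement fails: forgery requires
# the private key, and tampering invalidates the existing signature.
try:
    public_key.verify(signature, b"I personally endorse investment ad #99999.")
except InvalidSignature:
    print("Tampered or reused endorsement rejected.")
```

Because a valid signature can only be produced with the private key, a fraudster cannot forge an endorsement, and any change to the signed statement is detected; this is what binds an ad to a verifiable identity.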
If an ad is flagged as suspicious and withdrawn, reposting it under a new name won’t be viable for fraudsters, as endlessly changing one’s name at the registration office is impractical.
As we venture into the AI era, seeing can no longer be equated with believing. It’s imperative for defenders to proactively engage, leveraging AI against AI.