Taiwan Bans DeepSeek AI Over National Security Concerns, Citing Data Leakage Risks

Taiwan has become the latest country to ban government agencies from using Chinese startup DeepSeek's Artificial Intelligence (AI) platform, citing security risks.

"Government agencies and critical infrastructure should not use DeepSeek, because it endangers national information security," according to a statement released by Taiwan's Ministry of Digital Affairs, per Radio Free Asia.

“DeepSeek AI service is a Chinese product. Its operation involves cross-border transmission, and information leakage and other information security concerns.”

DeepSeek's Chinese origins have prompted authorities from various countries to look into the service's use of personal data. Last week, it was blocked in Italy, citing a lack of information regarding its data handling practices. Several companies have also prohibited access to the chatbot over similar risks.

The chatbot has captured much of the mainstream attention over the past few weeks for the fact that it's open source and as capable as other current leading models, but built at a fraction of the cost of its peers.


But the large language models (LLMs) powering the platform have also been found to be susceptible to various jailbreak techniques, a persistent concern in such products, not to mention drawing attention for censoring responses to topics deemed sensitive by the Chinese government.

The popularity of DeepSeek has also led to it being targeted by "large-scale malicious attacks," with NSFOCUS revealing that it detected three waves of distributed denial-of-service (DDoS) attacks aimed at its API interface between January 25 and 27, 2025.

"The average attack duration was 35 minutes," it said. "Attack methods mainly include NTP reflection attack and memcached reflection attack."

It further said the DeepSeek chatbot system was targeted twice by DDoS attacks on January 20, the day it launched its reasoning model DeepSeek-R1, and on January 25, with the attacks averaging around one hour and using methods like NTP reflection attack and SSDP reflection attack.

The sustained activity primarily originated from the United States, the United Kingdom, and Australia, the threat intelligence firm added, describing it as a "well-planned and organized attack."
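Reflection attacks of the kind NSFOCUS describes abuse UDP services that answer small spoofed queries with much larger responses aimed at the victim. As a rough illustration (not drawn from the report itself), such flows are often triaged by the reflector's well-known source port; the port-to-vector mapping below is standard, while the amplification figures in the comments are approximate:

```python
# Illustrative triage only: map a UDP flow's source port to the
# reflection vector named in the NSFOCUS report. Port numbers are the
# standard service ports; amplification factors are rough estimates.

REFLECTION_PORTS = {
    123: "NTP reflection",          # monlist responses, ~500x amplification
    11211: "memcached reflection",  # can amplify tens of thousands of times
    1900: "SSDP reflection",        # UPnP discovery responses, ~30x
}

def classify_flow(src_port: int, proto: str = "udp") -> str:
    """Label a flow as a likely reflection vector by its source port."""
    if proto != "udp":
        return "not a UDP reflection candidate"
    return REFLECTION_PORTS.get(src_port, "unclassified")
```

Real mitigation relies on rate limiting, source validation (BCP 38), and upstream scrubbing rather than port matching alone, but the mapping shows why these three services keep appearing in DDoS reporting.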

Malicious actors have also capitalized on the buzz surrounding DeepSeek to publish bogus packages on the Python Package Index (PyPI) repository that are designed to steal sensitive information from developer systems. In an ironic twist, there are indications that the Python script was written with the help of an AI assistant.

The packages, named deepseeek and deepseekai, masqueraded as a Python API client for DeepSeek and were downloaded at least 222 times before they were taken down on January 29, 2025. A majority of the downloads came from the U.S., China, Russia, Hong Kong, and Germany.

"Functions used in these packages are designed to collect user and computer data and steal environment variables," Russian cybersecurity company Positive Technologies said. "The author of the two packages used Pipedream, an integration platform for developers, as the command-and-control server that receives stolen data."
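As a defensive aside (not part of the Positive Technologies write-up), typosquatted names like deepseeek can often be caught before installation by comparing a dependency name against well-known packages with a simple edit-similarity check. A minimal sketch, where the known-package list and the similarity threshold are illustrative assumptions:

```python
# Minimal typosquat heuristic: flag dependency names that are very
# similar to a well-known package name but are not an exact match.
# KNOWN_PACKAGES and the 0.85 threshold are illustrative choices.
from difflib import SequenceMatcher

KNOWN_PACKAGES = {"deepseek", "requests", "numpy"}

def looks_typosquatted(name: str, threshold: float = 0.85) -> bool:
    name = name.lower()
    if name in KNOWN_PACKAGES:
        return False  # exact match to a legitimate package name
    return any(
        SequenceMatcher(None, name, known).ratio() >= threshold
        for known in KNOWN_PACKAGES
    )
```

Both package names from this campaign ("deepseeek", "deepseekai") score above the threshold against "deepseek", while unrelated names do not; production tooling would combine this with registry metadata and maintainer-reputation signals.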

The development comes as the Artificial Intelligence Act went into effect in the European Union starting February 2, 2025, banning AI applications and systems that pose an unacceptable risk and subjecting high-risk applications to specific legal requirements.

In a related move, the U.K. government has announced a new AI Code of Practice that aims to secure AI systems against hacking and sabotage through methods that include security risks from data poisoning, model obfuscation, and indirect prompt injection, as well as ensure they are being developed in a secure manner.

Meta, for its part, has outlined its Frontier AI Framework, noting that it will stop the development of AI models that are assessed to have reached a critical risk threshold and cannot be mitigated. Some of the cybersecurity-related scenarios highlighted include –

  • Automated end-to-end compromise of a best-practice-protected corporate-scale environment (e.g., fully patched, MFA-protected)
  • Automated discovery and reliable exploitation of critical zero-day vulnerabilities in currently popular, security-best-practices software before defenders can find and patch them
  • Automated end-to-end scam flows (e.g., romance baiting aka pig butchering) that could result in widespread economic damage to individuals or corporations

The risk that AI systems could be weaponized for malicious ends is not theoretical. Last week, Google's Threat Intelligence Group (GTIG) disclosed that over 57 distinct threat actors with ties to China, Iran, North Korea, and Russia have attempted to use Gemini to enable and scale their operations.

Threat actors have also been observed attempting to jailbreak AI models in an effort to bypass their safety and ethical controls. A type of adversarial attack, it's designed to induce a model into producing an output that it has been explicitly trained not to, such as creating malware or spelling out instructions for making a bomb.

The ongoing concerns posed by jailbreak attacks have led AI company Anthropic to devise a new line of defense called Constitutional Classifiers that it says can safeguard models against universal jailbreaks.

"These Constitutional Classifiers are input and output classifiers trained on synthetically generated data that filter the overwhelming majority of jailbreaks with minimal over-refusals and without incurring a large compute overhead," the company said Monday.
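Anthropic's description suggests a wrapper pattern: classify the prompt before it reaches the model, and classify the completion before it reaches the user. A minimal sketch of that pattern follows; the keyword-based stand-in classifiers are purely illustrative (the real system uses classifier models trained on synthetic data, not keyword rules):

```python
# Sketch of an input/output classifier wrapper, loosely modeled on
# Anthropic's published description. The keyword checks are stand-ins
# for trained classifier models and are not the actual technique.

BLOCKED_TOPICS = ("build a bomb", "write malware")  # illustrative only

def input_classifier(prompt: str) -> bool:
    """Return True if the prompt should be refused before generation."""
    return any(topic in prompt.lower() for topic in BLOCKED_TOPICS)

def output_classifier(completion: str) -> bool:
    """Return True if the completion should be suppressed."""
    return any(topic in completion.lower() for topic in BLOCKED_TOPICS)

def guarded_generate(prompt: str, model) -> str:
    """Run a model call with classification on both sides."""
    if input_classifier(prompt):
        return "[refused: input filter]"
    completion = model(prompt)
    if output_classifier(completion):
        return "[refused: output filter]"
    return completion
```

The two-sided design matters: an output classifier can catch jailbreaks that slip past the input check by disguising the request, which is why Anthropic describes filtering both directions.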
