Executive Summary
As China continues to expand its global data reach, this report examines the authoritarian features of its data governance model and identifies the systemic risks it poses. It further analyzes how the data practices of China-linked artificial intelligence services support and reinforce this model, potentially subjecting users in democratic countries to authoritarian control.
The report then outlines the broader risks these practices pose to privacy and democratic governance and proposes a three-part “DSR Strategy” for democracies: Defend public systems, Screen Chinese AI services, and Rally democratic allies to promote harmonized cross-border data regulations.
The name is deliberately chosen to counter China’s own DSR—the Digital Silk Road—which forms part of Beijing’s broader strategy to expand geopolitical influence by exporting digital infrastructure and shaping global data norms.
I. Data Practices: Three Pathways for Data Flowing from Chinese AI Services to the PRC
Drawing on relevant privacy policies (as of November 15, 2025) and credible media reports, this report examines ten widely used generative AI services that are considered to have operational ties to Chinese companies. These include DeepSeek, Doubao, Cici, Kimi, Quark, Qwen Chat, Baidu AI Search, Monica, Manus, and Talkie.
Based on their privacy policies, the report identifies three main pathways through which personal data may be transferred into the PRC:
First, five services—DeepSeek, Doubao, Kimi, Quark, and Baidu AI Search—are operated by entities registered in China and state that they store overseas user data within the PRC.
Second, the remaining five services—Cici, Qwen Chat, Monica, Manus, and Talkie—are operated by entities registered in Singapore. However, nine of the ten services (all except Talkie) permit intra-group data sharing. This means that even when a service is offered by a Singapore-based entity, personal data from overseas users may still be shared with affiliated companies located in China.
Third, all ten services indicate that they will comply with “lawful” government data access requests.
*Author’s Note: On December 30, 2025, Meta announced its acquisition of Manus and stated that, upon completion of the acquisition, Manus would sever all operational ties with China. However, since the purpose of this report is to analyze how the data practices of Chinese AI services support or are shaped by China’s authoritarian data governance model, the analysis of Manus (and Monica), based on their practices as of November 15, 2025, remains relevant. Indeed, Meta’s emphasis that Manus will “completely sever ties with China” after the acquisition only reinforces the significance of the concerns addressed in this report. The report therefore retains the full analysis of Manus’s data practices as a pre-acquisition baseline.
II. The Authoritarian Gaze: Asymmetrical Data Flows and Unchecked State Access
Under the PRC’s data governance framework, the Chinese government can compel companies to provide data, including data held overseas, under at least three legal justifications: data security reviews, intelligence collection mandates, and counter-espionage operations.
Concurrently, China imposes data localization requirements and cross-border transfer restrictions on “critical data” and “personal data.” These rules limit outbound data flows, enabling the state to construct an asymmetrical data regime—one that draws global data inward while erecting barriers to prevent its exit.
Government data access is further shaped by the party-state’s dominant control. First, China’s sweeping “holistic national security” doctrine grants Beijing effectively arbitrary authority to obtain data. Second, data requests from security agencies are entirely administrative, with no meaningful judicial checks. Third, China’s personal data protection laws include broad exemptions that effectively shield state actors from compliance.
This governance model—defined by a lack of legal constraints, judicial checks, and privacy protections—allows the party-state to extract and exploit data as it sees fit.
III. Democratic Risks: Chinese AI as a Source of Intelligence and a Driver of Information Manipulation
Many of the examined services require user consent for extensive data collection, including user-provided content and sensitive personal information—both directly submitted and indirectly inferred through data processing.
As a result, democracies face systemic risks at both individual and societal levels. Large-scale data extraction not only facilitates invasive personal profiling but also reveals societal vulnerabilities and fault lines. These insights, in turn, strengthen the PRC’s capacity to use AI for intelligence gathering and foreign information operations.
According to an investigation by U.S. threat intelligence firm Recorded Future—which analyzed open-source intelligence including PLA media reports and patent filings—China’s military and defense industry have likely integrated models like DeepSeek and Qwen into the development of intelligence tools. The case of GoLaxy (中科天玑) further illustrates that China continues to deploy AI-driven personas targeting Taiwan, pushing tailored content to exploit social divisions and shape public discourse. In this context, Chinese AI functions both as a source of insight into individual and societal vulnerabilities and as a driver of more targeted and potent information manipulation.
Policy Recommendations
To address these risks, this report proposes a three-part DSR strategy for Taiwan and its democratic partners. While the primary focus is on AI services, the proposed measures may also apply to other Chinese digital services that exhibit similar data practices.
- Defend public systems by banning AI services that are substantially controlled by China-based or Chinese-owned entities from use in government and critical infrastructure.
- Screen Chinese AI services through inbound review mechanisms that condition market access on compliant data practices—in particular, prohibiting by default the transfer of user data to China.
Because it centers on data practices rather than corporate ownership, this operations-focused approach offers Taiwan a more practical path than the ownership-centric model—exemplified by TikTok bans—currently promoted in the United States.
- Rally democratic allies to harmonize cross-border data regulations and build collective resilience against authoritarian data exploitation.
Take the United States as an example. Its current legal tools offer only partial safeguards: the informational-materials exception under IEEPA limits regulatory scope; CFIUS is triggered only by transactions; and the TikTok law remains definition-bound and ownership-centric. None of these provides a sufficient legal basis for imposing ongoing operational restrictions on foreign-based AI services.
New legislation is needed to establish robust data transfer controls and align U.S. policy with that of Taiwan and other partners.