Peer Reviewed

Contextualizing critical disinformation during the 2023 Voice referendum on WeChat: Manipulating knowledge gaps and whitewashing Indigenous rights

Outside China, WeChat is a conduit for translating and circulating English-language information among the Chinese diaspora. Australian domestic political campaigns exploit the gaps between platform governance and national media policy, using Chinese-language digital media outlets that publish through WeChat’s “Official Accounts” feature to reproduce disinformation from English-language sources. These campaigns are situated within local contexts and technological conditions. We show how WeChat content uses emotional-appeal disinformation to capture attention: it relies on familiar but misleading stories to fill knowledge gaps, draws on historical references to ease uncertainty about the future, and downplays or erases Indigenous issues through colonial tropes. These emotionally charged messages are then amplified by WeChat’s algorithm, helping them reach even wider audiences.


Research Questions

  • How was the 2023 Constitutional Referendum in Australia covered on WeChat among Chinese-speaking Australians?
  • What disinformation narratives emerged when examined through the lens of Critical Disinformation Studies? And how did WeChat users engage with and respond to these narratives?
  • In what ways did the producers and distributors of disinformation succeed in attracting online attention while simultaneously evading both WeChat’s regulatory mechanisms and conventional definitions of disinformation? And finally, does this indicate that our current disinformation regulation may be fighting the wrong battle?  

Essay Summary

  • On October 14, 2023, Australians mandatorily voted in the Indigenous Voice to Parliament referendum, which failed. While English-language disinformation has been widely discussed (see Brancatisano, 2023; Carson et al., 2024; Graham, 2024) as a contributing factor, disinformation circulating within marginalized, non-English-speaking communities, particularly among the Chinese diaspora on WeChat, has received limited scholarly and regulatory attention.
  • We employed a mixed-methods approach, combining quantitative content analysis and qualitative interpretive analysis grounded in Critical Disinformation Studies (Kuo & Marwick, 2021). Critical disinformation is contextualized as media strategies to consolidate and reproduce existing power hierarchies at the expense of populations lacking social, cultural, political, and economic power. Our dataset comprised 262 articles, 20 short videos, and 2,715 comments collected from WeChat from January to December 2023, with an in-depth analysis of a highly circulated video by “YamiChew,” a short-video account owned and operated by a member of the Chinese Australian community to share their everyday life in Australia.
  • Our findings show that Chinese-language accounts did not prominently cover the referendum. Of the 262 posts that did, 52 explicitly supported the “No” vote, while most avoided endorsements and used sensationalist tones.
  • In late September, both campaigns intensified engagement through short videos. Three individual and five organizational accounts generated over 2,500 comments, with more than 90% of these comments favoring “No.”
  • The YamiChew video exemplifies how subtle disinformation exploits knowledge gaps and socio-political uncertainties through flawed historical references and whitewashing strategies to capture communal interests and invalidate Indigenous rights. Its popularity reflects the intersection of WeChat’s affect-driven algorithms and Australia’s regulatory blind spots around Chinese-language media.
  • Platform-specific, culturally contextualized media and political literacy training co-designed with migrant communities that goes beyond simple fact-checking is required to effectively counter disinformation.

Implications

This study reveals how subtle forms of disinformation were spread on WeChat to Chinese-speaking Australian audiences during Australia’s 2023 referendum on a constitutional change to create an Indigenous Voice advisory body in Parliament. Our work is situated within Critical Disinformation Studies (Kuo & Marwick, 2021), which moves beyond defining disinformation in terms of fact, intent, and harm. While these components remain important considerations, they risk narrowing the scope of analysis in ways that leave other forms of disinformation unexamined and thus outside regulatory scrutiny and enforcement, where they may cause greater harm. Aligning with the critical disinformation perspective, we also frame critical disinformation as a set of media strategies that blend facts and falsehoods to reinforce colonialism, manifested in existing racial hierarchies and underpinning the very ideological architecture of Australian institutions and national media in this research context. This critical and reflective stance, as opposed to claiming objective truth, allows our work to contextualize how the media discourse on the constitutional referendum was weaponized to mobilize one marginalized community (Chinese-speaking Australians on WeChat) to harm another (Aboriginal and Torres Strait Islander peoples).

The October 14, 2023, Australian Indigenous Voice referendum proposed adding a purely advisory body for Aboriginal and Torres Strait Islander peoples, empowered to comment on matters before the Parliament and the executive branch of government. The constitutional change required a double majority: support from most Australian voters and from majorities in at least four of six states (Parliamentary Education Office, n.d.).1 The proposal failed on both counts. While advocates emphasized the Voice as a modest institutional reform to enable Aboriginal and Torres Strait Islander peoples’ input into policy, opponents mobilized doubts through discourses of uncertainty, cost, ineffectiveness, and alleged constitutional risk.

1. Australia comprises six federated states (New South Wales, Queensland, South Australia, Tasmania, Victoria, and Western Australia) and ten territories. Of the three internal territories, the Australian Capital Territory (ACT) and the Northern Territory (NT) have their own legislatures, while the Jervis Bay Territory does not. In the Voice referendum, votes from the ACT and NT counted toward the national majority but not toward the required majority of states, creating unequal voting power across jurisdictions (Vivian, 2023).

Rather than relying solely on overt falsehoods, campaigns opposing the Voice referendum on WeChat effectively used culturally relevant messaging, coded language aimed primarily at their target community, and strategic ambiguity to discredit a policy that started out with a 65% approval rating (Graham, 2024). Of note, WeChat’s Chinese-language Australian news translations, ads, and comments fall largely outside the regulatory frameworks that govern English-language media in Australia. Existing Australian government regulations primarily focus on major English-language publishers, such as broadcast media and U.S.-owned digital platforms that offer regulators and researchers at least a limited degree of transparency, and on their influence on electoral or referendum processes. Content producers on WeChat are effectively exempt from these rules because of the platform’s digital and non-English nature (Yang et al., 2024). On WeChat, content that does not touch on Beijing’s core interests generally escapes strict enforcement of platform governance, whether through manual reporting or algorithmic censorship, even when it contains problematic or misleading information; such content, paradoxically, drives much of the platform’s economy. Moderation of this content relies largely on user self-reporting, with platform moderators evaluating reports before deciding whether to remove the content. The critical disinformation environment examined in this paper thrives largely within the regulatory gap between WeChat’s platform governance enforcement and Australia’s media regulatory environment.

Making domestic disinformation on WeChat visible demonstrates that second-language media are not given the same consideration as English-language media in Australia at present. This disparity remains critical in a nation where, as of the 2021 Census, 5.8 million people (22.8%) reported speaking a language other than English at home, a figure projected to rise in the upcoming census as immigration continues to outpace natural population growth. In 2021, nearly 3.9% of Australians spoke Mandarin or Cantonese at home, representing the largest non-English language group. Of these, 25.9% of Mandarin speakers and 23.7% of Cantonese speakers reported low English proficiency (Australian Bureau of Statistics, 2022), suggesting that Chinese-language media likely serves as their primary source of information.

A key follow-on implication is that traditional approaches to fact-checking and disinformation regulation, typically centered on platforms accessible to English-speaking majorities, premised on binaries of truth and falsehood, and focused on intention and harm, are insufficient for addressing critical disinformation within culturally, linguistically, and infrastructurally distinct and sophisticated media ecosystems. Our analysis demonstrates that, in the Australian context, existing regulatory frameworks overlook the information sources of those who get their news in languages other than English, owing to perceived inaccessibility, whether stemming from platform infrastructure or from the language barriers facing English-speaking regulators and researchers. Yet there is no easy fix.

Designing disinformation policy is hard: it runs up against freedom of speech protections and the competing interests of regulators, technology companies, and political actors, including parties, interest groups, and social movements, across a diverse range of socio-historical and common law traditions. These conflicts were exemplified in Australia’s Combatting Online Misinformation and Disinformation Bill, which would have taken effect before the 2025 federal election and measured disinformation based on truth and falsehood, intention, and severity of harm, but was withdrawn in November 2024. Even if enacted, however, the bill’s reach would likely have fallen short of addressing platforms that are technologically and linguistically inaccessible to English speakers, such as those used by marginalized communities or those operating through encrypted channels or within the dark and grey web. Consequently, misleading content remains largely unchecked, particularly when delivered in culturally nuanced ways that evade straightforward classifications of misinformation or disinformation.

It is important to note that our claims of disinformation build from what Kuo and Marwick (2021) suggest is critical disinformation, which situates the production and normalization of disinformation within broader histories, cultures, and politics, and centers questions of social (in)justice/(in)equality in relation to historical power dynamics. Specific to the Australian context, we show the difficulty of policing and countering disinformation rooted in Australia’s colonial histories and the socio-political positioning of Indigenous peoples and migrants of color relative to the predominantly white Australian society. Along the lines of critical disinformation studies, we define disinformation in our research context as media strategies that mix truths and falsehoods to reinforce racial hierarchies, perpetuate colonial narratives, and sometimes advance the interests of institutions and national media rooted within a colonial ideological architecture. Our data show local campaigns strategically deploying these types of critical disinformation in attempts to persuade and cement (colonial) power structures. Further, in some instances, the amplification of these messages might not be organic, based on volume and sentiment of engagement data that fall outside historical norms for media accounts on WeChat. Nevertheless, this speech escapes regulation in part because, although platforms like WeChat are in principle subject to legal oversight, they have functionally avoided classification by regulators due to their enclosed nature, their lack of interfacing with Australian regulators, and their absence of even the limited algorithmic transparency that American counterparts may provide; in addition, the content resists standard fact-checking frameworks.

Framing disinformation in this critical way allows policymakers to look beyond external national security concerns and consider how domestic narratives also shape public understanding and political outcomes. As Kuo and Marwick (2021) argue, a selective focus on national interest confines disinformation to an issue of democracies being threatened by authoritarian regimes and associates the phenomenon with digital technologies, echoing Seo and Faris’s (2021) empirical review of U.S.-centrism in disinformation studies. Further research emphasizes the complexity of disinformation, which operates across ethical, legal, and commercial dimensions, rendering legislation or technology alone insufficient to address the problem (Ruiz, 2023; Soliman & Rinta-Kahila, 2024; Walker et al., 2019).

In our data collection and analysis, we ground the Voice referendum disinformation campaigns in Australian history, culture, and politics, and center our questions on power and inequality, focusing on Australia’s structural and social racism against Aboriginal and Torres Strait Islanders on the continent. At stake are power relations where Aboriginal and Torres Strait Islanders might gain greater symbolic or institutional representation and are thus central to the referendum-disinformation that surrounded the official referendum campaigns.

Further implications include showing how the prevailing definition of disinformation, and its regulation in the Australian institutional context, fails to account for the role of dominant English-language media and government institutions in spreading misleading content in relation to race, gender, epidemics, finance, climate change, and geopolitics (Cheng, 2020; Hunter, 2024; Kreiss et al., 2020; Peck, 2019; Zigler, 2015).

While disinformation produced and circulated on major social media services has been well studied (including for Australia’s Voice referendum; see Brancatisano, 2023; Carson et al., 2024; Graham, 2024), this paper examines the influence of disinformation (as critically defined) in marginalized media ecologies that influence outcomes for Aboriginal and Torres Strait Islander people, as well as all Australians.

Our findings below carry substantial consideration for future work at the policy and research nexus. Firstly, they highlight the urgent need for regulatory bodies, policymakers, and electoral authorities to extend oversight mechanisms beyond mainstream English-language platforms. Policymakers should ensure that transparency requirements and truth-in-political-advertising laws explicitly cover multilingual media environments to close regulatory gaps.

Secondly, there is a clear need for culturally sensitive media literacy programs tailored specifically to diasporic communities—what we call “fit-for-audience-and-purpose” civic education. Educators and community leaders should co-develop resources that help audiences critically assess subtle disinformation tactics, such as emotionally charged rhetoric used for the profit motives of (diasporic) media ecosystems. For instance, co-designed media literacy programs can educate communities on the current digital media ecology and emerging risks associated with AI-generated content based on academic research. They can also address the political economy of national media—highlighting how ownership structures and shifting funding models influence editorial orientations and public discourse—and explain how algorithmic systems shape content visibility. Such ongoing, culturally attuned media training empowers users to navigate their media environments more critically and supports the development of inclusive, community-informed moderation practices.

Thirdly, the study suggests a need for infrastructural interventions by platform providers. Tech companies doing business in Australia should conduct regular algorithmic audits, publicly disclosing how their content recommendation systems influence the reach and engagement of political messages across different language communities. Enhanced transparency from platforms would improve accountability and help mitigate the unintended amplification of misleading content. However, at this stage, Australia’s technology-neutral policy stance may present challenges to advancing this proposal.

Finally, cross-sector collaboration between regulators, technology companies, community organizations, and academics is essential for rapidly identifying emerging critical disinformation trends and collaboratively developing responsive interventions. Such coalitions are especially valuable for anticipating and countering the evolving tactics of modern disinformation campaigns. They also strengthen democracy in diverse, multicultural societies by building diasporic engagement in civic discourse to establish public trust. However, such collaborations must remain cautious of centralized funding structures, which risk allowing the most financially invested stakeholders, often the big tech companies, to define the scope and criteria of disinformation, undermining the proposal’s intent to prioritize collaborative decision-making led by communities, journalism, and academic voices.

Findings

Finding 1: WeChat coverage of the Voice referendum peaked ahead of the election, featuring pro-Yes ads and negative-sensational posts, despite platform restrictions on political advertising.

Throughout 2023, coverage of the Voice referendum on WeChat peaked between September and October amid intensified political campaigning ahead of the referendum date of October 14. While political ads are formally not allowed on WeChat, a substantial amount of political ad content about the referendum proliferated (24% of all referendum-related posts). Of 262 WeChat Official Account (WOA) posts, 52 explicitly supported “No,” while sensational yet ostensibly neutral articles predominated coverage of the referendum, averaging 2,676 views each. Sixty-five campaigning ads were included in the collected posts: 11 (17%) campaigned for Yes, though most ads featured less overt Yes/No messaging. The presence of political ads on Australian WOAs is surprising given the platform’s regulations, but it is explained by previous findings on WeChat’s distributed platform capitalism, which prioritizes monetizing user attention through advertising (Yang et al., 2024). Pro-Yes banner ads (n = 58) appeared across multiple accounts, receiving up to 32,600 views.

On WOAs, ads from the Australian Electoral Commission (AEC) and politician Keith Wolahan were authorized as required by the AEC. However, as we discuss in the following section, campaigning materials circulated as short videos via WeChat’s Channel function were either unauthorized or not properly authorized. This contravenes Section 321D of the Commonwealth Electoral Act 1918, which requires that electoral communications include an authorization statement specifying the person or entity responsible for the communication and their address (Australian Electoral Commission, 2025). Our ongoing monitoring leading up to the 2025 Australian federal election revealed an increasing number of such cases, highlighting an evolving regulatory gap driven by tech deployment (Yang et al., 2025).

Finding 2: Short videos emerged as an engaging format for political communication, driving disproportionate engagement with pro-No disinformation.

Short videos are a newer format on WeChat, drawing a higher volume of comment engagement (n = 2,582) than WOA posts, Voice referendum-oriented or otherwise, and they proved fertile ground for critical disinformation campaigns. Across these short videos, 91.7% of comments favored “No,” versus 6.0% for “Yes,” with the remainder coded “uncertain” or “irrelevant.” Pro-No videos averaged 979 likes and 4,447 shares, outperforming pro-Yes content (110 likes, 359 shares) by roughly an order of magnitude. This sentiment distribution diverges drastically from voter and Chinese diaspora sentiment on the referendum, where the ethnic Chinese vote leaned toward “Yes” (see Fu et al., 2025). Table 1 shows the disparity in user engagement between pro-Yes and pro-No content and highlights the importance of conceptualizing the anatomy of critical disinformation. On WeChat, we can identify elements that contribute to the effectiveness of disinformation campaigns and, by extension, understand the factors that hinder anti-disinformation efforts within the platform economy and governance.

|  | Total posts (including campaign ads) | Posts without comments | Comments coded “no” | Comments coded “yes” | Comments coded “ambiguous” |
| WOA posts | 262 | 129 | 76 | 10 | 47 |
| Channel short videos | 20 | 0 | 2,368 | 157 | 56 |
Table 1. Analysis of content type and comment sentiment in WOA posts and short videos.

Finding 3: Pro-No disinformation is framed through colonial tropes, anti-Labor sentiment, and racial or economic anxieties specific to Chinese-Australian contexts, skirting traditional media regulatory scrutiny.

While WOA posts generally remained ambiguous regarding their stance on the Voice, likely for advertising and user engagement considerations, comments on WOA posts showed patterns of critical disinformation: They adhered to the disinformation narratives identified below and served to reinforce existing racial hierarchies in Australia against Indigenous populations, yet their content would be easily missed by regulators looking for a disinformation campaign. WOAs post news stories translated from English sources or opinion pieces written by the WOA businesses themselves. Comments on these stories were largely sympathetic to the “No” campaign (n = 76), against a small number (n = 10) of comments with pro-Yes views. Coding the “No” comments unearthed political opinion that echoed colonialist tropes, adopting a celebratory tone about “civilization” and “British saviorism” in Australia to discredit a “Yes” vote.

Table 2 expands on the quantitative sentiment analysis of WOAs’ posts to highlight how the Yes ads and ambiguous Voice referendum content flowed into the public discussion, resulting in a higher volume of No-leaning discourse in online comments. Negative sentiment towards the referendum was not, for the most part, grounded in the issue of the referendum itself. Instead, concerns and anxious messaging particularly relevant to middle-class suburban Chinese-Australian communities were circulated alongside referendum discourse, triggering negative audience responses, even when referendum-related content was presented without explicit sentiment or opinions from media outlets. These concerns included anti-Labor government views (n = 24), concerns over racial inequality disadvantaging Chinese communities (n = 24), fears of increased taxation (n = 11), colonial primacy expressed through the embrace of British colonialism (n = 9), perceived irrelevance to Chinese Australians (n = 5), and unspecific reasons (n = 3). See the Methods section for examples of these themes in both Chinese and English.

| Theme | Frequency (n = 76) |
| Anti-Labor | 24 |
| Racial inequality | 24 |
| Tax increase | 11 |
| Colonial primacy | 9 |
| Lack of relevance | 5 |
| Uncodable | 3 |
Table 2. Qualitative analysis of “no” comment sentiment in WeChat Official Accounts’ posts and in ads.

Finding 4: Pro-No campaigners manipulate knowledge gaps and whitewash Indigenous rights for communal attention and trust, while leveraging colonial norms.

We further contextualize our quantitative findings with a case study on a short video, enabling an in-depth analysis of how disinformation narratives were deployed in the WeChat discourse. While WOAs rely on minimal curation to target subscribers, WeChat’s short videos employ algorithms that factor in user preferences, social networks, location, and demographics to extend reach beyond subscribers and amplify public engagement.

On September 27, 2023, the WeChat short-video account YamiChew published a video advocating a No vote. YamiChew was later identified as David Chu, a Liberal Party member and referendum volunteer (Cheng & Liu, 2023). Chu’s video reached over 10,000 reposts, 1,800 likes, and 300 comments within the first 24 hours of its publication, prompting further circulation and user engagement, consistent with platform algorithms that reward high-attention content. We selected YamiChew’s video for interpretative analysis not only for its high user engagement but also for its distinctly rational, anti-sentimental appeal. The video’s deliberately measured delivery appeared to defy platform logics, which typically reward emotional expression with greater content visibility, user engagement, and advertising revenue. What made YamiChew’s case particularly notable was that the popularity of his video stemmed from his rational tone, contrasting with typical campaigning videos that relied on persuasive and emotionally charged messaging.

In the video, filmed at eye level, Chu stood on a park path wearing a black shirt with a “Vote No” badge, addressing the camera directly. He framed his No stance around two key claims: the potential erosion of Australia’s Constitutional integrity and opposition to perceived racial division through Indigenous recognition. His measured tone and detailed elaborations lent an appearance of rational argumentation to his “No” stance.

Chu supported his arguments with historical and legal references, though their validity warrants scrutiny. He claimed that his decision to vote No was informed by the AEC’s official referendum booklet, which presented eight reasons for a Yes vote and ten for a No vote, two more reasons for No than for Yes. He further asserted that the Voice lacked legal comparability to established Western democracies such as the United States, the United Kingdom, and France. According to Chu, no equivalent “fourth constitutional body” exists in these systems, making the Voice in Parliament an unprecedented and legally unsound modification to Australia’s constitutional framework.

Chu criticized the term “Voice” as informal, arguing that the absence of Aboriginal recognition in Australian law since 1967 foregrounded the current inclusive, multicultural, and open society. He framed First Nations recognition as conferring race-based privileges that would impede societal progress and disadvantage all Australians, including Indigenous peoples and Chinese migrants. Chu claimed the recognition of Indigenous peoples would position Chinese migrants as a “third nation” below Indigenous peoples (the First Nation) and white people (the Second Nation) in a racial hierarchy. To support his position, Chu drew parallels to the Qing dynasty, suggesting that Manchurian elite privileges contributed to its decline. He further alleged that the Voice could be exploited by individuals with Indigenous heritage who live outside Indigenous communities, potentially undermining Indigenous interests.

Chu’s articulation is not only intentionally misleading but also serves to deny the existence of structural racism in society. For Chu, multiculturalism is achieved when race is entirely absent from public and legal discussion. He erroneously characterized the Voice as a “fourth constitutional power” despite its proposed advisory function without legislative or executive authority, a claim previously invalidated by AAP FactCheck (Williams, 2023). His interpretation of the 1967 referendum’s amendment to Section 51(xxvi) demonstrates a significant misreading of constitutional history. That amendment centralized race-based legislative authority under federal jurisdiction rather than eliminating it, thereby maintaining settler-colonial governance while continuing the systematic exclusion of Aboriginal peoples from decision-making processes regarding the tangible and intangible resources on their lands and waters.

Chu further manipulated knowledge gaps regarding Indigenous rights and political literacy around the Voice referendum among the Chinese diaspora, while also exploiting uncertainty about future implications. Chu achieved this by presenting a flawed comparison between colonized Indigenous peoples and Manchurian elites who were the governing power in the Qing dynasty. His arguments introduced or amplified fears and anxieties among Chinese migrants by suggesting the Voice would undermine democratic principles, reinforce discrimination against Chinese migrants, and contribute to societal decay. Despite factual inconsistencies, Chu’s eloquence, community status, and relatable historical analogies contrasted with pro-Voice information videos by social and political elites, frequently in English, rendering them less accessible to diaspora communities.

Methods

This paper addresses the questions: How was the 2023 Constitutional Referendum in Australia covered on WeChat among Chinese-speaking Australians? What disinformation narratives emerged when examined through the lens of Critical Disinformation Studies? And how did WeChat users engage with and respond to these narratives? In what ways did the producers and distributors of disinformation succeed in attracting online attention while simultaneously evading both WeChat’s regulatory mechanisms and conventional definitions of disinformation? And finally, does this indicate that our current disinformation regulation may be fighting the wrong battle? To answer these questions, we offer a mixed-methods approach of quantitative content analysis and qualitative interpretive analysis (Wester et al., 2004) of WeChat Official Accounts and Channel short videos, grounded in Kuo and Marwick’s (2021) Critical Disinformation Studies. Using the self-developed “share-capture” tool (Forydce, 2023), a bilingual researcher monitored 135 Australia-based accounts daily, exporting 262 relevant articles (including 65 campaigning ads) and 133 comments via Telegram for automated scraping, along with 20 short videos and 2,582 comments. Relevant public posts, along with their URLs, were exported from WeChat to a Telegram group chat for researcher access and the bot’s data scraping. During the export process, the researcher categorized major themes of public posts and copied user comments anonymously into the group chat. A Telegram bot then scraped the exported data for analysis. The tool worked well for WOAs but faced challenges with short-video content due to the absence of unique URLs, requiring videos to be saved manually for further analysis. We offer a subset of the coding work in Table 3, showing the Chinese text, researcher translation, and indicative code as per the analysis above.
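The export-and-scrape step described above can be illustrated with a minimal sketch. This is not the authors’ “share-capture” tool: it assumes, hypothetically, that forwarded comments were saved with Telegram Desktop’s JSON chat export, whose messages carry a `text` field that is either a plain string or a list of string and entity parts; the function and variable names below are ours.

```python
def extract_comments(export: dict) -> list[str]:
    """Pull plain-text comment bodies out of a Telegram-style JSON chat export.

    Each message's `text` may be a string or a list of string/entity parts
    (as in Telegram Desktop exports); both are normalized to a single string,
    and empty or service messages are skipped.
    """
    comments = []
    for msg in export.get("messages", []):
        text = msg.get("text", "")
        if isinstance(text, list):  # rich text: join plain strings and entity texts
            text = "".join(p if isinstance(p, str) else p.get("text", "") for p in text)
        text = text.strip()
        if text:
            comments.append(text)
    return comments

# Stand-in for json.load() on a real exported file (hypothetical sample data)
sample_export = {
    "name": "voice-referendum-comments",
    "messages": [
        {"id": 1, "text": "工黨將權力遊戲玩兒到極致了"},
        {"id": 2, "text": ["see ", {"type": "link", "text": "this post"}]},
        {"id": 3, "text": ""},  # empty messages are dropped
    ],
}
print(extract_comments(sample_export))  # → ['工黨將權力遊戲玩兒到極致了', 'see this post']
```

On a real export, `sample_export` would come from reading the exported JSON file; the returned list of comment strings could then be handed to the thematic coding step.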

| Codes | Translation | Original |
|---|---|---|
| Anti-Labor | The Labor Party has taken power games to the extreme. | 工黨將權力遊戲玩兒到極致了 |
| Tax increase | If this Voice passes, then be prepared to pay more for everything. There are only downsides and no benefits. | 这个voice如果通过了,那就等着什么都得掏更多的费用。只有坏处没有好处 |
| Colonial primacy | Without these immigrants from outside, who were predominantly British, Australia would have remained a pre-modern society, let alone become a developed country as it is today. | 没有外面这些以英国人为大多数的移民进入,澳大利亚肯定还停留在前现代社会,更不用说目前的发达国家了 |
| Racial inequality | You need to understand this first: The referendum is about granting Indigenous people more privileges, not about whether they can participate in the Parliament! The proportion of Indigenous people in Parliament already far exceeds their share of the population. | 你先弄弄明白,公投是要给予土著人更多的特权,而不是能否参加议会!土著人在议会的比例已经大大超过他们的人口占比! |
| Lack of perceived relevance to Chinese communities | Are you a robber yourself, or an immigrant brought by robbers? I am neither, so why should I bear the responsibility for others’ mistakes? | 你是自己强盗还是强盗带来的移民?我两者都不是,为什么要我承担别人犯错的责任? |
Table 3. Coded themes and representative examples of user comments on WeChat, with corresponding English translations provided by the researcher.
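For illustration, the export-then-scrape step described above can be sketched as follows. This is a hypothetical minimal parser, not the actual share-capture tool (Fordyce, 2023): it assumes exported Telegram messages arrive as plain text containing a WeChat Official Account article URL (WOA articles are served from `mp.weixin.qq.com`), and the `extract_posts` helper name is our own.

```python
import re

# Hypothetical sketch: messages exported to the Telegram group chat are
# assumed to be plain strings that may contain a WOA article URL.
WOA_URL = re.compile(r"https?://mp\.weixin\.qq\.com/s[^\s]*")

def extract_posts(messages):
    """Collect unique Official Account article URLs from exported messages."""
    seen, posts = set(), []
    for msg in messages:
        for url in WOA_URL.findall(msg):
            if url not in seen:       # deduplicate re-shared posts
                seen.add(url)
                posts.append(url)
    return posts
```

A bot consuming the group chat could call `extract_posts` on each batch of messages before fetching the article bodies; short videos, lacking such URLs, would fall outside this step, as noted above.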

We used quantitative content analysis (de Sola Pool, 1959) to count the frequencies of referendum stances (yes/no/uncertain) and theme occurrences, and we also collected engagement metrics (views, likes, shares). This quantitative analysis was complemented by interpreting posts and comments through the lens of Critical Disinformation Studies, coding their meanings within Australian and diasporic political contexts and examining how disinformation was framed as social inequalities were expressed, reinforced, and justified through language, topic, and narrative structure. Following this critical line, we coded data points as disinformation to the extent that they were situated in structures of power, grievance, and racism in Australia. Our approach does not aim to distinguish opinion from information (see Zaller, 1991); instead, it focuses on dimensions of power.
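The counting step above can be sketched in a few lines. This is an illustrative example only: the field names (`stance`, `themes`, `views`, `likes`, `shares`) and the `summarize` helper are hypothetical, not the study’s actual codebook schema.

```python
from collections import Counter

def summarize(posts):
    """Tally stances and themes, and sum engagement metrics across coded posts."""
    stance = Counter(p["stance"] for p in posts)            # yes / no / uncertain
    themes = Counter(t for p in posts for t in p["themes"]) # a post may carry several themes
    engagement = {k: sum(p[k] for p in posts) for k in ("views", "likes", "shares")}
    return stance, themes, engagement
```

Frequencies produced this way underpin the stance and theme counts reported in the quantitative analysis, while the interpretive coding informed by Critical Disinformation Studies supplies the categories themselves.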

The study builds on our ongoing research into WeChat’s “translational news ecology” (Yang et al., 2024) in Australia, where content is produced primarily by media entrepreneurs translating English-language media into Chinese, making these methods appropriate for exploring our research questions and linking them to the WOA ecosystem at the focus of this inquiry. The method can also be valuable for analyzing short videos on other platforms, particularly those depicting users’ everyday lives and conceptual metaphors that quantitative analysis alone fails to capture.

Framing disinformation through a critical lens (Kuo & Marwick, 2021) brings into relief the challenges that Western democracies face in an era of post-truth and transnational communication. A critical approach to disinformation emphasizes understanding intentionally misleading information within historical, social, and structural contexts. This means our work is as much about gathering data through empirical methods as it is about showing how the history of Australia and its evolving migrant communities shape disinformation into specific contexts and forms, including in the Voice referendum. We remain concerned with how pre-existing social hierarchies and identity-based prejudices (including racism) are fundamentally related to power in contemporary institutional knowledge production (Kuo & Marwick, 2021).

Cite this Essay

Yang, F., Heemsbergen, L., & Fordyce, R. (2025). Contextualizing critical disinformation during the 2023 Voice referendum on WeChat: Manipulating knowledge gaps and whitewashing Indigenous rights. Harvard Kennedy School (HKS) Misinformation Review. https://doi.org/10.37016/mr-2020-185

Bibliography

Australian Bureau of Statistics. (2022). Cultural diversity: Census. https://www.abs.gov.au/statistics/people/people-and-communities/cultural-diversity-census/2021

Australian Electoral Commission. (updated 2025). Authorizing electoral communications. https://www.aec.gov.au/About_AEC/Publications/Backgrounders/authorisation.htm

Brancatisano, E. (2023). ‘Extremely politicised’: How ‘very worrying’ Voice misinformation spreads online. SBS News. https://www.sbs.com.au/news/article/extremely-politicised-and-very-worrying-how-misinformation-about-the-voice-spread/w9sl4pzba

Carson, A., Grömping, M., Gravelle, T. B., Jackman, S., & Phillips, J. B. (2024). Alert, but not alarmed: Electoral disinformation and trust during the 2023 Australian Voice to Parliament referendum. Policy & Internet, 17, Article e429. https://doi.org/10.1002/poi3.429

Cheng, J. F. (2020). AIDS, women of colour feminisms, queer and trans of colour critiques, and the crises of knowledge production. In J. F. Cheng, A. Juhasz, & N. Shahani (Eds.), AIDS and the distribution of crises (pp. 76–92). Duke University Press.

Cheng, K., & Liu, M. (2023). 原住民之聲贊成票集中市區 華人區公投貼合主流意見 [The Indigenous Voice Yes votes were concentrated in urban areas, while the referendum results in Chinese communities aligned with mainstream opinions]. SBS News. https://www.sbs.com.au/language/chinese/zh-hant/article/how-has-chinese-districts-voted-in-the-voice-referendum/e1q9ns9eq

de Sola Pool, I. (Ed.). (1959). Trends in content analysis. University of Illinois Press.

Fordyce, R. (2023). Yet another computational HASS tool for the investigation of mobile platforms [Software]. Zenodo. https://zenodo.org/records/10215443

Graham, T. (2024). Exploring a post-truth referendum: Australia’s Voice to Parliament and the management of attention on social media. Media International Australia. https://doi.org/10.1177/1329878X241267756

Hunter, L. Y. (2024). Regime characteristics and online government disinformation. Journal of Information Technology & Politics, 1–20. https://doi.org/10.1080/19331681.2024.2445730

Jun, F., Xu, J., Sun, W., & Du, J. T. (2025). Commentary: Engagement of Chinese Australians in the 2023 Australian Indigenous Voice Referendum. Critical Asian Studies. https://doi.org/10.52698/BOOU8337

Kreiss, D., Laurence, R. G., & McGregor, S. C. (2020). Political identity ownership: Symbolic contests to represent members of the public. Social Media + Society, 6(2), 1–5. https://doi.org/10.1177/2056305120926495

Kuo, R., & Marwick, A. (2021). Critical disinformation studies: History, power, and politics. Harvard Kennedy School (HKS) Misinformation Review, 2(4). https://doi.org/10.37016/mr-2020-76

Parliamentary Education Office. (n.d.). Why are territory votes only counted in the national majority in a referendum, and could this ever change? https://peo.gov.au/understand-our-parliament/your-questions-on-notice/questions/why-are-territory-votes-only-counted-in-the-national-majority-in-a-referendum-and-could-this-ever-change

Peck, R. (2019). Fox populism: Branding conservatism as working class. Cambridge University Press.

Ruiz, C. D. (2023). Disinformation on digital media platforms: A market-shaping approach. New Media & Society, 27(4), 2188–2211. https://doi.org/10.1177/14614448231207644

Seo, H., & Faris, R. (2021). Special section on comparative approaches to mis/disinformation. International Journal of Communication, 15(2021), 1165–1172. https://ijoc.org/index.php/ijoc/article/view/14799

Soliman, W., & Rinta-Kahila, T. (2024). Unethical but not illegal! A critical look at two-sided disinformation platforms: Justifications, critique, and a way forward. Journal of Information Technology, 39(3), 441–476. https://doi.org/10.1177/02683962231181145

Vivian, S. (2023, August 31). Why NT and ACT votes in the Voice referendum count differently to states. ABC News. https://www.abc.net.au/news/2023-09-01/indigenous-voice-referendum-why-some-votes-count-less-nt-act/102242254

Walker, S., Mercer, D., & Bastos, M. (2019). The disinformation landscape and the lockdown for social platforms. Information, Communication & Society, 22(11), 1531–1543. https://doi.org/10.1080/1369118X.2019.1648536

Wester, F. P. J., Pleijter, A. R. J., & Renckstorf, K. (2004). Exploring newspapers’ portrayals: A logic for interpretive content analysis. Communications, 29(4), 495–513. https://doi.org/10.1515/comm.2004.29.4.495

Williams, D. (2023). Fourth constitutional power claim is first-rate nonsense. AAP FactCheck. https://www.aap.com.au/factcheck/fourth-constitutional-power-claim-is-first-rate-nonsense/

Yang, F., Dai, D., Heemsbergen, L., & Zhang, S. (2025, April 25). How do candidates skirt Chinese social media bans on political content? They use influencers. The Conversation. https://theconversation.com/how-do-candidates-skirt-chinese-social-media-bans-on-political-content-they-use-influencers-253847

Yang, F., Heemsbergen, L., & Fordyce R. (2024). Toward a translational news ecology: Covering the 2022 Australian federal election on WeChat. International Journal of Communication, 18(26), 4883–4908. https://ijoc.org/index.php/ijoc/article/view/22818/4828

Zaller, J. (1991). Information, values, and opinion. American Political Science Review, 85(4), 1215–1237. https://doi.org/10.2307/1963943

Zeigler, J. (2015). Red Scare racism and Cold War Black radicalism. The University Press of Mississippi.

Funding

This article is an extended version of an initial report funded by Freedom House in 2023.

Competing Interests

The authors declare no competing interests.

Ethics

This project has received ethics approval from the Office of Research Ethics and Integrity at the University of Melbourne (Reference No. 2025-28203-62873-7).

Copyright

This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided that the original author and source are properly credited.

Data Availability

University of Melbourne IRB restrictions do not allow further data sharing.

Acknowledgements

The authors appreciate the editors’ coordination and the reviewers’ feedback.