Regulating online hate speech and harmful content in Aotearoa New Zealand - beyond criminalisation and towards a statutory duty of care
Abstract
This thesis examines how New Zealand regulates online hate speech and harmful content, and evaluates whether current law provides effective protection in the digital environment. The study considers how social-media platforms shape the spread of harmful expression and assesses whether New Zealand’s existing legal framework is equipped to respond to these risks while maintaining the right to freedom of expression. The central question guiding the research is whether the present approach is adequate, and what reforms may be needed to address harm more effectively.
The thesis adopts an interpretivist and qualitative methodology, drawing on doctrinal, socio-legal, comparative, and political-legal methods. It uses behavioural, regulatory, and normative theories, including the Online Disinhibition Effect, modalities of regulation, and dignity- and equality-based approaches to free expression. These perspectives help explain why harmful content escalates so quickly online and why traditional legal tools struggle to respond.
The analysis proceeds in three stages. First, it examines the operation of digital platforms, focusing on algorithmic amplification, design choices, and the limits of automated moderation. Second, it reviews New Zealand’s legal framework, including the Harmful Digital Communications Act 2015, the Human Rights Act 1993, and the New Zealand Bill of Rights Act 1990. This review shows that the current system is reactive, fragmented, and heavily dependent on voluntary platform policies. Third, the thesis draws comparative insights from the United Kingdom, Australia, France, Germany, and the European Union, where more proactive models, particularly statutory duties of care and transparency obligations, have begun to address platform-level risks.
The research concludes that New Zealand’s present approach does not adequately respond to systemic and group-based harms. It argues that a statutory duty of care, supported by risk-assessment requirements, algorithmic transparency, and proportionate safeguards for freedom of expression, offers a more effective and balanced framework. The thesis contributes to existing scholarship by integrating behavioural and regulatory theory with comparative legal analysis and by proposing a model of platform accountability tailored to Aotearoa New Zealand’s legal context and human rights commitments.
Publisher
The University of Waikato