Checklist for 3.8: Members of the Public
Objective
Safeguard personal privacy and digital rights when interacting with AI systems, understand the implications of AI on daily life, and participate in public discourse on responsible AI use (Mubeen, 2025; PCPD, 2025a).
Related to Part 2 Sub-Points: 2.4 Dynamic Consent Management and User Empowerment; 2.3 Transparency and Explainability.
Key Actions
Stay informed about how AI technologies collect, use, and share personal data, and understand your rights under relevant privacy laws.
Example: Review public resources, privacy notices, and media reports on AI and privacy (OAIC, 2024; PCPD, 2025a).
Related to Part 2 Sub-Point: 2.3 Transparency and Explainability.
Exercise your rights to access, correct, or delete personal data held by organizations using AI, and provide or withdraw consent as appropriate.
Example: Submit data access requests or opt out of data processing where available (Mubeen, 2025).
Related to Part 2 Sub-Point: 2.4 Dynamic Consent Management and User Empowerment.
Be vigilant about AI-driven misinformation, bias, and surveillance, and report concerns to regulators or consumer protection bodies.
Example: Notify authorities or organizations if you encounter discriminatory or invasive AI practices (Stanford HAI, 2025; PCPD, 2025a).
Related to Part 2 Sub-Point: 2.5 Bias Mitigation and Fairness Audits.
Avoid sharing sensitive personal information with public or untrusted AI tools, especially generative AI chatbots or platforms (see the sketch after this list).
Example: Do not input financial, health, or identification data into public AI systems (OAIC, 2024).
Related to Part 2 Sub-Point: 2.2 Data Minimization and Robust Access Controls.
Participate in community discussions, public consultations, or advocacy efforts to shape responsible AI governance and policy.
Example: Engage in forums, submit feedback to regulators, or join digital rights organizations (PCPD, 2025a; TwoBirds, 2025).
Related to Part 2 Sub-Point: 2.10 Regulatory Compliance and Adaptive Governance.
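For technically inclined readers, the point about not pasting sensitive data into public AI tools can be made concrete. The following is a minimal, illustrative Python sketch, not a complete PII detector: the flag_sensitive helper and the three patterns are hypothetical and deliberately simplified, and a real screening step would need far broader coverage and human review.

import re

# Illustrative, simplified patterns only; real sensitive-data detection needs much broader coverage.
SENSITIVE_PATTERNS = {
    "possible payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "HKID-like identifier": re.compile(r"\b[A-Z]{1,2}\d{6}\(?[0-9A]\)?"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive-looking patterns found in a draft prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

draft = "My HKID is A123456(7). Can you summarise the attached medical report?"
hits = flag_sensitive(draft)
if hits:
    print("Do not submit; remove:", ", ".join(hits))
else:
    print("No obvious identifiers found; still review the text before sharing.")

Even when no pattern matches, the safest habit remains the one in the checklist: keep financial, health, and identification data out of public AI systems altogether.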
Metrics for Success
Increase in the number of individuals exercising data rights (access, correction, deletion, consent changes) annually (Mubeen, 2025).
Related to Part 2 Sub-Point: 2.4 Dynamic Consent Management and User Empowerment.
Growth in public awareness of AI privacy risks and rights, as measured by surveys and participation in public consultations (PCPD, 2025a).
Related to Part 2 Sub-Point: 2.9 Cross-Functional Collaboration and Training.
Reduction in the number of incidents where sensitive personal data is shared with public AI tools (OAIC, 2024).
Related to Part 2 Sub-Point: 2.2 Data Minimization and Robust Access Controls.
Common Pitfalls to Avoid
Ignoring privacy notices or failing to review how AI systems process personal data (OAIC, 2024).
Related to Part 2 Sub-Point: 2.3 Transparency and Explainability.
Providing sensitive information to public or untrusted AI platforms without understanding the risks (Stanford HAI, 2025).
Related to Part 2 Sub-Point: 2.2 Data Minimization and Robust Access Controls.
Not reporting AI-related privacy or discrimination issues, missing opportunities to improve AI systems and protect community interests (PCPD, 2025a).
Related to Part 2 Sub-Point: 2.5 Bias Mitigation and Fairness Audits.
References
Mubeen, M. (2025, February 25). Privacy concerns in AI: Navigating the landscape in 2025. https://www.linkedin.com/pulse/privacy-concerns-ai-navigating-landscape-2025-marium-mubeen-dr3ef
Office of the Australian Information Commissioner (OAIC). (2024, October 15). Guidance on privacy and the use of commercially available AI products. https://www.oaic.gov.au/privacy/privacy-guidance-for-organisations-and-government-agencies/guidance-on-privacy-and-the-use-of-commercially-available-ai-products
Office of the Privacy Commissioner for Personal Data, Hong Kong (PCPD). (2025a, March 31). Checklist on guidelines for the use of generative AI by employees. https://www.pcpd.org.hk/english/news_events/media_statements/press_20250331.html
Stanford HAI. (2025, April 24). AI data privacy wake-up call: Findings from Stanford’s 2025 AI Index Report. https://www.kiteworks.com/cybersecurity-risk-management/ai-data-privacy-risks-stanford-index-report-2025/
TwoBirds. (2025, April 3). Gen AI at work: Hong Kong Privacy Commissioner publishes further guidance. https://www.twobirds.com/en/insights/2025/china/gen-ai-at-work-hong-kong-privacy-commissioner