Regulatory Insights
Responsible AI in Indonesia: Corporate Accountability, Ethical Governance, and Fair Workforce Practices
Artificial intelligence (AI) has quickly moved from experimental to indispensable. Whether embedded in recruitment software, customer analytics, or automated decision-making tools, AI systems now shape how businesses operate and how employees and consumers are evaluated.
For companies operating in Indonesia—especially those with regional or multinational footprints—the emergence of AI presents both strategic advantage and escalating legal risk, driving momentum toward “responsible AI” at both the national and global levels.
The Indonesian government has begun embedding “responsible AI” in its policy discourse, aiming to align rapid technological innovation with existing frameworks for data protection, employment, and corporate governance.
The question is no longer whether AI should be adopted, but how it can be deployed responsibly, lawfully, and with appropriate oversight.
I. Understanding Responsible AI: From Ethics to Compliance
“Responsible AI” is an umbrella term encompassing the principles of fairness, transparency, accountability, privacy, and human oversight in AI deployment. It emerged as a response to concerns about algorithmic bias, opaque decision-making, and the displacement of human judgment.
In Indonesia, the Ministry of Communication and Informatics (“Kominfo”), together with the National Research and Innovation Agency (“BRIN”), has prepared an AI Ethics Guideline emphasizing accountability, inclusiveness, and protection of human values. Although not yet legally binding, it signals the government’s expectation that AI development and use should reflect Pancasila values and human rights principles.
Companies should expect future codification of these standards through sectoral regulations or amendments to the Electronic Information and Transactions (“ITE”) Law and its implementing regulations.
From a compliance perspective, responsible AI in Indonesia will likely rest on these legal pillars:
- Data Protection: under Law No. 27 of 2022 on Personal Data Protection (“PDP Law”), governing consent, processing, and automated decision-making involving personal data.
- Corporate Governance: under Law No. 40 of 2007 (and its supplementing regulations) on Limited Liability Companies, which imposes fiduciary duties of prudence and responsibility on directors—including in technology adoption.
However, it is worth noting that each AI use case may invite additional legal considerations. For example, the use of AI in hiring implicates Employment and Anti-Discrimination requirements derived from Law No. 13 of 2003 (“Manpower Law”) and Law No. 8 of 2016 on Persons with Disabilities (“Disabilities Law”), which require fair and non-discriminatory treatment in hiring and workplace management.
II. Corporate Accountability and the Board’s Role
For corporate leaders, AI risk is no longer purely operational—it is strategic and reputational. The adoption of AI tools must therefore fall within the scope of directors’ and commissioners’ fiduciary duties.
The use of AI systems that result in discriminatory decisions, data breaches, or misleading outputs could arguably constitute a breach of such duty if the board failed to exercise adequate oversight or due diligence prior to adoption.
This aligns with emerging global governance expectations, where directors are required to understand and manage digital risks, not merely delegate them to technical teams.
From a governance standpoint, Indonesian boards should consider the following measures:
- AI Risk Assessment Framework: Prior to deploying AI, companies should assess potential legal, ethical, and operational risks—particularly in areas involving personal data, consumer profiling, or employment decisions.
- AI Ethics Committee or Officer: Appointing a cross-functional team (comprising legal, HR, compliance, and IT) to review AI projects against internal and regulatory standards.
- Transparency and Documentation: Maintaining records of algorithmic design, data sources, and decision logic to support accountability in the event of disputes or audits.
- Vendor and Model Governance: Where AI systems are procured from third parties, companies remain accountable for compliance. Contracts should include clauses on data use, audit rights, and error accountability.
In practice, many Indonesian companies—especially financial institutions and tech firms—have begun adopting internal “AI governance charters.” These documents typically outline the company’s approach to fairness, accuracy, and human oversight. For in-house counsel, aligning such charters with the company’s risk appetite and compliance framework is critical to ensuring they have real, rather than symbolic, effect.
III. Responsible AI in Hiring and Workforce Management
AI-powered recruitment tools are increasingly common among Indonesian employers, particularly those with regional operations or large-scale hiring needs. Algorithms are now used to screen résumés, assess video interviews, and even predict cultural fit. While these tools promise efficiency, they raise complex legal and ethical questions—especially concerning discrimination and privacy.
Legal Considerations under Indonesian Employment Law
Under Articles 5 and 6 of the Manpower Law, every worker has equal opportunity without discrimination based on gender, ethnicity, religion, or political orientation. Similarly, the Disabilities Law prohibits discriminatory recruitment practices.
If an AI system inadvertently screens out candidates from protected groups—because of biased training data or design flaws—the employer could face allegations of unlawful discrimination.
The challenge lies in the opacity of many AI models: employers often cannot fully explain why an algorithm recommended or rejected a candidate. This lack of transparency conflicts with the principle of “explainability,” a cornerstone of responsible AI and a likely requirement in future Indonesian regulation.
Data Protection and Candidate Consent
Recruitment AI systems often process sensitive personal data—photos, voice samples, or psychometric information—qualifying as personal data under the PDP Law.
Under Article 20(2) of the PDP Law, data subjects have the right to object to automated decision-making that has legal or significant effects. Employers using AI-driven hiring tools must therefore provide clear disclosure and obtain explicit consent, including an explanation of the logic and potential consequences of automated processing.
Non-compliance can lead to administrative sanctions or civil claims, including compensation for unlawful data processing. In-house counsel should ensure that privacy notices and consent forms explicitly mention the use of AI tools in recruitment and define the extent of automated decision-making.
Mitigating Bias and Ensuring Fairness
Beyond legal compliance, responsible AI in hiring also demands active mitigation of algorithmic bias. Practical measures include:
- Bias Testing: Regularly auditing models to detect disproportionate outcomes across gender, ethnicity, or other protected attributes.
- Human Oversight: Ensuring human review in final hiring decisions; AI should inform, not determine, employment outcomes.
- Model Explainability: Selecting vendors who can provide interpretable AI models or audit documentation.
- Training for HR Teams: Equipping human resource professionals to understand AI limitations and avoid overreliance.
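The bias-testing measure above can be illustrated with a minimal sketch of a disparate-impact check on screening outcomes. The candidate records, group labels, and the 80% (“four-fifths”) flagging threshold below are illustrative assumptions for audit purposes only—they are not standards drawn from Indonesian law.

```python
# Minimal illustration of a disparate-impact check on hiring outcomes.
# The sample data, group labels, and the 80% ("four-fifths") threshold
# are illustrative assumptions, not statutory requirements.

from collections import defaultdict

def selection_rates(records):
    """Return the share of candidates advanced per group."""
    totals, advanced = defaultdict(int), defaultdict(int)
    for group, was_advanced in records:
        totals[group] += 1
        if was_advanced:
            advanced[group] += 1
    return {g: advanced[g] / totals[g] for g in totals}

def disparate_impact_ratio(records):
    """Ratio of the lowest group selection rate to the highest.
    Values well below 1.0 suggest the tool's outcomes differ
    markedly across groups and warrant human review."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: (group, advanced_to_interview)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

ratio = disparate_impact_ratio(outcomes)
if ratio < 0.8:  # a commonly cited screening heuristic, not a legal test
    print(f"Flag for human review: impact ratio {ratio:.2f}")
```

A flagged ratio is a prompt for the human oversight step described above—legal and HR review of the model's outcomes—rather than conclusive evidence of unlawful discrimination.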
The use of AI in hiring, while nascent, is likely to attract scrutiny as part of broader discussions on fairness and workplace equality. Firms that act early to embed transparency and ethical oversight will gain reputational advantage and mitigate litigation risk.
IV. Practical Guidance for In-House Counsel
In-house legal teams play a pivotal role in shaping corporate AI governance. Beyond ensuring compliance, they must translate abstract ethical principles into operational reality.
Practical steps include:
- Map AI Use Cases: Catalogue all AI and automated decision systems used within the organization—from HR and marketing analytics to customer service bots.
- Conduct Legal Impact Assessments: Evaluate each system against data protection, employment, and consumer laws. Identify risks of bias, discrimination, or privacy breach.
- Draft AI Governance Policy: Establish internal procedures for model approval, monitoring, and review. Incorporate alignment with Kominfo’s AI ethics principles.
- Incident Response: Establish a protocol for responding to AI-related errors or data incidents, including notification to affected individuals and regulators under the PDP Law.
- Vendor Due Diligence: Review AI vendor terms for liability, transparency, and data handling obligations. Negotiate warranties or audit rights where possible.
- Employee Awareness: Train HR, marketing, and operations teams to recognize ethical boundaries and legal duties when using AI tools.
- Board Engagement: Ensure AI risk management features in board agendas and enterprise risk frameworks.
Ultimately, responsibility for AI outcomes cannot be outsourced to algorithms or third-party vendors. The corporate entity—and, by extension, its directors—remains accountable for compliance and ethical stewardship.
Closing note
The age of responsible AI is not a distant vision—it is already unfolding. In Indonesia, where digital transformation intersects with evolving legal norms, companies face both opportunity and obligation. Adopting responsible AI means integrating ethical principles into corporate governance, ensuring transparency in human resource management, and anticipating regulatory change before it mandates compliance.
For directors, in-house counsel, and compliance leaders, the path forward is clear: treat AI not merely as a technological tool, but as a governance subject demanding prudence, accountability, and foresight.
By embedding these values early, Indonesian companies can build the foundation for sustainable innovation—one where trust, fairness, and competitiveness coexist.
Read our article about Domestic Component Level (TKDN) here https://murzallawfirm.com/moi-regulation-no-35-of-2025-a-comprehensive-reform-of-indonesias-domestic-component-regime/
MURZAL & PARTNERS
For more information, please reach us at Murzal & Partners Law Firm to:
e-Mail: info@murzallawfirm.com
Jakarta Office Phone: +62 21 515 2505
Bali Office Phone: +62 361 620 9986
LinkedIn: Murzal & Partners Law Firm
Disclaimer:
The foregoing material is the property of MNP and may not be used by any other party without prior written consent. The information herein is of general nature and should not be treated as legal advice, nor shall it be relied upon by any party for any circumstance. Specific legal advice should be sought by interested parties to address their particular circumstances.
Any links contained in this document are for informational purposes and are available and relevant at time this publication is made. We provide no liability whatsoever in respect of any information or content in such links.