As concerns about artificial intelligence’s impact on businesses and society grow, industry leaders are increasingly recognizing the need for thoughtful policies that prioritize human interests while harnessing AI’s potential benefits.

This week, Gemma Mendoza, Rappler’s head of digital services and lead researcher on disinformation and platforms, conducted a specialized workshop guiding business executives through the complex process of developing people-centered AI policies for their organizations.

The workshop focused on balancing technological advancement with ethical considerations, addressing key questions about how AI systems should be designed, implemented, and regulated to serve human interests rather than undermine them.

“Organizations need to think critically about AI implementation beyond just efficiency gains,” Mendoza explained during the session. “The policies we develop today will shape how these powerful tools impact our employees, customers, and society at large.”

Participants worked through various frameworks for assessing AI risks specific to their industries, with particular attention to potential biases, privacy concerns, and transparency issues that could affect stakeholders. The workshop emphasized that effective AI governance requires input from diverse perspectives beyond just technical teams.

The timing of this initiative reflects growing global concern about unregulated AI development. Recent surveys indicate that while 68% of businesses plan to increase AI investments this year, only 41% have comprehensive policies guiding its ethical use, creating a governance gap that experts warn could lead to unintended consequences.

Rappler, a digital news organization based in the Philippines, has positioned itself at the intersection of media, technology and civic engagement. The company’s focus on disinformation research gives it unique insights into how automated systems can influence information ecosystems and public discourse.

“Media organizations have been on the frontlines of witnessing how algorithms and automated systems can affect information flow and public trust,” Mendoza noted. “These lessons are valuable for businesses developing their own AI strategies.”

The workshop highlighted several key principles for people-first AI policies, including maintaining meaningful human oversight of automated systems, ensuring transparency about when and how AI is being used, and establishing clear accountability structures for AI-related decisions.

Business leaders were also encouraged to consider how their AI implementations might affect stakeholder groups unevenly, with particular attention to vulnerable populations that could be disproportionately impacted by algorithmic decisions.

Industry analysts note that such proactive policy development represents a shift in corporate thinking about technology governance. “We’re seeing more companies recognize that waiting for regulation isn’t enough,” said Marissa Chen, a technology policy expert not involved in the workshop. “Forward-thinking organizations are establishing their own ethical guidelines that often go beyond legal requirements.”

The economic stakes are significant. The global AI market is projected to reach $190 billion by 2025, with applications spanning virtually every sector from healthcare and finance to retail and manufacturing. However, public trust in AI systems remains fragile, with research showing that perceived ethical issues can significantly impact consumer and employee relationships with companies deploying these technologies.

Workshop attendees received practical resources for developing their own organizational policies, including assessment tools, stakeholder engagement strategies, and implementation frameworks that can be customized for different business contexts.

“The goal isn’t to create barriers to innovation,” Mendoza emphasized, “but to ensure that innovation moves in directions that truly benefit people and society.”

As governments worldwide work to establish regulatory frameworks for AI, business-led initiatives like this workshop represent an important complement to public policy efforts, potentially establishing industry norms and best practices that could inform more formal regulations.

The workshop concluded with companies developing action plans for policy creation, with many participants committing to inclusive processes that would involve employees, customers, and other stakeholders in shaping their approach to AI governance.

A professional organisation dedicated to combating disinformation through cutting-edge research, advanced monitoring tools, and coordinated response strategies.

Company

Disinformation Commission LLC
30 N Gould ST STE R
Sheridan, WY 82801
USA

© 2026 Disinformation Commission LLC. All rights reserved.