


In 2013, I led a seminar on crafting Internet and Social Media Policies to protect organizations from reputational and legal risks. Twelve years later, the landscape has changed, and so have the risks. Today, the new frontier is Artificial Intelligence (AI).

AI tools — from generative chatbots to deepfakes — are already embedded in how people work, whether management has approved them or not.

The challenge for leaders is clear: How do we harness AI responsibly while protecting our organization’s reputation, compliance, and trust?

That’s why we need more than just rules. We need an AI Philosophy anchored on values, and an AI Policy to set guardrails. Using the ASK Framework (Awareness → Strategy → Knowledge), here’s how organizations can prepare.

 

[Image: AI Governance Framework]

 
 

Awareness: Why AI Philosophy and Policy Matter Now

 

Philippine Cases in AI Governance

 

  • 🔥 The “Burning Truck” Hoax (2025) — Firefighters in Manila rushed to a scene after seeing an AI-generated photo of a truck on fire. It looked real — but there was no fire. This wasted resources and showed how AI misinformation can mislead even professionals. (GMA News)
  • 🏛️ Sandiganbayan AI Pleadings (2025) — A lawyer submitted pleadings partly drafted by AI, filled with fabricated case citations. The court reminded lawyers: AI cannot replace human accountability. (PhilStar)
  • 🛒 Fake AI Startup (2025) — A shopping app marketed as “AI-powered” turned out to be human-run. Its founder faces fraud charges. (Cybernews)
  • 🪖 Defense Department AI Ban (2023): The DND barred soldiers from using AI portrait apps, citing data privacy and national security concerns. (AP News)

 


Global Cases in Responsible AI Use

 

  • ✈️ Air Canada (2024) — A tribunal ruled the airline was liable after its chatbot gave false refund advice. Lesson: Companies are responsible for their AI outputs. (ABA)
  • 📱 Samsung (2023) — Engineers uploaded confidential source code into ChatGPT. The company responded by banning generative AI on work devices. (TechCrunch)
  • 📰 CNET (2023) — AI-generated finance articles were found full of factual errors and plagiarism. The brand paused the program and issued corrections. (The Verge)

 
 

Strategy: From Rules to an AI Governance Framework

 

Creating an AI policy is not just a compliance exercise — it must reflect who you are as an organization. Here are key steps:

  • Anchor AI on company values — create an AI philosophy that aligns with Vision–Mission–Values.
  • Co-create policy across functions — HR, IT, Legal, and Communications must collaborate.
  • Scenario planning — simulate “AI gone wrong” situations, from fake fire alerts to biased hiring algorithms.
  • Educate and empower employees — train staff on responsible AI use and prohibited practices.
  • Establish oversight — form an AI Ethics Committee or designate compliance officers.
  • Annual review — update AI policies in line with Philippine Data Privacy laws, NPC advisories, and global regulations like the EU AI Act.

 

This is what I call moving from rules to philosophy: making sure AI use is guided by culture, not just compliance.

 
 

Knowledge: Checklist for AI Policy

 

A practical AI Policy Checklist includes the following items (a simple tracking sketch follows the list):

  • Scope & Definitions
  • Permitted vs Prohibited Use
  • Data Privacy & NPC Compliance
  • Transparency & Disclosure (flagging AI-assisted work)
  • Bias, Fairness & Human Oversight
  • IP & Content Ownership
  • Security & Access Controls
  • Crisis & Incident Response
  • Disciplinary Measures
  • Annual Policy Review
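
For teams that want to operationalize the checklist, here is a minimal, hypothetical sketch in Python. It is purely illustrative and not part of the ASK Framework or any official policy template: it captures the ten items as structured data so an internal tracker could flag which policy sections remain undocumented. The class, field names, and default owners are assumptions added for illustration.

```python
# Illustrative sketch only: field names and defaults are assumptions,
# not part of the ASK Framework or any official policy template.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ChecklistItem:
    """One AI Policy Checklist item and its current review status."""
    name: str
    owner: str = "Unassigned"            # e.g., HR, IT, Legal, or Communications
    documented: bool = False             # is there a written policy section yet?
    last_reviewed: Optional[str] = None  # ISO date of the latest annual review


# The ten checklist items from the article, captured as data.
AI_POLICY_CHECKLIST = [
    ChecklistItem("Scope & Definitions"),
    ChecklistItem("Permitted vs Prohibited Use"),
    ChecklistItem("Data Privacy & NPC Compliance"),
    ChecklistItem("Transparency & Disclosure (flagging AI-assisted work)"),
    ChecklistItem("Bias, Fairness & Human Oversight"),
    ChecklistItem("IP & Content Ownership"),
    ChecklistItem("Security & Access Controls"),
    ChecklistItem("Crisis & Incident Response"),
    ChecklistItem("Disciplinary Measures"),
    ChecklistItem("Annual Policy Review"),
]


def pending_items(checklist):
    """Return the names of items that still lack a documented policy section."""
    return [item.name for item in checklist if not item.documented]


if __name__ == "__main__":
    print("Checklist sections still undocumented:")
    for name in pending_items(AI_POLICY_CHECKLIST):
        print(f"  - {name}")
```

Run as-is, the sketch simply lists all ten items as undocumented; in practice, a compliance officer or AI Ethics Committee would update the entries as each section of the policy is drafted and reviewed.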

 

Risks vs Benefits of AI Use

 

Domain | Benefits | Risks
Customer Service | 24/7 support, efficiency | Chatbot errors → liability (Air Canada)
Productivity | Faster drafts, summaries | Data leakage via shadow AI (Samsung)
Marketing & PR | Scalable content creation | Errors, plagiarism (CNET)
HR & Hiring | Automated screening | Algorithmic bias, discrimination
Legal | Faster drafting | Fake citations (Sandiganbayan)
Public Safety | Faster insights | AI hoaxes waste resources (Burning Truck)
Healthcare | Assist diagnostics | Bias, privacy concerns
Reputation | Innovator image | Legacy content resurfacing (CBRE)

 

Final Thoughts

 

Back in 2013, I said: “You can block Facebook in the office, but you cannot block the 21st century.” In 2025, the same is true of AI. Banning AI isn’t realistic.

The only way forward is to build an AI Philosophy rooted in values, and an AI Policy with clear guardrails. That’s how we can turn AI into a driver of growth — not a trigger for crisis.

 
How is your organization preparing for AI governance? Do you already have an AI Philosophy and Policy, or is this still on your to-do list?
 
 
 

