Responsible Use of AI: Why It Matters Today
How Filipinos—now among the world’s top ChatGPT users—can lead in responsible use of AI and innovation.
AI now shapes what we see, hear, and believe: on our feeds, in classrooms, and across boardrooms. This guide shows you how to spot false advertising, deepfakes, and “manufactured expertise,” and how to use AI ethically at home, in school, and at work.

A Manila Café Moment: When “Real” Isn’t Real
Imagine scrolling your timeline while sipping a cappuccino in your favorite coffee shop. A flashy post appears: a well-known celebrity “endorses” a slimming product. The lighting looks legit and the voice sounds right, but something feels off.
Now imagine the same trick with a political figure, your favorite artist, or a star athlete nudging you to buy, vote, or invest.
Welcome to the attention economy of 2025: AI-driven false advertising, deepfake interviews, and synthetic brand amplification—all wearing the mask of “authentic.”
Responsible Use of AI in the 🇵🇭 Philippines and Globally
Philippine Snapshot (Oct 2025):
Recent reporting shows Filipinos are among the top ChatGPT users worldwide: about 42.4% of local internet users used ChatGPT in the past month, versus a global average of 26.5%. Yet only about 47.2% say they are excited about AI, near the global average.
This gap suggests we are adopting AI faster than we are building the ethics and digital literacy to use it well.
Source: The Philippine Star (Oct 20, 2025).
How the Game Is Being Played: AI-Altered Images & Videos on Social Media
AI tools make it trivial to fabricate spokespeople, endorsements, and “caught-on-camera” moments:
- Clickbait & clout-farming: doctored “hot takes” and fake confessions to farm views.
- Brand hijacks: celebrity “endorsements” for sketchy products.
- Election noise: synthetic audio/video to suppress votes or smear rivals.
- NSFW abuse: non-consensual sexualized deepfakes—often targeting women and minors.
Recent, well-documented cases (global)
- Taylor Swift deepfakes triggered platform crackdowns and policy debates; see also AP recap on X limiting searches.
- AI “Biden” robocalls told New Hampshire voters not to vote—charges filed; later the FCC finalized a fine (Reuters).
- Tom Hanks “dental plan” deepfake ad—actor warned fans about a fake AI likeness.
- Platforms awash in deepfake scam ads featuring public figures (Tech Transparency Project).
- Brazilian ring used deepfakes of Gisele Bündchen to sell fake promos (Times of India).
Philippine context
- DICT and COMELEC guidance flagged AI-boosted disinformation and deepfakes in the 2025 election cycle.
- WITNESS report: deepfake-driven red-tagging and election disinformation.
- Enforcement: NBI monitoring vloggers/fake-news networks (Philstar).
Bottom line: Synthetic media spreads because it’s cheap, fast, and rewarded by algorithms that monetize engagement—even when truth is the casualty.
“Amplifying Expertise” in Online Interviews (and How It’s Abused)
Remote interviews, webinars, and podcasts are now AI-augmented. That’s great—until it isn’t:
- A “consultant” posts a one-on-one with a Fortune 500 CEO—but it’s an AI-edited composite.
- A personal brand showcases AI-generated B-roll to fake travel, offices, or “global clients.”
- In academia or business, someone claims “I was interviewed by XYZ journal” but the clip is synthetic or credential-padded.
Why it works: visual + audio = trust. People rarely verify. This pattern even targets journalists, whose likeness is reused to sell fake products (Nieman Lab).
Why People Do It
- Simple fun / parody (satire, fan culture)
- Clickbait & growth (ad revenue, follower spikes)
- Brand hacking (fake collabs, invented expertise)
- Academic shortcuts (AI ghostwriting)
- Fraud & theft (impersonation, voice clones, investment scams)
- Propaganda & fake news (vote suppression, smear ops)
The Gray Zone: “Eerily Similar” but Not Exact
Some cases sit between lawful and harmful—e.g., a voice “eerily similar” to a celebrity that isn’t a literal clone. These raise questions of consent, ownership, and ethics (Reuters: Scarlett Johansson vs. “Sky” voice).
Responsible-Use Framework (Personal • Academic • Professional)
Foundations for Everyone
- Label AI: mark “AI-generated/assisted.”
- Don’t impersonate real people.
- No NSFW deepfakes.
- Cite/attribute sources; fact-check AI output for hallucinations.
- Guard privacy & IP.
- Verify before sharing: reverse-image search, lip-sync check, timeline sanity (see the sketch after this list).
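To make the reverse-image idea concrete, here is a minimal sketch that compares a suspect image against a known-official copy using a perceptual hash. It assumes the third-party Pillow and imagehash packages and hypothetical file names; a close hash only flags near-duplicate reuse of a trusted original and cannot catch every manipulation.

```python
# A minimal sketch, not a verification service: assumes Pillow and imagehash
# are installed (pip install Pillow imagehash) and uses hypothetical file names.
from PIL import Image
import imagehash

official = imagehash.phash(Image.open("official_photo.jpg"))  # trusted source copy
suspect = imagehash.phash(Image.open("viral_repost.jpg"))     # image being checked

# Subtracting two hashes gives the Hamming distance: small means the repost is
# visually the same picture; large means it diverges from the original.
distance = official - suspect
if distance <= 8:  # a common near-duplicate threshold for a 64-bit pHash
    print(f"Near-duplicate of the official image (distance {distance})")
else:
    print(f"Diverges from the official image (distance {distance}) - dig deeper")
```

Commercial reverse-image tools apply the same principle at index scale: they match an upload against billions of hashed images to show where, and when, a picture first appeared.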
Personal Use
- Keep edits non-deceptive.
- Don’t amplify fake leaks.
- Report deepfakes; restrict who can DM you and who can download your media.
Academic
- Follow institutional policy.
- Disclose AI aid; keep drafts and notes.
- Cite properly (e.g., “ChatGPT, 2025 prompt on…”).
Professional & Marketing
- Implement an AI Acceptable Use Policy.
- Use human-in-the-loop verification for public content.
- Get written talent consent and watermark AI assets.
- Train staff on election/brand-safety risks (Microsoft Asia).
Signs of Unethical AI Use—and How to Avoid Being Fooled
Quick Tell-Tales
- Mouth–audio mismatch or robotic tone.
- Too-perfect visuals or flickering edges.
- Warped hands, teeth, or text artifacts.
- Unrealistic timelines or reposted footage.
- Anonymous accounts pushing the same content (one more machine-checkable tell is sketched below).
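That extra tell: many AI image generators output files with no camera EXIF data. Below is a minimal first-pass check assuming only Pillow is installed; the file name is a placeholder. Treat an empty result as a yellow flag rather than proof, since legitimate platforms strip metadata too and bad actors can forge it.

```python
# A minimal first-pass check, assuming Pillow is installed (pip install Pillow)
# and a hypothetical file name. Missing camera EXIF is only a yellow flag.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("suspect_post.jpg")
exif = img.getexif()

if not exif:
    print("No EXIF metadata found - treat the image with extra skepticism.")
else:
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)  # map numeric tag IDs to readable names
        print(f"{name}: {value}")        # e.g. Make, Model, DateTime
```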
How to Protect Yourself
- Check the source: is it on an official channel?
- Reverse-search visuals and confirm dates.
- Verify quotes via legitimate outlets.
- Ignore urgency tactics (“Pay now,” “Join fast”).
- Check regulatory compliance: legitimate promos should display DTI permit details, and election-related material falls under COMELEC and DICT guidance.
- Stay digitally literate: production value ≠ authenticity.
- Ask for credentials: request verifiable profiles for job-seekers/speakers.
Policy note: House Bill 3214 (“Deepfake Regulation Act”) seeks to criminalize unauthorized use of likenesses (Asia IP explainer).
✝️ Biblical Lens: Responsibility & Accountability
“From everyone who has been given much, much will be demanded; and from the one who has been entrusted with much, much more will be asked.”
— Luke 12:48 (NIV)
AI is a powerful tool—a modern “talent” entrusted to us. Its responsible use is not just policy but stewardship.
Every click, post, and prompt carries influence; therefore, each of us—user, leader, and creator—must act with integrity.
The ASK Framework and Responsible Use of AI
Responsible use of AI aligns with the ASK Framework (Align, Strengthen, Knit):
- ALIGN – with truth, ethics, and intent. AI should serve clarity, not manipulation. See: How the ASK Framework Bridges Strategy to Everyday Impact.
- STRENGTHEN – digital discernment and moral maturity. Innovation should never outrun wisdom. Also see: Aligning Purpose, Strengthening Culture, and Kickstarting Execution.
- KNIT – human connection and accountability. Behind every algorithm are human consequences.
In short, responsible AI use is Corporate Adulting in the digital age: balancing freedom with accountability, innovation with ethics, and influence with integrity.
FAQs
Is a parody deepfake of a politician legal?
It depends. Even if parody is protected, platforms and election regulators can take it down if it misleads. See COMELEC/DICT guidance (Baker McKenzie).
Can I use AI to clean up my thesis?
Yes, with disclosure—follow your school’s policy and verify all references.
Is it okay to launch an AI avatar spokesperson for my brand?
Yes, if it’s an original character and properly labeled. Never mimic real people without consent; use provenance/watermarking and human editorial review (a minimal labeling sketch follows).
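To illustrate the simplest labeling layer, here is a minimal sketch that stamps a visible disclosure onto an AI-generated frame using Pillow (an assumed dependency; file names and label text are placeholders). Embedded, signed provenance such as C2PA metadata should complement a human-readable caption like this.

```python
# A minimal sketch of visible labeling, assuming Pillow is installed
# (pip install Pillow); file names and label text are placeholders.
from PIL import Image, ImageDraw

img = Image.open("avatar_frame.png").convert("RGB")
draw = ImageDraw.Draw(img)

# Draw the disclosure in a dark strip along the bottom edge for contrast.
w, h = img.size
draw.rectangle([(0, h - 28), (w, h)], fill=(0, 0, 0))
draw.text((8, h - 22), "AI-generated spokesperson", fill=(255, 255, 255))

img.save("avatar_frame_labeled.png")
```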
What if I’m targeted by a deepfake?
Take screenshots, file platform reports, alert PR/legal teams, and contact authorities. See NH DOJ case and Philstar NBI report.
🧠 Responsible AI Disclosure
Portions of this article were developed with AI assistance for image creation, research support, and formatting consistency.
All insights were reviewed and finalized by the ASKSonnie.INFO team in alignment with its Responsible Use of AI Framework.