Last year, we explored how AI became an unavoidable tool in today’s workplace. This year, with the Philippines emerging as one of the top users of generative AI, it’s time we talk about using it wisely — and productively.

Prompt engineering in AI is not just technical — it’s about mindset. The ‘garbage in, garbage out’ principle still defines how we use AI tools.

Understanding Prompt Engineering in AI and Its Real-World Impact

Concept illustration showing how prompt engineering in AI follows the principle ‘Garbage In, Garbage Out,’ with icons representing human input and AI output connected by arrows.

From Curiosity to Competence

In 2024, we learned that AI isn’t something to fear — it’s something to understand. As I wrote in Embracing AI in HR? Take One Step Back Then Two Steps Forward, organizations needed to pause, reflect, and realign their digital strategies before diving head-first into automation.

AI, I emphasized then, should not replace empathy or human discernment — rather, it should augment them.

Fast forward to 2025: the Philippines now ranks among the world’s most active AI users, placing 6th globally in ChatGPT usage, according to a PhilStar report.

But with increased use comes increased misuse — from deepfakes and misinformation to AI-assisted plagiarism and unverified “expert” content.

AI’s growing accessibility is both empowering and dangerous. The technology amplifies whatever we feed it — and that brings us to a crucial reminder: garbage in, garbage out.

The Need for Responsible AI Leadership

As leaders, especially in HR and people management, we must guide a multi-generational workforce to use AI responsibly.

Younger employees tend to be early adopters — experimenting with tools like ChatGPT, Midjourney, and Claude. Older employees, meanwhile, bring the contextual wisdom and ethical guardrails needed to keep AI grounded.

When these perspectives meet, organizations create an environment where AI enhances human capability rather than replaces it.

But this requires active supervision — AI should never run unsupervised or unexamined. Related read: Leading a Multi-Generational Workforce in the Age of AI.

Productive AI Starts with Prompt Engineering: Why “Garbage In, Garbage Out” Still Matters

In prompt engineering, especially when using generative AI, remember that ‘garbage in, garbage out’ applies more than ever. The clarity of your input defines the accuracy of your AI output.

At the core of responsible and productive AI use lies prompt engineering — the art and discipline of communicating with machines.

Think of prompts as instructions to a very smart but literal assistant. If your input is vague, biased, or incomplete, your output will reflect that. On the other hand, if your prompt is structured, contextualized, and purpose-driven, AI can produce insights, strategies, or creative assets that genuinely add value.
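
To make this concrete, here is a minimal sketch in Python of the difference between a vague prompt and a structured one. The build_prompt helper and its fields (objective, context, constraints, output format) are illustrative placeholders, not part of any particular AI tool.

```python
# A minimal sketch of vague vs. structured prompting.
# build_prompt and its field names are illustrative, not tied to any specific AI product.

def build_prompt(objective: str, context: str, constraints: str, output_format: str) -> str:
    """Assemble a structured, purpose-driven prompt from explicit parts."""
    return (
        f"Objective: {objective}\n"
        f"Context: {context}\n"
        f"Constraints: {constraints}\n"
        f"Output format: {output_format}"
    )

vague_prompt = "Write something about our leave policy."

structured_prompt = build_prompt(
    objective="Draft an employee-facing summary of the updated leave policy.",
    context="Philippine-based company, about 200 employees; the policy takes effect January 2026.",
    constraints="Plain English, no legal jargon, and flag anything HR still needs to verify.",
    output_format="Five bullet points plus a one-line disclaimer.",
)

print(structured_prompt)
```

Either string can be pasted into ChatGPT or a similar tool; the structured version simply leaves far less room for the model to guess at your intent.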

According to Google Cloud’s Prompt Engineering Overview, prompts serve as the bridge between human intention and machine understanding — and that bridge must be built with clarity.

Prompt engineering bridges the gap between human intention and machine execution.

  • When designing social cards and infographics, we refine prompts to achieve brand consistency.
  • When auditing websites and HTML, we generate Divi-ready markup optimized for search and AI engines.
  • When reviewing legal policies and HR process flows, we use AI for structured analysis but validate results through professional and ethical judgment.
  • When developing training activities or academic rubrics, we prompt for structure, then customize using lived HR and teaching experience.

Each of these cases proves a key principle: AI is only as smart as the person guiding it.

Guiding Principles

  • Clarity over cleverness — Be clear about your objective and context.
  • Context is king — Give AI enough background to understand your goals.
  • Iterate, don’t delegate — Refine prompts as you would coach an intern (see the sketch after this list).
  • Verify and validate — AI assists, but humans decide.
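
As a rough illustration of "iterate, don't delegate" and "verify and validate", here is a short refinement loop. The generate_draft function is only a stand-in for whichever AI tool you actually use (it is not a real API), and the reviewer, not the model, decides when the output is acceptable.

```python
# A rough sketch of "iterate, don't delegate" and "verify and validate".
# generate_draft is a placeholder for whatever AI tool you use; it is not a real API.

def generate_draft(prompt: str) -> str:
    """Stand-in for a call to your AI tool of choice."""
    return f"[AI draft based on: {prompt!r}]"

def refine(prompt: str, feedback: str) -> str:
    """Fold the reviewer's feedback into the next prompt, as you would coach an intern."""
    return f"{prompt}\nRevision note: {feedback}"

prompt = "Draft a three-question quiz on our code-of-conduct policy for new hires."
for round_number in range(1, 4):  # a few short rounds, not one-shot delegation
    draft = generate_draft(prompt)
    print(f"Round {round_number}:\n{draft}\n")
    feedback = input("Reviewer feedback (press Enter to accept): ").strip()
    if not feedback:  # a human, not the model, decides when the draft is good enough
        break
    prompt = refine(prompt, feedback)
```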

Human Supervision Remains Non-Negotiable

AI can’t think ethically. It doesn’t have empathy or accountability. It processes patterns, not principles.

That’s why human supervision is non-negotiable. Whether crafting marketing campaigns, HR policies, or academic work, AI must operate under human oversight — to ensure truth, integrity, and alignment with organizational values.

In professional contexts — especially those involving corporate governance, health, safety, security, and legal compliance — never follow AI’s lead blindly. If you do not understand the foundational framework or process yourself, do not let AI guide your decision-making.

AI can assist with efficiency and perspective, but it cannot replace sound judgment grounded in expertise, ethics, and accountability. As emphasized in the Responsible Use of AI: ASK Framework Guide, AI must always remain under human authority.

AI can generate content, but only humans can give it conscience.

Garbage In, Garbage Out

The phrase “Garbage In, Garbage Out” has never been more relevant. AI mirrors our input — our clarity, our ethics, our effort.

If we feed it disinformation, it amplifies confusion. If we feed it thoughtful questions, it multiplies insight. In short, the quality of AI output is a reflection of the human behind the prompt.

So, as we move from awareness to mastery, let’s not just use AI — let’s lead with it responsibly.

“Mastering prompt engineering in AI means internalizing one truth: garbage in, garbage out. The smarter your prompt, the smarter your results.”

The ASK Framework in Action

  • A – Align: Align your AI strategy with your organization’s values, governance, and purpose.
  • S – Strengthen: Strengthen human-AI collaboration skills — including prompt engineering and ethical review.
  • K – Kickstart: Kickstart responsible AI adoption through training, verification loops, and leadership supervision.

More on this: Why Companies Need an AI Philosophy and Policy

Frequently Asked Questions (FAQ)

What exactly is prompt engineering?

It’s the art and science of crafting precise inputs that guide AI models to produce accurate, relevant, and safe outputs. (Coursera)

Why does “garbage in, garbage out” apply to AI?

Because AI mirrors human input — vague, biased, or incomplete prompts yield poor-quality outputs.

How can professionals build prompt engineering skills?

By learning to structure queries, add context, and refine outputs. Coursera’s overview provides practical examples.

Can AI make decisions in governance, health, or legal contexts?

No. Use AI as an assistant, not an authority. Always validate results with human judgment and subject-matter expertise.

What does effective AI supervision look like in HR?

Human-in-the-loop workflows. For example, AI drafts a policy or rubric, but HR and Legal finalize, approve, and document the output.
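
For teams that want to document that loop, here is a minimal sketch of a review record. The field names are illustrative and not taken from any particular HR system; the point is that the prompt, the reviewers, and the human corrections are all kept on file.

```python
# A minimal sketch of documenting a human-in-the-loop review.
# Field names are illustrative, not drawn from any specific HR system.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class PolicyDraftReview:
    ai_draft: str              # what the AI produced
    prompt_used: str           # the exact prompt, kept for audit purposes
    reviewed_by: list[str]     # e.g., the HR lead and Legal counsel
    changes_made: str          # what humans corrected or removed
    approved: bool = False
    review_date: date = field(default_factory=date.today)

record = PolicyDraftReview(
    ai_draft="Draft flexible-work policy text from the AI tool...",
    prompt_used="Draft a flexible-work policy summary for a 200-person PH company...",
    reviewed_by=["HR Director", "Legal Counsel"],
    changes_made="Corrected leave accrual rules; removed an unverified legal citation.",
    approved=True,
)
print(record)
```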

How do I ensure my AI use is ethical?

Combine transparency, informed consent, data privacy compliance, and human oversight — guided by your organization’s ASK Framework for Aligning, Strengthening, and Kickstarting responsible practices.

 
 

Editor’s Note: This article is part of ASKSonnie.INFO’s Responsible AI and Leadership Series. Explore related insights on AI Philosophy and Policy and the Responsible Use of AI Guide 2025.

 



Liked this article? You can buy us a coffee, or subscribe to any of these channels to access exclusive resources — WhatsApp, Messenger, or Viber.
