The Philippines has the highest AI adoption rate in ASEAN: 92%, according to the 2025 Philippine AI Report. That number sounds impressive until you read the next line.
65% of those organizations are stuck in pilot.
Not scaling. Not in production. Piloting. Running POCs that never graduate. Building demos that never see real users. And the pattern repeats: another hackathon, another pitch deck, another POC competing for the same thin use case.
I’ve watched this from the inside for two years — running an enterprise consulting firm in Manila while building AI operations infrastructure on the side. What I’ve seen is a country full of smart builders solving the wrong layer of the problem.
The POC Trap
Here’s what the typical Philippine AI journey looks like:
Someone discovers Lovable, Replit, or Bolt. They build something in a weekend — a chatbot, a document processor, a “smart” dashboard. It works. They demo it. Maybe it wins a competition.
Then reality hits.
The app needs to handle more than 10 users. It needs to connect to an actual database that isn’t a Google Sheet. It needs authentication, logging, error handling, monitoring. It needs to run when the builder isn’t watching.
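To make that gap concrete, here is a rough sketch of one slice of the missing plumbing: a model call wrapped with logging, retries, and basic error handling. Everything here is illustrative; `model_fn` stands in for whatever client you actually use, and a real service would also add auth, metrics, and tracing.

```python
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("ai-service")

def call_model(prompt: str, model_fn, retries: int = 3, backoff: float = 1.0) -> str:
    """Call an LLM with logging, retries, and error handling.

    model_fn is whatever client you use; this wrapper only adds the
    operational plumbing the demo version skips.
    """
    for attempt in range(1, retries + 1):
        try:
            start = time.monotonic()
            result = model_fn(prompt)
            log.info("model call ok attempt=%d latency=%.2fs", attempt, time.monotonic() - start)
            return result
        except Exception as exc:  # in production, catch your client's specific errors
            log.warning("model call failed attempt=%d error=%s", attempt, exc)
            if attempt == retries:
                raise
            time.sleep(backoff * attempt)
```

The demo version is one line. The production version is everything around that line.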
And this is where 65% of Philippine AI projects die. Not because the idea was bad. Not because the builder wasn’t talented. Because nobody planned for what happens after “it works on my machine.”
The problem isn’t intelligence. The Philippines has no shortage of skilled developers. The problem is that the entire ecosystem is optimized for building demos, not running systems.
What an AI Expert in the Philippines Actually Sees
It’s not a tools problem. The Philippines doesn’t need another chatbot builder or another “AI-powered” SaaS product fighting over the same narrow use cases.
What’s missing is the boring stuff. The stuff that doesn’t win hackathons or trend on LinkedIn:
1. Context management. LLMs forget everything between conversations. If your AI system can’t maintain context across sessions — what your organization has decided, what’s been tried, what failed — you’re starting from zero every time. I wrote about this in Context Engineering: it’s infrastructure, not prompting.
2. Anti-fabrication. AI makes things up. Everyone knows this. Almost nobody builds mechanical systems to catch it. Every data point needs a source. Every claim needs evidence. Every “I don’t know” needs to actually say “I don’t know” instead of guessing confidently. This isn’t a prompt engineering problem — it’s an architecture problem.
3. Operational persistence. Your AI assistant is useless if it loses its memory every time the session ends. The knowledge captured last Tuesday needs to be available next Thursday. The decision made in one project needs to inform work in another. This requires persistent storage, indexing, retrieval systems — none of which come free with an API key.
4. Multi-system orchestration. Real work doesn’t happen inside a single app. It happens across Salesforce, Jira, Google Workspace, SSH connections, deployment pipelines. An AI that can write a nice email but can’t check your CRM, update your tickets, or deploy code isn’t an operations system — it’s a toy.
5. Failure handling. What happens when the AI gets stuck? What happens when it loops? What happens when it fabricates a method name that doesn’t exist and tries to call it? Production AI systems need circuit breakers, escalation paths, and the humility to stop and ask for help. I covered the real cost of getting this wrong in what autonomous agents actually cost in production.
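What a mechanical stop looks like can be sketched in a few lines. This is a toy circuit breaker, not a library: the names and thresholds are invented, and a real system would add timeouts, half-open recovery, and an actual escalation channel.

```python
class CircuitBreaker:
    """Stop calling a flaky agent step after repeated failures.

    Illustrative sketch: the point is a mechanical stop plus escalation,
    not prompt-level hope that the model behaves.
    """

    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0
        self.open = False  # open circuit = stop and hand off to a human

    def run(self, step, *args):
        if self.open:
            raise RuntimeError("circuit open: escalate to a human")
        try:
            result = step(*args)
            self.failures = 0  # success resets the count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.open = True
            raise
```

Three failed attempts at the same step and the system stops, instead of looping all night on a method name that doesn’t exist.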
The Infrastructure Nobody’s Building
I run Aether Global Technology Inc. — a Salesforce consulting firm in Manila. I’m the CEO who still admins his own Jira and maintains his own servers. Not by choice, originally — by necessity.
When we deployed an enterprise platform across three call centers for a major Philippine airline in 89 days, the challenge wasn’t the technology — it was making it work reliably at scale, across teams, under real production pressure. That experience shapes how I think about AI infrastructure today.
Separately from client work, I’ve spent the past year building a personal AI operations system — a living R&D lab that I use every day to run my own work. Not a product. Not something I sell. A daily driver that I built because nothing on the market solved the actual problem: how do you run a complex operation when you’re wearing 10 hats and can’t afford to lose context?
The answer wasn’t a better chatbot. It was infrastructure.
Persistent memory that survives session boundaries. Mechanical gates that prevent fabrication. Agent orchestration that coordinates work across platforms. Failure handling that stops loops before they waste hours. All of it built for one user — me — and iterated on daily.
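At its smallest possible scale, “persistent memory that survives session boundaries” is not exotic. Here is a minimal sketch using nothing but SQLite; the table layout and names are my own for illustration, and a real system adds indexing, retrieval ranking, and richer source attribution.

```python
import sqlite3

class SessionMemory:
    """Minimal persistent memory: notes survive session boundaries.

    A sketch only. Every stored note carries a source, so recall
    returns evidence, not just assertions.
    """

    def __init__(self, path: str = "memory.db"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS notes (project TEXT, note TEXT, source TEXT)"
        )

    def remember(self, project: str, note: str, source: str):
        self.db.execute("INSERT INTO notes VALUES (?, ?, ?)", (project, note, source))
        self.db.commit()

    def recall(self, project: str) -> list:
        rows = self.db.execute(
            "SELECT note, source FROM notes WHERE project = ?", (project,)
        )
        return rows.fetchall()
```

The decision captured last Tuesday is one `recall()` away next Thursday. None of this comes free with an API key, but none of it is hard either; it just has to be built.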
I didn’t build this because it was trendy. I built it because I was drowning without it.
Why the Philippines Needs AI Operations, Not More AI Apps
The Philippines is at 92% AI adoption. That’s remarkable. But adoption isn’t the same as capability.
The country has over a thousand AI startups. Seven just went to GITEX Asia. The government launched the National AI Centre (NAICRI). There are 19 AI bills pending in Congress. The numbers look good.
But underneath?
- Most organizations use GenAI for drafting emails and internal memos
- Only 12% use heavy-duty development frameworks
- Brain drain pulls senior engineers overseas
- Electricity costs are among the highest in Southeast Asia
- No national AI coordinating body exists yet
The gap isn’t tools or talent. It’s systems thinking.
The Philippines doesn’t need more AI apps. It needs AI experts who understand what happens between “the demo works” and “it runs reliably in production, unsupervised, at scale.” That’s not a technology problem — it’s an engineering discipline.
And right now, very few people in the Philippine AI ecosystem are talking about it.
The Survival Engineer Frame
I call this “survival engineering.” Not because it sounds good — because it’s accurate.
When you’re the CEO of a small firm competing against companies 50 times your size, you don’t have the luxury of building AI for fun. You build it because if you don’t, you literally cannot keep up with the workload. Every system you build has to work tomorrow morning. Every automation has to handle the edge case your client throws at you on Friday at 5pm.
This is different from building a POC for a pitch deck. This is different from winning a hackathon. This is production AI for a practitioner who can’t afford downtime.
The Philippines has plenty of people who can build the demo. What it needs are AI experts who can build what comes after — the AI operations infrastructure that keeps systems running, learning, and scaling without burning everything down.
What I’d Tell a Filipino Builder Starting Today
Stop building apps. Start building systems.
- Don’t ship a chatbot. Ship a chatbot with persistent memory, source attribution, and failure handling.
- Don’t build another POC. Build something that orchestrates three existing tools into a workflow that didn’t exist before.
- Don’t optimize your prompt. Optimize the infrastructure that feeds your prompt the right context at the right time.
- Don’t chase the next framework. Master the boring fundamentals: error handling, logging, state management, graceful degradation.
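To make the source-attribution point concrete, here is a toy anti-fabrication gate. All names are invented for illustration; the point is that the check is mechanical, enforced in code, not a polite request buried in a system prompt.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    text: str
    source: Optional[str]  # where this came from, or None if the model guessed

def gate_answer(claims: list) -> str:
    """Mechanical anti-fabrication gate (illustrative, not a library).

    Every claim must carry a source; anything unsourced is replaced
    with an explicit "I don't know" instead of a confident guess.
    """
    lines = []
    for c in claims:
        if c.source:
            lines.append(f"{c.text} [{c.source}]")
        else:
            lines.append("I don't know; no source found for that.")
    return "\n".join(lines)
```

A sourced claim passes through with its citation attached. An unsourced one never reaches the user as fact, no matter how fluent it sounded.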
The market is flooded with people who can build v1. Very few — in the Philippines or anywhere — can keep v1 running, improve it to v2, and scale it to v10 without burning everything down.
That’s where the real value is. And that’s the kind of AI expertise the Philippines actually needs.
If you’re building AI systems for Philippine enterprises and want to compare notes, I’m always up for a conversation.
Tom Tokita is the Co-Founder, President and CEO of Aether Global Technology Inc., a Salesforce consulting firm in Manila. He builds AI operations infrastructure as a personal R&D practice — not as a product, but as a practitioner’s daily tool for running a complex enterprise operation. Connect at tom@tokita.online.