{"id":209,"date":"2026-05-05T08:35:56","date_gmt":"2026-05-05T08:35:56","guid":{"rendered":"https:\/\/tokita.online\/what-is-harness-engineering\/"},"modified":"2026-05-05T08:55:21","modified_gmt":"2026-05-05T08:55:21","slug":"what-is-harness-engineering","status":"publish","type":"post","link":"https:\/\/tokita.online\/what-is-harness-engineering\/","title":{"rendered":"I Didn&#8217;t Know I Was Doing Harness Engineering"},"content":{"rendered":"<p>In early February 2026, <a href=\"https:\/\/mitchellh.com\/writing\/my-ai-adoption-journey\" target=\"_blank\" rel=\"noopener\">Mitchell Hashimoto<\/a> (co-founder of HashiCorp) described his habit of engineering permanent fixes into an AI agent&#8217;s environment whenever it made a mistake. He called it &#8220;engineering the harness.&#8221; Days later, <a href=\"https:\/\/openai.com\/index\/harness-engineering\/\" target=\"_blank\" rel=\"noopener\">OpenAI formalized the concept<\/a> in a blog post. Around the same time, without having read either, I wrote my first enforcement hook for a production AI system. Different continent, different scale, different context. Same problem.<\/p>\n<p>A few weeks later, Birgitta B&ouml;ckeler <a href=\"https:\/\/martinfowler.com\/articles\/harness-engineering.html\" target=\"_blank\" rel=\"noopener\">formalized it on Martin Fowler&#8217;s site<\/a>. Red Hat published their version. LangChain. Salesforce. By April, the term was everywhere.<\/p>\n<p>I didn&#8217;t discover any of this until recently. I was too busy building the thing they were naming.<\/p>\n<p>That&#8217;s not a flex. It&#8217;s something more interesting. When engineers face the same constraints (unreliable model outputs, production stakes, context that evaporates), they converge on the same solutions. Different trails, same summit. And if your messy pile of rules and scripts looks suspiciously like what OpenAI and Fowler describe, that&#8217;s not coincidence. 
It&#8217;s validation.<\/p>\n<h2>What Is Harness Engineering (And Why It Matters for AI Agents)<\/h2>\n<p>Harness engineering is the discipline of building the constraints, gates, memory systems, and feedback loops that wrap around an AI agent to make it reliable in production. The core equation, from Martin Fowler&#8217;s team: <strong>Agent = Model + Harness.<\/strong> The harness is everything around the model that you actually control. If <a href=\"\/context-engineering-vs-prompt-engineering\/\">context engineering<\/a> is about what reaches the model, harness engineering is about what constrains it after it responds.<\/p>\n<p><a href=\"https:\/\/developers.redhat.com\/articles\/2026\/04\/07\/harness-engineering-structured-workflows-ai-assisted-development\" target=\"_blank\" rel=\"noopener\">Red Hat<\/a> puts it differently. &#8220;The AI writes better code when you design the environment it works in.&#8221; Their framing is about structured workflows. Templates. Impact maps. Acceptance criteria.<\/p>\n<p>Both are right. Neither is complete.<\/p>\n<p>They describe the architecture. They don&#8217;t describe the pain that forces you to build it.<\/p>\n<h2>How My Harness Grew (Without Me Realizing What It Was)<\/h2>\n<p>I run a production AI system as a daily driver. Not a demo. Not a proof of concept. A system that manages infrastructure, writes code, deploys to servers, interacts with APIs, and handles real stakes across real projects. I co-founded <a href=\"https:\/\/aether-global.com\" target=\"_blank\" rel=\"noopener\">Aether Global Technology<\/a>, a Salesforce consulting partner in Manila. The system runs alongside that work.<\/p>\n<p>I never sat down and said &#8220;I&#8217;m going to build a harness.&#8221; I just kept getting burned, and kept adding rules so I wouldn&#8217;t get burned the same way twice. 
Looking back, every rule traces to a specific failure.<\/p>\n<p><strong>The anti-fabrication rules<\/strong> exist because the AI confidently stated a method existed in a file it hadn&#8217;t read. I spent 45 minutes debugging code that was never there. The fix wasn&#8217;t better prompting. It was a mechanical gate: before asserting any method name or file path, the system must verify via tool. No verification, no assertion. That&#8217;s a feedforward control, in Fowler&#8217;s language. I just called it &#8220;stop making things up.&#8221;<\/p>\n<p><strong>The deploy gate<\/strong> exists because the system nearly pushed Salesforce metadata to the wrong sandbox. 54 files, wrong org. The fix was a target allowlist per project, checked mechanically before any deploy command executes. A hard block, not a polite suggestion. (Sound familiar? <a href=\"\/ai-agent-production-safety\/\">An AI agent deleted a production database in 9 seconds<\/a> because nobody built one of these.)<\/p>\n<p><strong>The anti-drift rules<\/strong> exist because after multiple tool calls, the system&#8217;s mental model of a file diverges from the file&#8217;s actual state. It recalls values it read 20 minutes ago, not the values that exist now. The fix: re-read the source before emitting anything external-facing. Grep at write time, not recall time.<\/p>\n<p><strong>The citation requirement<\/strong> exists because the system generated a client proposal with a number it pulled from nowhere. In consulting, a wrong number in front of a client is a credibility hit you don&#8217;t recover from. The rule is simple now: every data claim needs a source. No source, mark it as unverified. No exceptions.<\/p>\n<p>None of these came from reading a framework. They came from things going wrong on a Tuesday afternoon.<\/p>\n<h2>What Fowler Gets Right<\/h2>\n<p>The dual-control model is real. 
You need both feedforward controls (rules that prevent bad behavior before it happens) and feedback controls (sensors that catch it after). Relying on just one creates blind spots.<\/p>\n<p>My system has 40+ feedforward hooks. They fire before tool calls, checking for unauthorized domains, verifying pre-task knowledge checks happened, blocking destructive git operations, enforcing deploy targets. The same problems I wrote about in <a href=\"\/autonomous-ai-agents-production-cost\/\">what autonomous agents actually cost in production<\/a>. That&#8217;s Fowler&#8217;s &#8220;guides&#8221; category.<\/p>\n<p>The feedback side is thinner. I have post-execution checks and monitoring, but the honest truth is that feedforward controls do most of the heavy lifting. Catching a bad action before it executes is cheaper than cleaning up after it runs.<\/p>\n<p>Fowler also nails the distinction between computational and inferential controls. My deploy gate is computational. It checks a JSON allowlist. Takes milliseconds. My anti-fabrication system is inferential. It relies on the model itself to flag uncertainty. That&#8217;s slower, less reliable, and more expensive. But it catches things no deterministic check can.<\/p>\n<h2>What the Frameworks Miss<\/h2>\n<p><strong>Harnesses are incident-driven, not architecture-driven.<\/strong> The literature treats harness engineering as a design discipline. It is, eventually. But every harness I&#8217;ve seen starts as a pile of duct tape applied after something broke. The elegance comes later.<\/p>\n<p><strong>Context survival is the real engineering problem.<\/strong> Nobody talks about this enough. AI agents operate in conversation windows. Those windows compress. When they compress, the agent forgets rules, loses project state, and starts making the same mistakes you fixed three hours ago. 
My harness has a dedicated recovery protocol: when context compresses, reload memory, re-read project state, verify the date (the agent doesn&#8217;t know what day it is after compression). That&#8217;s not in any of the frameworks. It should be.<\/p>\n<p><strong>The harness is the product, not the model.<\/strong> When people evaluate AI systems, they compare models. Claude vs. GPT vs. Gemini. That&#8217;s the wrong comparison. The model is interchangeable. I&#8217;ve run the same harness across model versions, and the harness determines output quality more than the model does. A disciplined harness on a weaker model beats an unconstrained stronger model every time.<\/p>\n<p><strong>Human checkpoints aren&#8217;t optional.<\/strong> Red Hat says &#8220;human review between planning and implementation.&#8221; That&#8217;s correct but undersells it. In my system, any task with three or more steps requires a plan review before execution. Single-step tasks state the intended action and wait. This isn&#8217;t a nice-to-have. It&#8217;s the difference between an AI agent that helps and one that creates work.<\/p>\n<h2>Same Summit, Different Trails<\/h2>\n<p>Here&#8217;s what I find encouraging about this whole thing.<\/p>\n<p>My first hook was mid-February 2026. By March, I&#8217;d codified the principle &#8220;mechanical enforcement over behavioral commitment&#8221; because telling the model not to do something stopped working the moment context compressed. By April, I had 30+ hooks, a memory layer that survives compression, and a pre-task gate system that forces verification before every edit.<\/p>\n<p>I built all of this without reading a single blog post about harness engineering. I built it because things kept breaking, and I was tired of fixing the same failures manually.<\/p>\n<p>OpenAI, Fowler, Red Hat, LangChain, Salesforce. They all arrived at the same architecture from the enterprise side. I arrived from the practitioner side. 
A guy in Manila running one AI system across 40+ projects, duct-taping rules onto it every time something went wrong.<\/p>\n<p>The fact that we converged tells you something important: <strong>this isn&#8217;t a framework you adopt. It&#8217;s a shape that production forces you into.<\/strong> If you&#8217;re running an AI agent on real work and you&#8217;ve started writing rules, blocking certain commands, requiring verification steps before deploys, you&#8217;re already doing harness engineering. You just didn&#8217;t know it had a name.<\/p>\n<p>The industry version is clean. Diagrams with boxes. Three regulation dimensions. Harness templates.<\/p>\n<p>The practitioner&#8217;s version is messier. A behavioral rules file that grew from 5 rules to 13 because the AI kept finding new ways to drift. A hook that blocks web searches because the AI was burning API calls on questions its own knowledge base could answer. A gate that forces the system to check what day it is before referencing time, because it hallucinated the date twice.<\/p>\n<p>Both versions work. Both are valid. The diagram didn&#8217;t exist when I needed a solution. The solution existed when the diagram caught up.<\/p>\n<p>If you&#8217;re building something like this and wondering whether you&#8217;re doing it right, check it against Fowler&#8217;s framework. If your scrappy infrastructure maps to their categories (guides, sensors, computational controls, inferential controls), you&#8217;re on the right track. The problems are universal. The solutions are convergent. 
And you don&#8217;t need permission from a blog post to keep building.<\/p>\n<h2>Frequently Asked Questions<\/h2>\n<details style=\"border-bottom: 1px solid #eee; padding: 16px 0; margin: 0;\">\n<summary style=\"cursor: pointer; font-weight: 600; font-family: Inter, sans-serif; font-size: 16px; color: #1a1a2e; list-style: none; display: flex; justify-content: space-between; align-items: center;\">What is harness engineering?<span style=\"color: #00BFA6; font-size: 20px; transition: transform 0.2s;\">+<\/span><\/summary>\n<div style=\"padding: 12px 0 4px 0; color: #444; font-size: 15px; line-height: 1.7;\">Harness engineering is the discipline of building the systems, constraints, and feedback loops that wrap around an AI agent to make it reliable. Martin Fowler defines it as: Agent = Model + Harness, where the harness is everything except the model itself. This includes rules, gates, memory systems, deploy protections, and human checkpoints.<\/div>\n<\/details>\n<details style=\"border-bottom: 1px solid #eee; padding: 16px 0; margin: 0;\">\n<summary style=\"cursor: pointer; font-weight: 600; font-family: Inter, sans-serif; font-size: 16px; color: #1a1a2e; list-style: none; display: flex; justify-content: space-between; align-items: center;\">How is harness engineering different from prompt engineering?<span style=\"color: #00BFA6; font-size: 20px; transition: transform 0.2s;\">+<\/span><\/summary>\n<div style=\"padding: 12px 0 4px 0; color: #444; font-size: 15px; line-height: 1.7;\">Prompt engineering focuses on crafting better inputs to the model. Harness engineering focuses on the infrastructure around the model: the rules that constrain its behavior, the gates that block bad outputs, and the memory systems that maintain context. Prompt engineering is one input to the harness. 
The harness is the whole system.<\/div>\n<\/details>\n<details style=\"border-bottom: 1px solid #eee; padding: 16px 0; margin: 0;\">\n<summary style=\"cursor: pointer; font-weight: 600; font-family: Inter, sans-serif; font-size: 16px; color: #1a1a2e; list-style: none; display: flex; justify-content: space-between; align-items: center;\">Do you need harness engineering for AI agents?<span style=\"color: #00BFA6; font-size: 20px; transition: transform 0.2s;\">+<\/span><\/summary>\n<div style=\"padding: 12px 0 4px 0; color: #444; font-size: 15px; line-height: 1.7;\">If your AI agent does anything with real consequences (deploying code, accessing APIs, modifying data), yes. Without a harness, you are trusting the model&#8217;s judgment on every action. Models are capable but not reliable. The harness is what makes an unreliable component into a reliable system.<\/div>\n<\/details>\n<details style=\"border-bottom: 1px solid #eee; padding: 16px 0; margin: 0;\">\n<summary style=\"cursor: pointer; font-weight: 600; font-family: Inter, sans-serif; font-size: 16px; color: #1a1a2e; list-style: none; display: flex; justify-content: space-between; align-items: center;\">What is the difference between harness engineering and context engineering?<span style=\"color: #00BFA6; font-size: 20px; transition: transform 0.2s;\">+<\/span><\/summary>\n<div style=\"padding: 12px 0 4px 0; color: #444; font-size: 15px; line-height: 1.7;\">Context engineering is about what information reaches the model and when. Harness engineering is broader. It includes context management but also covers execution constraints, deploy gates, anti-fabrication rules, human checkpoints, and feedback loops. Context engineering is a subset of harness engineering.<\/div>\n<\/details>\n","protected":false},"excerpt":{"rendered":"<p>In early February 2026, Mitchell Hashimoto (co-founder of HashiCorp) described his habit of engineering permanent fixes into an AI agent&#8217;s environment whenever it made a mistake. 
He called it &#8220;engineering the harness.&#8221; Days later, OpenAI formalized the concept in a blog post. Around the same time, without having read either, I wrote my first enforcement hook for a production AI system. Different continent, different scale, different context. Same problem. A few weeks later, Birgitta B&ouml;ckeler formalized it on Martin Fowler&#8217;s site. Red Hat published their version. LangChain. Salesforce. By April, the term was everywhere. I didn&#8217;t discover any of this until recently. I was too busy building the thing they were naming. That&#8217;s not a flex. It&#8217;s something more interesting. When engineers face the same constraints (unreliable model outputs, production stakes, context that evaporates), they converge on the same solutions. Different trails, same summit. And if your messy pile of rules and scripts looks suspiciously like what OpenAI and Fowler describe, that&#8217;s not coincidence. It&#8217;s validation. What Is Harness Engineering (And Why It Matters for AI Agents) Harness engineering is the discipline of building the constraints, gates, memory systems, and feedback loops that wrap around an AI agent to make it reliable in production. The core equation, from Martin Fowler&#8217;s team: Agent = Model + Harness. The harness is everything around the model that you actually control. If context engineering is about what reaches the model, harness engineering is about what constrains it after it responds. Red Hat puts it differently. &#8220;The AI writes better code when you design the environment it works in.&#8221; Their framing is about structured workflows. Templates. Impact maps. Acceptance criteria. Both are right. Neither is complete. They describe the architecture. They don&#8217;t describe the pain that forces you to build it. How My Harness Grew (Without Me Realizing What It Was) I run a production AI system as a daily driver. Not a demo. Not a proof of concept. 
A system that manages infrastructure, writes code, deploys to servers, interacts with APIs, and handles real stakes across real projects. I co-founded Aether Global Technology, a Salesforce consulting partner in Manila. The system runs alongside that work. I never sat down and said &#8220;I&#8217;m going to build a harness.&#8221; I just kept getting burned, and kept adding rules so I wouldn&#8217;t get burned the same way twice. Looking back, every rule traces to a specific failure. The anti-fabrication rules exist because the AI confidently stated a method existed in a file it hadn&#8217;t read. I spent 45 minutes debugging code that was never there. The fix wasn&#8217;t better prompting. It was a mechanical gate: before asserting any method name or file path, the system must verify via tool. No verification, no assertion. That&#8217;s a feedforward control, in Fowler&#8217;s language. I just called it &#8220;stop making things up.&#8221; The deploy gate exists because the system nearly pushed Salesforce metadata to the wrong sandbox. 54 files, wrong org. The fix was a target allowlist per project, checked mechanically before any deploy command executes. A hard block, not a polite suggestion. (Sound familiar? An AI agent deleted a production database in 9 seconds because nobody built one of these.) The anti-drift rules exist because after multiple tool calls, the system&#8217;s mental model of a file diverges from the file&#8217;s actual state. It recalls values it read 20 minutes ago, not the values that exist now. The fix: re-read the source before emitting anything external-facing. Grep at write time, not recall time. The citation requirement exists because the system generated a client proposal with a number it pulled from nowhere. In consulting, a wrong number in front of a client is a credibility hit you don&#8217;t recover from. The rule is simple now: every data claim needs a source. No source, mark it as unverified. No exceptions. 
None of these came from reading a framework. They came from things going wrong on a Tuesday afternoon. What Fowler Gets Right The dual-control model is real. You need both feedforward controls (rules that prevent bad behavior before it happens) and feedback controls (sensors that catch it after). Relying on just one creates blind spots. My system has 40+ feedforward hooks. They fire before tool calls, checking for unauthorized domains, verifying pre-task knowledge checks happened, blocking destructive git operations, enforcing deploy targets. The same problems I wrote about in what autonomous agents actually cost in production. That&#8217;s Fowler&#8217;s &#8220;guides&#8221; category. The feedback side is thinner. I have post-execution checks and monitoring, but the honest truth is that feedforward controls do most of the heavy lifting. Catching a bad action before it executes is cheaper than cleaning up after it runs. Fowler also nails the distinction between computational and inferential controls. My deploy gate is computational. It checks a JSON allowlist. Takes milliseconds. My anti-fabrication system is inferential. It relies on the model itself to flag uncertainty. That&#8217;s slower, less reliable, and more expensive. But it catches things no deterministic check can. What the Frameworks Miss Harnesses are incident-driven, not architecture-driven. The literature treats harness engineering as a design discipline. It is, eventually. But every harness I&#8217;ve seen starts as a pile of duct tape applied after something broke. The elegance comes later. Context survival is the real engineering problem. Nobody talks about this enough. AI agents operate in conversation windows. Those windows compress. When they compress, the agent forgets rules, loses project state, and starts making the same mistakes you fixed three hours ago. 
My harness has a dedicated recovery protocol: when context compresses, reload memory, re-read project state, verify the date (the agent doesn&#8217;t know what day it is after compression). That&#8217;s not in any of the frameworks. It should be. The harness is the product, not the model. When people evaluate AI systems, they compare models. Claude vs. GPT vs. Gemini. That&#8217;s the wrong comparison. The model is interchangeable. I&#8217;ve run the same harness across model versions, and the harness determines output quality more than the model does. A disciplined<\/p>\n","protected":false},"author":1,"featured_media":211,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[13],"tags":[],"class_list":["post-209","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-insights"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.2 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>What Is Harness Engineering? A Practitioner&#039;s Take<\/title>\n<meta name=\"description\" content=\"Harness engineering is the discipline of constraining AI agents for production reliability. I built one over 200+ sessions before the term existed. Here is what the frameworks miss.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/tokita.online\/what-is-harness-engineering\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"What Is Harness Engineering? A Practitioner&#039;s Take\" \/>\n<meta property=\"og:description\" content=\"Harness engineering is the discipline of constraining AI agents for production reliability. I built one over 200+ sessions before the term existed. 
Here is what the frameworks miss.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/tokita.online\/what-is-harness-engineering\/\" \/>\n<meta property=\"og:site_name\" content=\"Tom Tokita\" \/>\n<meta property=\"article:published_time\" content=\"2026-05-05T08:35:56+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-05-05T08:55:21+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/tokita.online\/wp-content\/uploads\/2026\/05\/featured-harness-engineering.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1024\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Tom Tokita\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Tom Tokita\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"What Is Harness Engineering? A Practitioner's Take","description":"Harness engineering is the discipline of constraining AI agents for production reliability. I built one over 200+ sessions before the term existed. Here is what the frameworks miss.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/tokita.online\/what-is-harness-engineering\/","og_locale":"en_US","og_type":"article","og_title":"What Is Harness Engineering? A Practitioner's Take","og_description":"Harness engineering is the discipline of constraining AI agents for production reliability. I built one over 200+ sessions before the term existed. 
Here is what the frameworks miss.","og_url":"https:\/\/tokita.online\/what-is-harness-engineering\/","og_site_name":"Tom Tokita","article_published_time":"2026-05-05T08:35:56+00:00","article_modified_time":"2026-05-05T08:55:21+00:00","og_image":[{"width":1024,"height":1024,"url":"https:\/\/tokita.online\/wp-content\/uploads\/2026\/05\/featured-harness-engineering.jpg","type":"image\/jpeg"}],"author":"Tom Tokita","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Tom Tokita","Est. reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/tokita.online\/what-is-harness-engineering\/#article","isPartOf":{"@id":"https:\/\/tokita.online\/what-is-harness-engineering\/"},"author":{"name":"Tom Tokita","@id":"https:\/\/tokita.online\/#\/schema\/person\/b420ed074b20ee6cb7a1f0f11c8dacdd"},"headline":"I Didn&#8217;t Know I Was Doing Harness Engineering","datePublished":"2026-05-05T08:35:56+00:00","dateModified":"2026-05-05T08:55:21+00:00","mainEntityOfPage":{"@id":"https:\/\/tokita.online\/what-is-harness-engineering\/"},"wordCount":1696,"publisher":{"@id":"https:\/\/tokita.online\/#organization"},"image":{"@id":"https:\/\/tokita.online\/what-is-harness-engineering\/#primaryimage"},"thumbnailUrl":"https:\/\/tokita.online\/wp-content\/uploads\/2026\/05\/featured-harness-engineering.jpg","articleSection":["Insights"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/tokita.online\/what-is-harness-engineering\/","url":"https:\/\/tokita.online\/what-is-harness-engineering\/","name":"What Is Harness Engineering? 
A Practitioner's Take","isPartOf":{"@id":"https:\/\/tokita.online\/#website"},"primaryImageOfPage":{"@id":"https:\/\/tokita.online\/what-is-harness-engineering\/#primaryimage"},"image":{"@id":"https:\/\/tokita.online\/what-is-harness-engineering\/#primaryimage"},"thumbnailUrl":"https:\/\/tokita.online\/wp-content\/uploads\/2026\/05\/featured-harness-engineering.jpg","datePublished":"2026-05-05T08:35:56+00:00","dateModified":"2026-05-05T08:55:21+00:00","description":"Harness engineering is the discipline of constraining AI agents for production reliability. I built one over 200+ sessions before the term existed. Here is what the frameworks miss.","breadcrumb":{"@id":"https:\/\/tokita.online\/what-is-harness-engineering\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/tokita.online\/what-is-harness-engineering\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/tokita.online\/what-is-harness-engineering\/#primaryimage","url":"https:\/\/tokita.online\/wp-content\/uploads\/2026\/05\/featured-harness-engineering.jpg","contentUrl":"https:\/\/tokita.online\/wp-content\/uploads\/2026\/05\/featured-harness-engineering.jpg","width":1024,"height":1024},{"@type":"BreadcrumbList","@id":"https:\/\/tokita.online\/what-is-harness-engineering\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/tokita.online\/"},{"@type":"ListItem","position":2,"name":"I Didn&#8217;t Know I Was Doing Harness Engineering"}]},{"@type":"WebSite","@id":"https:\/\/tokita.online\/#website","url":"https:\/\/tokita.online\/","name":"Tom 
Tokita","description":"","publisher":{"@id":"https:\/\/tokita.online\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/tokita.online\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/tokita.online\/#organization","name":"Tom Tokita","url":"https:\/\/tokita.online\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/tokita.online\/#\/schema\/logo\/image\/","url":"https:\/\/tokita.online\/wp-content\/uploads\/2026\/03\/tokita-logo-clear-cropped.webp","contentUrl":"https:\/\/tokita.online\/wp-content\/uploads\/2026\/03\/tokita-logo-clear-cropped.webp","width":474,"height":151,"caption":"Tom Tokita"},"image":{"@id":"https:\/\/tokita.online\/#\/schema\/logo\/image\/"}},{"@type":"Person","@id":"https:\/\/tokita.online\/#\/schema\/person\/b420ed074b20ee6cb7a1f0f11c8dacdd","name":"Tom Tokita","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/1be5e8ad1bd8baf1b5103aa27f1190be4ad3ede9953719e4c3540813988094aa?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/1be5e8ad1bd8baf1b5103aa27f1190be4ad3ede9953719e4c3540813988094aa?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/1be5e8ad1bd8baf1b5103aa27f1190be4ad3ede9953719e4c3540813988094aa?s=96&d=mm&r=g","caption":"Tom 
Tokita"},"sameAs":["https:\/\/tokita.online"],"url":"https:\/\/tokita.online\/author\/t-tokitajr\/"}]}},"_links":{"self":[{"href":"https:\/\/tokita.online\/?rest_route=\/wp\/v2\/posts\/209","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/tokita.online\/?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/tokita.online\/?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/tokita.online\/?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/tokita.online\/?rest_route=%2Fwp%2Fv2%2Fcomments&post=209"}],"version-history":[{"count":5,"href":"https:\/\/tokita.online\/?rest_route=\/wp\/v2\/posts\/209\/revisions"}],"predecessor-version":[{"id":215,"href":"https:\/\/tokita.online\/?rest_route=\/wp\/v2\/posts\/209\/revisions\/215"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/tokita.online\/?rest_route=\/wp\/v2\/media\/211"}],"wp:attachment":[{"href":"https:\/\/tokita.online\/?rest_route=%2Fwp%2Fv2%2Fmedia&parent=209"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/tokita.online\/?rest_route=%2Fwp%2Fv2%2Fcategories&post=209"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/tokita.online\/?rest_route=%2Fwp%2Fv2%2Ftags&post=209"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}