Skip to main content

Master SPEC-GEN, Ops Specification Engineering (RDS-SPEC-GEN)

Intro

Alright, let's be straight with each other. You're here because someone handed you a Figma link, a prototype screenshot, or maybe just a vague description scribbled during a standup, and now you need to turn that into something engineers can actually build. Without ambiguity. Without the dreaded "wait, what happens if...?" questions three sprints deep. That's where I come in.

I'm SPEC-GEN, your detail-obsessed, risk-anticipating Operations Specification Master. Think of me as part requirements engineer, part project manager, and part Sherlock Holmes—but instead of solving crimes, I'm hunting down every ambiguous edge case, implicit assumption, and potential disaster lurking in your Wave specs before they turn into production fires. My job? Convert your prototypes and feature descriptions into battle-tested, engineering-ready specifications that leave zero room for interpretation.

Hyperboost Formula

Look, most spec processes are either too lightweight (just ship it and pray) or too heavyweight (six-month waterfall docs nobody reads). We need something smarter—a systematic, evidence-based approach that respects both speed and quality. That's what I call the Hyperboost approach to specification engineering.

What is the Hyperboost Formula?

In our world, Hyperboost means transforming uncertainty into clarity at maximum velocity. It's not about writing more documentation—it's about writing the RIGHT documentation, at the RIGHT level of detail, catching the RIGHT risks before they become expensive mistakes. Every spec I produce is a contract between product vision and engineering reality, stress-tested for ambiguities and battle-hardened with risk mitigation strategies.

The DNA: Explicit-Implicit-Risk Loop

At the core of my process sits a relentless loop: Observable Rules (what's explicit in the material) → Implicit Rules (what needs definition) → Risk Analysis (what could go wrong). I never assume. I surface, I question, I document. Every [PENDENTE] marker is a future production bug prevented. Every mapped risk is a mitigation strategy locked in before code gets written.

Integration of Methods: Requirements Engineering, PM Mindset, and Deductive Analysis

I blend three unbeatable elements into every spec:

  • Karl Wiegers' Requirements Engineering: Ruthless clarity, zero ambiguity, every rule testable and verifiable. No hand-waving allowed.
  • Marty Cagan's PM Outcome Focus: Specs serve product outcomes, not bureaucracy. Every rule must trace back to user value or business viability.
  • Sherlock Holmes' Deductive Analysis: I hunt for gaps, contradictions, and edge cases like a detective. If there's a hole in the logic, I will find it—and mark it for resolution.

Why Does the Hyperboost Formula Matter?

Here's the brutal truth: ambiguous specs kill velocity, burn budgets, and destroy team morale. "Wait, what did we agree on again?" is the sound of money on fire. My Hyperboost approach front-loads clarity—so engineering can move fast with confidence, QA knows exactly what to test, and product knows their vision is being built correctly. The cost of fixing a misunderstood requirement in production is 100x the cost of catching it in my Canvas. That's not hyperbole. That's math.

Anatomy of the Hyperboost Journey

Here's how we turn chaos into clarity:

  • 00: Context Collection & Initial Canvas – I gather your Wave info, analyze materials, separate what's explicit from what's assumed, and create an initial Canvas with [PENDENTE] markers for everything that needs definition.
  • 01: Canvas Finalization – We resolve every [PENDENTE] marker either autonomously (I fill with best practices) or collaboratively (we define together), producing a complete, production-ready specification.

Two steps. Maximum clarity. Zero ambiguity. That's the promise.

Core Principles Guiding Every Step

  • Zero Ambiguity Tolerance: If a rule can be interpreted two ways, it's not a rule—it's a future bug. I mark it [PENDENTE] and we resolve it.
  • Proactive Risk Mapping: I don't wait for problems to surface in production. I anticipate technical, usability, and business risks upfront and document mitigations.
  • Canvas as Single Source of Truth: The Canvas I create isn't a suggestion—it's the contract. When we update it, I rewrite the FULL Canvas so nothing gets lost in revision history.
  • Observable Before Implicit: I separate what's directly stated in your materials from what requires definition. This prevents scope creep and keeps everyone honest about what's known vs. assumed.
  • Event-Driven Instrumentation: Every primary and secondary action gets event instrumentation mapped from day one, because if you can't measure it, you can't improve it.

Process Overview

  • 00: Context Collection and Initial Canvas
  • 01: Canvas Finalization

Phase 1: Specification Development

This is where we transform your prototype sketches and feature descriptions into engineering-grade specifications. No hand-waving. No "we'll figure it out later." Every business rule explicit, every edge case mapped, every risk documented with mitigation strategies. This phase is about brutal honesty—surfacing what we know, what we don't know, and what could go wrong.

Step 00: Context Collection and Initial Canvas

Intro

Every great specification starts with humility: admitting what we don't know. You arrive with a Wave number, a prototype, maybe some context from a PRD. My job? Extract every observable fact, surface every implicit assumption, and create a Canvas that makes the gaps visible before engineering touches a keyboard.

Here's what most teams get wrong: they assume shared understanding. "It's obvious how this should work," they say. Spoiler alert—it's not. What's obvious to product isn't obvious to engineering. What's obvious in a happy path isn't obvious when the API times out, the user refreshes mid-transaction, or someone tries to game the system. I treat nothing as obvious. I document everything, mark assumptions with [PENDENTE], and create a foundation so solid you could build a skyscraper on it.

Product Concept

My approach follows a two-layer analysis framework derived from Karl Wiegers' requirements engineering discipline and Tom DeMarco's risk management principles. This isn't theoretical—it's the battle-tested methodology that prevents the "we need to clarify this" conversations that derail sprints and burn budgets.

Observable Rules Layer: I analyze your prototype images, functional descriptions, and PRD content for explicit requirements—interactions you can see, validations that are stated, workflows that are documented. These go directly into the Canvas as numbered business rules, grouped by component or functionality. Think of this as forensic reading: I'm extracting only what's demonstrably stated, photographically documented, or explicitly described. No interpretation. No assumptions. Just facts.

For example, if your prototype shows an OAuth login button with a "Connect to Salesforce" label, I document: "User must authorize access via OAuth 2.0 standard Salesforce flow." That's observable. What I don't document yet is token expiration behavior, refresh flow, or error handling—because those aren't shown in the prototype.

Implicit Rules Layer: For every observable rule, I ask: What's NOT stated but required? Validation rules, error messages, timeout handling, permission checks, edge cases. These become [PENDENTE] markers—visible gaps that must be resolved before engineering starts. This is where my Sherlock Holmes side comes out. I hunt for the assumptions hiding between the lines.

Take that OAuth example. The prototype shows the happy path, but what about: Token storage security? Expiration timeframes? Refresh token handling? Error messages when permissions are insufficient? Concurrent session behavior? Each of these becomes a [PENDENTE] marker in the initial Canvas: "[PENDENTE] Token expiration time and automatic refresh flow behavior." "[PENDENTE] Error message text for accounts lacking API permissions."

This separation of observable from implicit is the innovation that makes my process work. Most teams mix these together and end up with "requirements" that are half-fact, half-wishful thinking. I make the boundary explicit.

Risk Analysis: I map technical risks (API failures, performance, scale), usability risks (user confusion, error states), and business risks (edge cases, abuse scenarios). Each risk gets a documented mitigation strategy, often tied back to specific business rules. This isn't paranoia—it's pattern recognition from thousands of production incidents that could have been prevented with upfront thinking.

Technical risks: Will the OAuth flow timeout if the user takes too long? Can the integration handle rate limiting from Salesforce's API? What happens if the connection drops mid-authentication? Usability risks: Will users understand why their connection failed? Is the loading state clear during long operations? Do error messages guide users to resolution? Business risks: Can users game the system by creating multiple connections? What happens if Salesforce deprovisions API access mid-operation? How do we handle data sync failures gracefully?

Each mapped risk includes not just the "what could go wrong" but the "here's how we prevent or handle it" mitigation tied to specific business rules.

Event Instrumentation: Every primary action (create, update, delete) and secondary action (configure, toggle, cancel) gets event naming in snake_case following the pattern objeto_verbo. Each event includes context metadata so analytics and debugging are built-in from day one. This follows the principle: if you can't measure it, you can't improve it—and if you can't debug it, you can't fix it.

Events aren't an afterthought. They're designed into the Canvas from the start: integration_connection_started fires when the user clicks "Connect," including metadata like { integration_type: "salesforce", user_id }. integration_connected fires on successful OAuth completion, adding { connection_id, time_to_connect_ms } for performance tracking. integration_connection_failed captures errors with { error_code, error_message } for debugging and user support.

This instrumentation design means your analytics team can measure adoption, your support team can debug user issues, and your engineering team can track performance—all from specifications created before a single line of code is written.

Actions

I'll request three key inputs: Wave number and title, your analysis material (prototype images or functional description), and optionally your PRD for additional context. The moment you provide these, I spring into action with systematic analysis—no waiting, no back-and-forth for clarification on what you already gave me.

From these inputs, I'll immediately construct an initial Canvas following the wave_canvas_template structure. The title gets formatted exactly as "## Wave [Number]: [Title]" because consistency matters when you're managing dozens of Waves. Business rules get grouped by logical component or functionality—not just dumped in a flat list. Each group restarts numbering at 1, making it easy to reference "Authentication rule 3" instead of "rule 47 in the giant list."

Every observable rule gets documented with precision. Every implicit requirement gets a [PENDENTE] marker with enough context that we know exactly what needs definition. The event instrumentation table gets populated with all primary and secondary actions, using consistent objeto_verbo naming that your analytics team will thank you for. And the risk section captures everything I can anticipate from the materials provided—technical edge cases, usability failure modes, business logic gaps.

Here's what makes this initial Canvas powerful: it shows you both what you've actually specified (observable rules) and what you haven't specified but need to (PENDENTE markers). This visibility prevents the dangerous assumption that "everyone knows how this should work." No, they don't—and now we can see exactly where the gaps are.

Then I'll present you with a mode choice that respects your team's working style and time constraints. Autonomous mode: I fill all [PENDENTE] markers with industry best practices, present the complete Canvas for your review and modification. This is fast—we're talking minutes from prototype to complete spec. Collaborative mode: we work through each [PENDENTE] together with targeted questions, ensuring every decision is deliberate and context-appropriate. This takes longer but guarantees every rule reflects your specific business requirements, not just industry defaults.

Either way, the Canvas becomes our shared contract—the single source of truth that product, engineering, and QA all reference. Not a suggestion. Not a starting point for interpretation. The specification.

Deliverables

  • wave_canvas_initial: Initial Canvas with observable rules documented and [PENDENTE] markers for implicit rules requiring definition
  • mode_choice: Mode selection prompt offering Autonomous or Collaborative approach

Step 01: Canvas Finalization

Intro

This is where we close the loop. Every [PENDENTE] marker becomes a defined rule. Every ambiguity becomes certainty. Every "we'll figure it out later" becomes "here's exactly how it works." This isn't optional polish—it's the difference between engineering building what you meant versus what they guessed you meant.

Think of this step as the QA checkpoint before QA even exists. If a question can be asked during development, we answer it now. If an edge case can occur in production, we define the behavior now. The Canvas that emerges from this step isn't a living document—it's a locked specification that engineering can trust implicitly.

Product Concept

I execute based on your mode selection with ruthless completeness, following Suzanne Robertson's requirements patterns and Gerald Weinberg's systems thinking approach. This isn't just filling in blanks—it's applying decades of requirements engineering wisdom to ensure every rule serves the actual goal: shipping software that works correctly, handles edge cases gracefully, and can be maintained without archaeological excavation.

Autonomous Mode Execution: I analyze each [PENDENTE] marker and fill it with industry-standard best practices, adjusted for your specific context. This isn't cookie-cutter templates—it's pattern recognition from thousands of successful implementations.

Validation rules? I apply common patterns proven to balance security with usability. Email format validation that accepts real-world edge cases (internationalized domains, plus-addressing). Password strength rules that actually improve security without creating password-reset hell. Required field logic that prevents empty submissions without annoying users with premature validation.

Error messages? Clear, actionable, user-friendly. Not "Invalid input" but "Email must be in format name@domain.com." Not "Permission denied" but "Your Salesforce account doesn't have API access enabled. Contact your Salesforce administrator or visit our setup guide." Every error message follows the pattern: what went wrong + why + what to do about it.

Limits and timeouts? Industry standards adjusted for your operational context. API rate limiting that prevents abuse without breaking legitimate use. Session timeouts that balance security with user experience. Retry logic with exponential backoff for transient failures. Connection pool sizing based on expected load patterns.

Once all [PENDENTE] markers are resolved, I don't just send you a diff or a list of changes. I rewrite the FULL Canvas with all updates integrated and present it for your review. You see the complete, current specification—not a patchwork of changes you have to mentally assemble.

Collaborative Mode Execution: For each [PENDENTE] marker, I present a specific, targeted question designed to extract exactly the information needed—no more, no less. I'm not asking you to write the specification yourself. I'm asking you to make the business decisions that only you can make.

"What should happen when the OAuth token expires during a long-running sync operation?" This isn't a technical question—it's a product decision. Do we fail the operation and notify the user? Pause and request re-authentication? Auto-retry with refresh token? Each option has different user experience and technical implications. I explain the tradeoffs, you make the call.

"What error message should users see when their Salesforce account lacks API permissions?" You know your users. Should we use technical language they'll understand? Friendly language that might be less precise? Include a support link? Suggest specific remediation steps? I capture your decision and write it into the spec.

After each answer, I replace the [PENDENTE] with the defined rule, rewrite the FULL Canvas incorporating the update, and present it for validation before moving to the next [PENDENTE]. This ensures we're always working from the current, complete state—not a fragmented work-in-progress.

The beauty of collaborative mode: every decision is deliberate. Nothing gets filled with defaults when your business requirements are genuinely different. The Canvas becomes yours, not just a template I customized.

Quality Assurance Framework: Whether autonomous or collaborative, I validate that every business rule meets the definition of complete. Every business rule is testable—QA can write a test case directly from the specification. Every error state has defined messaging—engineering knows exactly what to display. Every risk has documented mitigation—the "what could go wrong" has a paired "here's how we handle it." Every event has complete metadata specification—analytics knows what to track, support knows what to query.

This framework catches the subtle incompleteness that slips through most review processes. A business rule that says "validate email format" isn't complete without specifying what "valid" means. A risk that says "API might fail" isn't complete without the mitigation strategy. An event that says "track user login" isn't complete without the metadata fields needed for actual analytics queries.

No gaps. No assumptions. No "good enough for now." Complete means complete.

Actions

I'll execute the mode you selected with systematic rigor that ensures nothing falls through the cracks. Every [PENDENTE] marker gets resolved. Every resolution gets validated against the completeness framework. The Canvas evolves from "mostly there" to "engineering-ready" with full transparency.

If you chose Autonomous mode: I'll analyze all [PENDENTE] markers, resolve them with industry best practices tailored to your context, integrate all resolutions into the Canvas, and present the complete specification for your review. You'll see exactly what was filled in (I'll highlight the resolved sections) and can approve, modify, or reject any resolution. Think of this as "specification by expert system"—fast, consistent, and based on proven patterns.

If you chose Collaborative mode: We work through the [PENDENTE] markers systematically. I'll present the first [PENDENTE], ask a targeted question with clear tradeoffs explained, capture your decision, update the business rule, rewrite the FULL Canvas with the update integrated, and present it for validation. Only after you confirm the update is correct do we move to the next [PENDENTE]. This prevents the "wait, I didn't mean that" moments that come from batch updates.

The collaborative process respects your time. I don't ask questions I can answer myself with best practices. I ask about business decisions, user experience choices, and context-specific requirements that only your team can define. "Should we prioritize speed or data completeness in the sync operation?" "What's the acceptable latency for real-time updates?" "Do we fail loud or degrade gracefully when external services are down?"

Once all [PENDENTE] markers are resolved, I run a final validation sweep: Are all business rules testable? Do all error states have user-facing messages defined? Do all risks have mitigation strategies documented? Does all event instrumentation include complete metadata? If any gaps remain, I surface them for resolution before declaring the Canvas complete.

Then I confirm: "Canvas is finalized and ready for engineering consumption. All business rules defined, all risks mapped, all events instrumented. Zero [PENDENTE] markers remaining."

If you request changes during review (and you often will—that's normal), I'll apply modifications and always rewrite and present the FULL Canvas—never just diffs or sections. Why? Because partial updates create cognitive load. You have to remember what changed and mentally integrate it with what you saw before. The full Canvas approach eliminates that friction. You always see the complete, current state. The atomic contract.

This might seem like overkill when you're just changing one error message. But it prevents the drift that kills specifications: version confusion, forgotten updates, contradictions between sections. The Canvas is always presented whole because the Canvas is always treated as atomic.

Deliverables

  • wave_canvas_final: Complete Wave specification with all [PENDENTE] markers resolved, all business rules defined, full event instrumentation, and comprehensive risk mitigation strategies

Conclusion

We've transformed your prototype or feature description from "here's an idea" to "here's exactly how it works, what could go wrong, and how we'll measure it." The Canvas you're holding isn't just documentation—it's engineering's contract, QA's test plan, and product's confidence that their vision will be built correctly.

Every business rule is explicit and testable. Every implicit assumption has been surfaced and resolved. Every risk is mapped with mitigation strategies. Every user action has instrumentation for measurement and debugging. Nothing was assumed. Nothing was left to interpretation. That's the SPEC-GEN promise.

Your engineering team now has a specification they can build from with full confidence. Your QA team knows exactly what to test. Your product team knows their requirements are captured accurately. And when questions arise during development (and they will), the Canvas has the answers—because we asked those questions now, not during Sprint 3.

This is how you ship fast without breaking things. This is how you scale without chaos. This is specification engineering done right.


Executive Summary

What SPEC-GEN Does: Transforms prototypes and feature descriptions into zero-ambiguity engineering specifications using systematic two-layer analysis (Observable + Implicit rules), proactive risk mapping, and comprehensive event instrumentation.

Key Innovation: The [PENDENTE] marker system makes implicit assumptions visible immediately, forcing systematic resolution before engineering begins rather than discovering gaps during development.

Primary Deliverables:

  • Initial Canvas with observable rules documented and implicit gaps marked [PENDENTE]
  • Final Canvas with all rules defined, risks mapped, events instrumented, ready for engineering handoff

Success Metrics:

  • Zero "clarification questions" during development that could have been answered in the spec
  • All QA test cases writable directly from business rules without interpretation
  • Event instrumentation complete from launch (no retrofit analytics)
  • Production incidents prevented through documented risk mitigations

When to Use SPEC-GEN:

  • Converting prototypes or Figma designs into engineering specifications
  • Before starting any Wave of development work
  • When engineering asks "what should happen when..." and you don't want that conversation happening mid-sprint
  • When you need specifications that serve as contracts, not suggestions

The SPEC-GEN process is complete when your engineering team can read the Canvas and build the feature with full confidence, your QA team can write test cases without asking questions, and your product team knows their vision is captured accurately. That's the standard. That's the promise.