Explore the full set of positive patterns and anti-patterns detected across transcripts.
Total patterns: 33
Positive patterns: 22
Anti-patterns: 11
Emergent: 27
Engaging the AI in a pre-coding phase to clarify requirements, identify edge cases, and define a roadmap through follow-up questions.
Requesting a high-level implementation strategy or architectural map before generating implementation files.
Explicitly assigning a technical seniority level to the AI to control the abstraction level and documentation style.
Transitioning the AI's role from 'Creator' to 'Quality Assurance' to intentionally seek out logic flaws and edge cases in already-generated code.
Using temporary runtime logs and state snapshots to verify internal logic before proposing a permanent fix.
Offering multiple implementation paths with trade-offs before committing to a single approach.
User provides concrete examples or references to guide the AI's output. Improves consistency by grounding the request in existing patterns.
Proactively implementing rate-limiting or optimization techniques like debouncing and memoization during feature creation.
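A minimal sketch of the two techniques named above. The helper names `debounce` and `memoize` are illustrative, not imports from any particular library:

```typescript
// Debounce: collapse a burst of calls into one call after a quiet period.
function debounce<A extends unknown[]>(fn: (...args: A) => void, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A): void => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Memoize: cache results of a pure single-argument function.
function memoize<T, R>(fn: (arg: T) => R): (arg: T) => R {
  const cache = new Map<T, R>();
  return (arg: T): R => {
    if (!cache.has(arg)) cache.set(arg, fn(arg));
    return cache.get(arg)!;
  };
}
```

In a real feature these would typically come from an established utility library rather than being hand-rolled; the point of the pattern is that the AI applies them unprompted where inputs fire rapidly (search boxes, resize handlers) or computations repeat.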
User specifies clear boundaries and requirements upfront. Includes explicit constraints, success criteria, and compatibility requirements.
Instructing the AI to scan the codebase for existing design systems or code styles before suggesting changes.
User provides feedback and refines requests over multiple turns. Characterized by follow-up questions, corrections, and progressive clarification.
Delegating business logic (filtering, API calls) to parent pages while keeping UI components 'dumb' and reusable.
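A sketch of the separation above, assuming an illustrative `Product` type: the parent owns the business decision (which items to show), while the presentational layer only formats what it is handed. In a React codebase `renderList` would be a component receiving `products` as props:

```typescript
interface Product {
  name: string;
  inStock: boolean;
}

// Parent-owned business logic: decides WHICH items to show.
function filterInStock(products: Product[]): Product[] {
  return products.filter((p) => p.inStock);
}

// "Dumb" presentational layer: decides only HOW to show them.
// It has no knowledge of filtering rules, API calls, or app state.
function renderList(products: Product[]): string {
  return products.map((p) => `- ${p.name}`).join("\n");
}
```

Because the component never filters or fetches, it can be reused anywhere the parent supplies a list, and the business rule stays testable in isolation.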
Defining logic or providers that rely on a specific runtime context without ensuring that the consumer components are correctly wrapped or mounted within that context.
Generating large, structured, and repeatable datasets to test UI resilience and performance without a backend.
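One way to make such datasets both large and repeatable is to drive generation from a seeded pseudo-random generator, so every run produces identical data. The schema (`MockUser`, `makeUsers`) is illustrative; the PRNG is the well-known mulberry32:

```typescript
// Mulberry32: small, fast, deterministic PRNG seeded with an integer.
function seededRandom(seed: number): () => number {
  let s = seed >>> 0;
  return () => {
    s = (s + 0x6d2b79f5) >>> 0;
    let t = s;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

interface MockUser {
  id: number;
  name: string;
  active: boolean;
}

// Same seed -> same dataset, so UI stress tests are reproducible.
function makeUsers(count: number, seed = 42): MockUser[] {
  const rand = seededRandom(seed);
  return Array.from({ length: count }, (_, i) => ({
    id: i,
    name: `user-${i}`,
    active: rand() < 0.8,
  }));
}
```

Feeding `makeUsers(10_000)` into a list or table component exercises virtualization, pagination, and rendering performance without any backend.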
Assuming a package is available in the environment without verifying the dependency manifest (package.json).
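The check is cheap: parse the manifest and look the package up before generating an import. In this sketch `manifest` stands in for a parsed package.json (in a real project you would `JSON.parse(fs.readFileSync("package.json", "utf8"))`):

```typescript
// Shape of the relevant slice of package.json.
type Manifest = {
  dependencies?: Record<string, string>;
  devDependencies?: Record<string, string>;
};

// True only if the package is declared in the manifest.
function hasDependency(manifest: Manifest, pkg: string): boolean {
  return Boolean(
    manifest.dependencies?.[pkg] ?? manifest.devDependencies?.[pkg]
  );
}
```

Generating `import _ from "lodash"` without this check is exactly the anti-pattern: the code type-checks in isolation but fails at install or build time.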
The AI generates a superficially functional solution that fails under load or in complex scenarios because it omits necessary boundary conditions or state persistence.
Code is generated that assumes a specific runtime context (like a Context Provider or a specific Hook) exists, but that context is actually missing or at the wrong level of the component tree.
Adjusting the placement of state providers and logic bridges (e.g., Server-to-Client) to ensure context is available where needed without breaking framework rules.
Using appropriate waiting mechanisms and act() wrappers to handle asynchronous updates in tests.
Implementing code that relies on sub-packages or peer dependencies without ensuring the primary library is present in the environment.
Explicitly instructing the AI to remove scaffolding, logs, and instrumentation once a diagnostic cycle is complete.
Requesting an objective evaluation of a feature's completeness and quality against the original technical specification.
Creating documentation tailored for different stakeholders, from high-level business value to low-level implementation guides.
Leaving temporary debugging logs, markers, or probes in the production code after a fix is verified.
Requesting a high-level explanation or onboarding guide for complex generated features to ensure maintainability by other humans.
User requests lack specific details or success criteria. Vague prompts increase the need for revisions and clarifications.
Creating a dedicated, temporary route or page to isolate and interactively test a complex new feature or component without side effects from the main application.
Inserting temporary logging or diagnostic probes into the code to gather runtime evidence when static analysis fails to identify a bug.
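A minimal sketch of such a probe: a pass-through logger tagged with a fixed marker so the scaffolding is easy to grep for and delete once the diagnostic cycle is complete (the `probe` helper and the "[PROBE]" marker are illustrative conventions, not a library API):

```typescript
// Temporary instrumentation only -- remove after the diagnostic cycle.
function probe<T>(label: string, value: T): T {
  console.log(`[PROBE] ${label}:`, JSON.stringify(value));
  return value; // pass-through, so the probe can wrap any expression in place
}

// Example: inspect an intermediate value inside a pipeline without
// restructuring the surrounding code.
const total = [1, 2, 3]
  .map((n) => probe("doubled", n * 2))
  .reduce((a, b) => a + b, 0);
```

Because `probe` returns its input unchanged, it can be inserted at any suspect point and removed again without altering behavior, which pairs with the cleanup pattern of explicitly stripping scaffolding once the fix is verified.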
Repeatedly requesting the same performance improvements across multiple turns without verifying if the AI is still applying previously agreed-upon optimizations.
User accepts AI suggestions without verification or testing. This can lead to subtle bugs or regressions in production.
Requesting a step-by-step walkthrough of how code logic handles state, data flow, or edge cases to verify the implementation logic.
Attempting to implement code that is incompatible with the project's actual runtime environment, Node version, or hidden dependency constraints.
User provides too much information in a single prompt. Long, multi-part requests often lead to incomplete or confusing responses.