Vibe Coding Exercises

Exercise Duration: approximately 7 hours total (can be completed over multiple sessions)
Prerequisites: Basic programming knowledge, AI coding assistant access (Cursor, GitHub Copilot, etc.)
Learning Path: Part of Develop Phase → Vibe Coding → Practical Application

Exercise Overview

These hands-on exercises will teach you to effectively collaborate with AI coding assistants to accelerate development while maintaining code quality. You'll learn prompting techniques, AI-assisted debugging, and quality assurance practices.

Learning Objectives

By completing these exercises, you will be able to:

Master AI Prompting - Write effective prompts for code generation and debugging
Accelerate Development - Use AI to quickly scaffold features and components
Maintain Quality - Ensure AI-generated code meets quality standards
Debug Efficiently - Leverage AI for rapid issue diagnosis and resolution
Optimize Workflows - Integrate AI seamlessly into your development process

Exercise Setup

Project Setup

Set up a project workspace with the following commands before starting:

# Create a new project directory
mkdir vibe-coding-exercises
cd vibe-coding-exercises

# Initialize project
npm init -y

# Install development dependencies (type packages for Express middleware included so the TypeScript build compiles cleanly)
npm install -D typescript @types/node @types/express @types/cors @types/morgan jest @types/jest ts-jest

# Install runtime dependencies
npm install express cors helmet morgan

# Create TypeScript configuration
npx tsc --init

# Create basic project structure
mkdir src tests docs
touch src/index.ts tests/example.test.ts

AI Assistant Configuration

Configure your AI assistant for optimal performance:

// .vscode/settings.json (for Cursor/VS Code)
{
"cursor.ai.enableAutoComplete": true,
"cursor.ai.enableInlineChat": true,
"cursor.ai.model": "claude-3.5-sonnet",
"cursor.ai.customInstructions": "Focus on TypeScript, follow SOLID principles, include error handling, write tests, and provide clear documentation."
}

Exercise 1: API Development with AI Assistance

Duration: 90 minutes
Objective: Build a REST API using AI assistance for rapid development

Step 1: Planning and Prompting (15 minutes)

Start with a clear project description:

Your Prompt:

I need to build a Task Management API with the following requirements:

FUNCTIONAL REQUIREMENTS:
- Users can create, read, update, and delete tasks
- Tasks have: id, title, description, status (pending/in-progress/completed), priority (low/medium/high), due_date, created_at, updated_at
- Users can filter tasks by status and priority
- Users can search tasks by title and description
- Input validation and error handling

TECHNICAL REQUIREMENTS:
- Node.js with Express and TypeScript
- In-memory data store (no database for this exercise)
- RESTful API design
- Comprehensive error handling
- Input validation
- Unit tests for all endpoints
- API documentation

Please help me create the project structure and initial implementation.

Expected AI Response Pattern: The AI should provide a comprehensive project structure and begin implementation. Look for:

  • Proper TypeScript interfaces
  • RESTful endpoint design
  • Error handling patterns
  • Validation logic

Step 2: AI-Assisted Implementation (45 minutes)

Task Interface and Types

Your Prompt:

Create comprehensive TypeScript interfaces and types for the Task Management API. Include:
1. Task interface with all properties
2. CreateTaskRequest and UpdateTaskRequest types
3. API response types
4. Error types
5. Enums for status and priority

Review AI Output:

  • Verify interfaces are comprehensive
  • Check for proper TypeScript practices
  • Ensure validation-friendly structure
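
For comparison, here is a minimal sketch of what the generated types might look like. The specific names (TaskStatus, TaskPriority, CreateTaskRequest, and the ApiResponse envelope) are illustrative assumptions, not requirements your AI output must match.

// src/types/task.ts (illustrative sketch; your AI's output will differ)

// Enums for the constrained fields
export enum TaskStatus {
  Pending = 'pending',
  InProgress = 'in-progress',
  Completed = 'completed',
}

export enum TaskPriority {
  Low = 'low',
  Medium = 'medium',
  High = 'high',
}

// Core task entity (field names follow the exercise requirements)
export interface Task {
  id: string;
  title: string;
  description: string;
  status: TaskStatus;
  priority: TaskPriority;
  due_date?: string; // ISO 8601 date string
  created_at: string;
  updated_at: string;
}

// Request payloads
export interface CreateTaskRequest {
  title: string;
  description: string;
  priority: TaskPriority;
  due_date?: string;
}

export type UpdateTaskRequest = Partial<CreateTaskRequest> & { status?: TaskStatus };

// Generic response envelope and error shape
export interface ApiError {
  code: string;
  message: string;
  details?: Record<string, string>;
}

export interface ApiResponse<T> {
  data?: T;
  error?: ApiError;
}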

Express Server Setup

Your Prompt:

Create the Express server setup with:
1. TypeScript configuration
2. Middleware for CORS, helmet, morgan, express.json
3. Error handling middleware
4. Route structure for /api/tasks
5. Proper TypeScript types throughout
6. Environment configuration support

Quality Check:

  • Middleware properly configured
  • Error handling comprehensive
  • TypeScript types consistent
  • Environment variables handled
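
For a reference point, here is a trimmed-down sketch of the server entry file. The inline placeholder router and response shapes are assumptions for illustration; in the full exercise the routes delegate to the TaskService.

// src/index.ts (minimal sketch; the AI's version will be more complete)
import express, { NextFunction, Request, Response, Router } from 'express';
import cors from 'cors';
import helmet from 'helmet';
import morgan from 'morgan';

const app = express();
const port = Number(process.env.PORT ?? 3000);

// Standard middleware stack
app.use(helmet());
app.use(cors());
app.use(morgan('dev'));
app.use(express.json());

// In the real project this router lives in its own module (e.g. src/routes/tasks.ts)
const tasksRouter = Router();
tasksRouter.get('/', (_req: Request, res: Response) => {
  res.json({ data: [] }); // placeholder until the TaskService is wired in
});
app.use('/api/tasks', tasksRouter);

// Central error handler: registered last, and must take four arguments
app.use((err: Error, _req: Request, res: Response, _next: NextFunction) => {
  console.error(err);
  res.status(500).json({ error: { code: 'INTERNAL_ERROR', message: err.message } });
});

app.listen(port, () => {
  console.log(`Task API listening on port ${port}`);
});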

Task Service Implementation

Your Prompt:

Implement a TaskService class that manages tasks in memory with:
1. CRUD operations (create, read, update, delete)
2. Filtering by status and priority
3. Text search in title and description
4. Proper error handling for not found cases
5. Input validation
6. TypeScript types throughout

Review Checklist:

  • All CRUD operations implemented
  • Filtering and search work correctly
  • Error handling for edge cases
  • Input validation comprehensive
  • TypeScript types consistent
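
One possible shape for the in-memory service, shown only as a yardstick for reviewing the AI's output. It reuses the illustrative types from the earlier sketch and omits input validation for brevity.

// src/services/taskService.ts (illustrative sketch)
import { randomUUID } from 'node:crypto';
import { CreateTaskRequest, Task, TaskPriority, TaskStatus, UpdateTaskRequest } from '../types/task';

export class TaskNotFoundError extends Error {
  constructor(id: string) {
    super(`Task ${id} not found`);
  }
}

export interface TaskFilter {
  status?: TaskStatus;
  priority?: TaskPriority;
  search?: string;
}

export class TaskService {
  private tasks = new Map<string, Task>();

  create(input: CreateTaskRequest): Task {
    const now = new Date().toISOString();
    const task: Task = { id: randomUUID(), status: TaskStatus.Pending, created_at: now, updated_at: now, ...input };
    this.tasks.set(task.id, task);
    return task;
  }

  getById(id: string): Task {
    const task = this.tasks.get(id);
    if (!task) throw new TaskNotFoundError(id);
    return task;
  }

  list(filter: TaskFilter = {}): Task[] {
    return [...this.tasks.values()].filter((t) => {
      if (filter.status && t.status !== filter.status) return false;
      if (filter.priority && t.priority !== filter.priority) return false;
      if (filter.search) {
        const q = filter.search.toLowerCase();
        return t.title.toLowerCase().includes(q) || t.description.toLowerCase().includes(q);
      }
      return true;
    });
  }

  update(id: string, changes: UpdateTaskRequest): Task {
    const updated: Task = { ...this.getById(id), ...changes, updated_at: new Date().toISOString() };
    this.tasks.set(id, updated);
    return updated;
  }

  delete(id: string): void {
    if (!this.tasks.delete(id)) throw new TaskNotFoundError(id);
  }
}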

Step 3: AI-Assisted Testing (20 minutes)

Your Prompt:

Create comprehensive unit tests for the TaskService using Jest. Include:
1. Tests for all CRUD operations
2. Tests for filtering and search functionality
3. Error case testing (not found, validation errors)
4. Edge case testing (empty inputs, invalid IDs)
5. Mock data setup and teardown
6. Clear test descriptions and structure

Testing Quality Review:

  • Test coverage comprehensive
  • Edge cases covered
  • Error scenarios tested
  • Tests are readable and well-organized
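
A short excerpt of what the test file could look like, assuming the TaskService sketch above and a ts-jest setup:

// tests/taskService.test.ts (illustrative excerpt)
import { TaskNotFoundError, TaskService } from '../src/services/taskService';
import { TaskPriority, TaskStatus } from '../src/types/task';

describe('TaskService', () => {
  let service: TaskService;

  // Fresh service per test so cases stay independent
  beforeEach(() => {
    service = new TaskService();
  });

  it('creates a task with default status and timestamps', () => {
    const task = service.create({ title: 'Write docs', description: 'API docs', priority: TaskPriority.High });
    expect(task.id).toBeDefined();
    expect(task.status).toBe(TaskStatus.Pending);
    expect(task.created_at).toBe(task.updated_at);
  });

  it('filters by status and searches by title', () => {
    service.create({ title: 'Alpha', description: 'first', priority: TaskPriority.Low });
    const done = service.create({ title: 'Beta', description: 'second', priority: TaskPriority.Low });
    service.update(done.id, { status: TaskStatus.Completed });

    expect(service.list({ status: TaskStatus.Completed })).toHaveLength(1);
    expect(service.list({ search: 'alpha' })).toHaveLength(1);
  });

  it('throws TaskNotFoundError for unknown ids', () => {
    expect(() => service.getById('missing')).toThrow(TaskNotFoundError);
  });
});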

Step 4: Debugging with AI (10 minutes)

Introduce a deliberate bug and practice AI-assisted debugging:

Your Prompt:

I'm getting this error when running my tests: [paste actual error]. Help me debug this issue step by step:

1. Analyze the error message
2. Identify likely root causes
3. Suggest debugging steps
4. Provide fixes with explanations

Exercise 2: React Component Development

Duration: 90 minutes
Objective: Build React components using AI for rapid frontend development

Step 1: Component Planning (15 minutes)

Your Prompt:

I need to create a Task Management Dashboard React component with:

FEATURES:
- Task list with filtering (status, priority)
- Add new task form
- Edit task inline
- Delete tasks with confirmation
- Search functionality
- Responsive design

TECHNICAL:
- TypeScript React components
- Custom hooks for state management
- Proper error handling and loading states
- Accessibility features
- Unit tests with React Testing Library
- Styled with CSS modules or styled-components

Step 2: Component Implementation (45 minutes)

Custom Hooks

Your Prompt:

Create custom React hooks for task management:
1. useTasks - manages task CRUD operations and state
2. useTaskFilters - handles filtering and search logic
3. useDebounce - debounces search input
4. Include proper TypeScript types and error handling
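
To anchor your review, here is a compact sketch of the useDebounce hook; it assumes a React + TypeScript project scaffolded for this exercise (React is not part of the API setup from Exercise 1). The other hooks follow the same pattern of typed state plus explicit cleanup.

// src/hooks/useDebounce.ts (illustrative sketch)
import { useEffect, useState } from 'react';

// Returns `value` only after it has been stable for `delayMs`,
// which keeps search requests from firing on every keystroke.
export function useDebounce<T>(value: T, delayMs = 300): T {
  const [debounced, setDebounced] = useState(value);

  useEffect(() => {
    const timer = setTimeout(() => setDebounced(value), delayMs);
    return () => clearTimeout(timer); // cancel the pending update on change or unmount
  }, [value, delayMs]);

  return debounced;
}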

Task Components

Your Prompt:

Create React components:
1. TaskList - displays filtered tasks with loading/error states
2. TaskItem - individual task with inline edit capability
3. TaskForm - form for creating/editing tasks with validation
4. TaskFilters - filter controls for status/priority/search
5. Include proper TypeScript props and accessibility

Quality Review:

  • Components properly typed
  • Accessibility attributes included
  • Error boundaries implemented
  • Loading states handled
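
As a baseline for this review, a pared-down sketch of one of the components; the prop names and the TaskStatus enum are the illustrative ones used earlier in these exercises.

// src/components/TaskFilters.tsx (illustrative sketch)
import React from 'react';
import { TaskStatus } from '../types/task';

export interface TaskFiltersProps {
  status: TaskStatus | 'all';
  search: string;
  onStatusChange: (status: TaskStatus | 'all') => void;
  onSearchChange: (search: string) => void;
}

export function TaskFilters({ status, search, onStatusChange, onSearchChange }: TaskFiltersProps) {
  return (
    <div role="search" aria-label="Task filters">
      {/* Labels are associated with inputs so screen readers announce them */}
      <label htmlFor="task-search">Search tasks</label>
      <input
        id="task-search"
        type="search"
        value={search}
        onChange={(e) => onSearchChange(e.target.value)}
      />

      <label htmlFor="task-status">Status</label>
      <select
        id="task-status"
        value={status}
        onChange={(e) => onStatusChange(e.target.value as TaskStatus | 'all')}
      >
        <option value="all">All</option>
        <option value={TaskStatus.Pending}>Pending</option>
        <option value={TaskStatus.InProgress}>In progress</option>
        <option value={TaskStatus.Completed}>Completed</option>
      </select>
    </div>
  );
}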

Step 3: Component Testing (20 minutes)

Your Prompt:

Create React Testing Library tests for the task components:
1. Test user interactions (click, type, submit)
2. Test state changes and API calls
3. Test error handling and loading states
4. Test accessibility features
5. Use proper testing patterns and mocking
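
A hedged example of one such test, using the TaskFilters sketch above; it assumes a jsdom test environment with @testing-library/react and @testing-library/user-event installed.

// src/components/TaskFilters.test.tsx (illustrative sketch)
import React from 'react';
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { TaskFilters } from './TaskFilters';

test('typing in the search box reports each change to the parent', async () => {
  const onSearchChange = jest.fn();
  render(
    <TaskFilters
      status="all"
      search=""
      onStatusChange={jest.fn()}
      onSearchChange={onSearchChange}
    />
  );

  // Query by accessible label so the test also exercises the labeling
  await userEvent.type(screen.getByLabelText('Search tasks'), 'api');
  expect(onSearchChange).toHaveBeenCalledTimes(3); // one call per keystroke
});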

Step 4: AI-Assisted Styling (10 minutes)

Your Prompt:

Create modern, responsive CSS for the task management components:
1. Mobile-first responsive design
2. Clean, professional appearance
3. Loading and error state styling
4. Form validation visual feedback
5. Accessibility-compliant colors and focus states

Exercise 3: Full-Stack Integration

Duration: 120 minutes
Objective: Connect frontend and backend with AI assistance for complete workflow

Step 1: API Integration (30 minutes)

Your Prompt:

Create an API service layer for the React app:
1. HTTPClient class with error handling
2. TaskAPI service with all CRUD methods
3. Proper TypeScript types for requests/responses
4. Request/response interceptors
5. Loading state management
6. Error handling with user-friendly messages
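
A minimal sketch of the service layer built on fetch. The base URL environment variable (REACT_APP_API_URL), the error class, and the method names are assumptions; interceptors and richer error messages are left to you and the AI.

// src/api/taskApi.ts (illustrative sketch)
import { CreateTaskRequest, Task, UpdateTaskRequest } from '../types/task';

const BASE_URL = process.env.REACT_APP_API_URL ?? 'http://localhost:3000/api';

// Normalize failed requests into a single exception type
export class ApiRequestError extends Error {
  constructor(public status: number, message: string) {
    super(message);
  }
}

async function request<T>(path: string, init?: RequestInit): Promise<T> {
  const response = await fetch(`${BASE_URL}${path}`, {
    headers: { 'Content-Type': 'application/json' },
    ...init,
  });
  if (!response.ok) {
    throw new ApiRequestError(response.status, `Request failed with status ${response.status}`);
  }
  if (response.status === 204) return undefined as unknown as T; // no body on delete
  return response.json() as Promise<T>;
}

export const TaskAPI = {
  list: () => request<Task[]>('/tasks'),
  get: (id: string) => request<Task>(`/tasks/${id}`),
  create: (body: CreateTaskRequest) =>
    request<Task>('/tasks', { method: 'POST', body: JSON.stringify(body) }),
  update: (id: string, body: UpdateTaskRequest) =>
    request<Task>(`/tasks/${id}`, { method: 'PUT', body: JSON.stringify(body) }),
  remove: (id: string) => request<void>(`/tasks/${id}`, { method: 'DELETE' }),
};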

Step 2: State Management (30 minutes)

Your Prompt:

Implement state management for the task app:
1. Create React Context for global task state
2. useReducer for complex state updates
3. Optimistic updates for better UX
4. Error rollback functionality
5. Proper TypeScript types throughout
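
A stripped-down sketch of the context/reducer pairing. The action names are illustrative, and the optimistic-update and rollback logic is intentionally left out so you can drive that part with the AI.

// src/state/TaskContext.tsx (illustrative sketch)
import React, { createContext, useContext, useReducer, Dispatch, ReactNode } from 'react';
import { Task } from '../types/task';

interface TaskState {
  tasks: Task[];
  loading: boolean;
  error?: string;
}

type TaskAction =
  | { type: 'loaded'; tasks: Task[] }
  | { type: 'added'; task: Task }
  | { type: 'removed'; id: string }
  | { type: 'failed'; error: string };

function taskReducer(state: TaskState, action: TaskAction): TaskState {
  switch (action.type) {
    case 'loaded':
      return { tasks: action.tasks, loading: false };
    case 'added':
      return { ...state, tasks: [...state.tasks, action.task] };
    case 'removed':
      return { ...state, tasks: state.tasks.filter((t) => t.id !== action.id) };
    case 'failed':
      return { ...state, loading: false, error: action.error };
    default:
      return state;
  }
}

const TaskContext = createContext<{ state: TaskState; dispatch: Dispatch<TaskAction> } | null>(null);

export function TaskProvider({ children }: { children: ReactNode }) {
  const [state, dispatch] = useReducer(taskReducer, { tasks: [], loading: true });
  return <TaskContext.Provider value={{ state, dispatch }}>{children}</TaskContext.Provider>;
}

// Consumers get a typed store and a clear error if the provider is missing
export function useTaskStore() {
  const ctx = useContext(TaskContext);
  if (!ctx) throw new Error('useTaskStore must be used inside a TaskProvider');
  return ctx;
}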

Step 3: Advanced Features (45 minutes)

Your Prompt:

Add advanced features to the task app:
1. Real-time task updates (simulated)
2. Bulk operations (select multiple, bulk delete)
3. Task drag-and-drop reordering
4. Export tasks to CSV/JSON
5. Dark/light theme toggle
6. Local storage persistence

Step 4: Performance Optimization (15 minutes)

Your Prompt:

Optimize the React app performance:
1. Implement React.memo for expensive components
2. Use useMemo and useCallback appropriately
3. Add virtual scrolling for large task lists
4. Implement code splitting with lazy loading
5. Add performance monitoring
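
As a concrete reference for the first two items, a small sketch of memoizing a list row so unrelated state changes don't re-render every task (the props are illustrative):

// src/components/TaskRow.tsx (illustrative sketch)
import React, { memo } from 'react';
import { Task } from '../types/task';

interface TaskRowProps {
  task: Task;
  onDelete: (id: string) => void;
}

// memo skips re-rendering when `task` and `onDelete` are referentially unchanged
export const TaskRow = memo(function TaskRow({ task, onDelete }: TaskRowProps) {
  return (
    <li>
      <span>{task.title}</span>
      <button onClick={() => onDelete(task.id)}>Delete</button>
    </li>
  );
});

// In the parent, stabilize the callback with useCallback so memo has an effect:
// const handleDelete = useCallback((id: string) => dispatch({ type: 'removed', id }), [dispatch]);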

Exercise 4: AI-Assisted Debugging and Refactoring

Duration: 60 minutes
Objective: Practice using AI for debugging and code improvement

Step 1: Bug Hunting (20 minutes)

Introduce realistic bugs and practice AI-assisted debugging:

Debugging Scenarios:

  1. State Update Bug: Tasks not updating in UI after API call
  2. Memory Leak: useEffect not cleaning up properly
  3. Type Error: Incorrect TypeScript interface usage
  4. Performance Issue: Unnecessary re-renders
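
For the memory-leak scenario, a before/after sketch of the kind of cleanup bug you might plant deliberately (refreshTasks is a hypothetical function defined in the surrounding component):

// Buggy version: the interval keeps firing after the component unmounts
useEffect(() => {
  setInterval(() => refreshTasks(), 5000);
}, []);

// Fixed version: return a cleanup function so React clears the interval on unmount
useEffect(() => {
  const id = setInterval(() => refreshTasks(), 5000);
  return () => clearInterval(id);
}, []);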

Your Debugging Prompt Template:

I have a bug in my React component where [describe the issue]. Here's the relevant code:

[paste code]

The expected behavior is [describe expected], but what's happening is [describe actual].

Please help me:
1. Analyze the code and identify the likely cause
2. Explain why this bug occurs
3. Provide a fix with explanation
4. Suggest how to prevent similar bugs

Step 2: Code Review with AI (20 minutes)

Your Code Review Prompt:

Please review this code for:
1. Code quality and best practices
2. Potential bugs or edge cases
3. Performance optimizations
4. TypeScript improvements
5. Security considerations
6. Testability improvements

[paste your code]

Provide specific recommendations with code examples.

Step 3: Refactoring Practice (20 minutes)

Your Refactoring Prompt:

Help me refactor this code to improve:
1. Readability and maintainability
2. Performance
3. Type safety
4. Reusability
5. Testing

Current code:
[paste code]

Please provide the refactored version with explanations for each change.

Exercise 5: Documentation and Deployment

Duration: 60 minutes
Objective: Use AI to generate documentation and deployment configurations

Step 1: API Documentation (20 minutes)

Your Prompt:

Generate comprehensive API documentation for my Task Management API:
1. OpenAPI/Swagger specification
2. Endpoint descriptions with examples
3. Request/response schemas
4. Error response documentation
5. Authentication details (if applicable)
6. Usage examples in multiple languages

Step 2: README and Guides (20 minutes)

Your Prompt:

Create a comprehensive README.md for the Task Management project:
1. Project description and features
2. Installation and setup instructions
3. API documentation
4. Frontend usage guide
5. Development workflow
6. Testing instructions
7. Deployment guide
8. Contributing guidelines

Step 3: Deployment Configuration (20 minutes)

Your Prompt:

Create deployment configurations for the task management app:
1. Dockerfile for the Node.js API
2. Docker Compose for development
3. Kubernetes deployment manifests
4. GitHub Actions CI/CD pipeline
5. Environment configuration templates
6. Production security considerations

Quality Assessment Rubric

Evaluate your AI-assisted development using this rubric:

Code Quality (25 points)

  • TypeScript Usage (5 pts): Proper types and interfaces, no use of any
  • Error Handling (5 pts): Comprehensive error handling throughout
  • Code Organization (5 pts): Clear structure, separation of concerns
  • Best Practices (5 pts): Follows language/framework conventions
  • Documentation (5 pts): Code is well-documented

Functionality (25 points)

  • Requirements Met (10 pts): All specified features implemented
  • Edge Cases (5 pts): Handles edge cases and invalid inputs
  • User Experience (5 pts): Smooth, intuitive user interactions
  • Performance (5 pts): Performs well under normal usage

Testing (25 points)

  • Test Coverage (10 pts): Comprehensive test coverage
  • Test Quality (10 pts): Tests are meaningful and well-written
  • Integration Tests (5 pts): Components work together correctly

AI Collaboration (25 points)

  • Prompt Quality (5 pts): Clear, specific prompts
  • Output Review (5 pts): Critical evaluation of AI suggestions
  • Iteration (5 pts): Effective back-and-forth with AI
  • Quality Control (5 pts): Maintained quality despite AI assistance
  • Learning (5 pts): Demonstrated learning from AI interactions

Common AI Prompting Patterns

Effective Prompting Techniques

  1. Context-Rich Prompts
I'm building a [project type] with [technology stack].
Requirements: [specific requirements]
Constraints: [technical constraints]
Please help me [specific request]
  2. Incremental Development
Based on the previous code, now I need to add [new feature].
Consider [specific requirements].
Maintain consistency with [existing patterns].
  3. Code Review Requests
Please review this code for [specific aspects].
Focus on [priorities].
Provide specific suggestions with examples.
  4. Debugging Assistance
I'm getting [specific error].
Expected behavior: [description]
Actual behavior: [description]
Relevant code: [code snippet]
Help me debug this step by step.

Quality Control Questions

Ask yourself after each AI interaction:

  • Does this solution meet all requirements?
  • Is the code maintainable and readable?
  • Are there potential security issues?
  • Is error handling comprehensive?
  • Are the TypeScript types correct?
  • Would this pass code review?

Troubleshooting Common Issues

AI Generates Low-Quality Code

Solution:

  • Provide more specific requirements
  • Ask for best practices explicitly
  • Request TypeScript types
  • Ask for error handling

AI Suggestions Don't Match Requirements

Solution:

  • Break down complex requests
  • Provide examples of desired patterns
  • Reference specific frameworks/libraries
  • Clarify constraints and preferences

AI Code Doesn't Work as Expected

Solution:

  • Test incrementally
  • Ask AI to debug specific issues
  • Request explanation of complex logic
  • Verify against documentation

Reflection and Next Steps

After completing the exercises:

  1. Self-Assessment:

    • What AI prompting techniques worked best?
    • Where did you need to provide additional guidance?
    • What quality issues did you catch that AI missed?
  2. Improvement Areas:

    • Practice more specific prompting
    • Learn to better evaluate AI suggestions
    • Develop stronger code review skills
  3. Integration into Daily Work:

    • Identify workflow integration points
    • Establish quality control processes
    • Share learnings with team members

Advanced Challenges

Challenge 1: Microservices Architecture

Use AI to design and implement a microservices version of the task management system.

Challenge 2: Performance Optimization

Use AI to identify and fix performance bottlenecks in a large-scale application.

Challenge 3: Security Hardening

Use AI to implement comprehensive security measures and conduct security reviews.

Challenge 4: Mobile App Development

Use AI to create a React Native version of the task management app.


These exercises provide hands-on experience with AI-assisted development. Focus on learning to collaborate effectively with AI while maintaining high code quality standards.