New Project Onboarding

When you spawn a new project from this monorepo starter template, you need to register it with the WS.Eng tooling so that the CLI, codemap, context packs, and documentation workflows all work correctly.

This guide covers the steps to go from a freshly forked/cloned repo to a fully integrated WS.Eng project.

Prerequisites

  • WS.Eng CLI installed and linked globally (run wseng --help to verify). See the ws-eng-cli README for installation.
  • GitHub CLI (gh) authenticated with access to the trilogy-group organization.
  • Access to the WS.Eng AWS account (856284715153) with a configured AWS CLI profile.

Step 1: Replace Placeholders

Follow the Setup instructions in the README to replace all template placeholders:

  • mna-ofns-bd → your short project name (max 12 chars, lowercase, dash-separated)
  • mna-offense-board → your full project name
  • M&A Offense Board → your project display name

Step 2: Initialize the WS.Eng CLI

From the root of your new repository:

wseng init

This command does several things:

  1. Creates .wseng in the repo root if it doesn't already exist (the starter template ships with one, so init will skip this step and preserve the existing file).
  2. Configures your local environment: sets up AWS credentials, Google auth, and Cursor CLI tools.
  3. Adds entries to .gitignore: .context, .aider*, .vscode/settings.json.
  4. Registers the repo in your personal CLI config (localPaths): maps the repo name to its local path. This is critical — wseng ws build-codemap uses localPaths to discover repositories when run outside a git repo.
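The localPaths lookup in step 4 can be sketched as follows. This is illustrative only: the personal config file's location and any fields beyond localPaths are assumptions, not the CLI's actual schema.

```python
# Hypothetical sketch of resolving repositories from the personal CLI
# config's localPaths mapping (repo name -> local checkout path).
import json

def resolve_repo_paths(config_text: str) -> dict:
    """Return the repo-name -> local-path mapping from a config blob."""
    config = json.loads(config_text)
    return config.get("localPaths", {})

# Example personal config content (shape assumed for illustration)
example_config = json.dumps({
    "localPaths": {"your-project": "/home/dev/repos/your-project"}
})
paths = resolve_repo_paths(example_config)
```

When build-codemap runs outside a git repo, it would iterate a mapping like this to find every registered repository on disk.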

Verify and Update the .wseng Configuration

After initialization, open the .wseng file in the repo root. The starter ships with default values, but you need to customize it for your project:

{
  "contextFolder": ".context",
  "specs": [],
  "pullRequestTemplate": "{{ ticketUrl }}",
  "defaultGitHubRepo": "your-project-name",
  "commandConfigurations": {
    "release-notes": {
      "defaultInputs": {
        "chatSpace": "Your Chat Space Name",
        "workflowName": "Deploy Production"
      }
    }
  }
}

Key fields to configure:

  • defaultGitHubRepo: Set this to your repository name (e.g., your-project) so ticket commands can resolve numeric-only IDs without the repo/ prefix. This field is not included in the default template — you must add it manually.
  • contextFolder: Defaults to ".context". This is where the CLI stores per-ticket working state (estimations, ticket metadata). It is not where context packs live (see Step 3).
  • commandConfigurations.release-notes: Update the chatSpace to your project's Google Chat space name.

Step 3: Create Context Packs

Context packs are Markdown files with YAML front matter that describe your project's products, modules, and playbooks. The CLI discovers them by filename pattern from anywhere in the repository — they are not stored in a single config file. Place them in your docs/ directory (or any subdirectory) for discoverability.

How Context Packs Are Discovered

The CLI scans the entire repository, starting from the root, for files matching these glob patterns:

| Pattern | Level | Purpose |
| --- | --- | --- |
| `**/L1.*.md` | Product | Defines a product (top-level domain) |
| `**/L2.*.md` | Module | Defines a module within a product |
| `**/L3.*.md` | Function | Documents a specific function within a module |
| `**/PB.*.md` | Playbook | Defines a runbook/playbook for a product |

Files matching **/*template.md are excluded.
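The discovery rules above can be sketched in a few lines. This mirrors the documented glob patterns and the template exclusion; it is not the CLI's actual implementation.

```python
# Illustrative sketch of context-pack discovery: classify a repo-relative
# path by filename, excluding anything matching *template.md.
from fnmatch import fnmatch

LEVEL_PATTERNS = {
    "product": "L1.*.md",
    "module": "L2.*.md",
    "function": "L3.*.md",
    "playbook": "PB.*.md",
}

def classify(path: str):
    """Return the context-pack level for a path, or None if not a pack."""
    name = path.rsplit("/", 1)[-1]
    if fnmatch(name, "*template.md"):
        return None  # templates are excluded from discovery
    for level, pattern in LEVEL_PATTERNS.items():
        if fnmatch(name, pattern):
            return level
    return None
```

Because the patterns are `**/` globs, a pack is picked up from any subdirectory, which is why docs/ placement is a convention rather than a requirement.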

L1 — Product Context Pack (Required)

Every project needs at least one L1 file. This defines the product that the codemap and CLI commands reference.

Filename: L1.<product-name>.context.md (e.g., L1.my-project.context.md)

Required front matter:

---
type: product
name: my-project
description: 'A brief description of your product'
docsRoot: docs/
scripts:
  check: []
  fix: []
  test: []
files:
  cicd:
    include: ['.github/**']
  docs:
    include: ['docs/**']
  iac:
    include: ['apps/infra/**']
---
| Field | Required | Description |
| --- | --- | --- |
| `type` | Yes | Must be `"product"` |
| `name` | Yes | Product name (used as the key in the codemap) |
| `description` | No | Human-readable description |
| `docsRoot` | No | Root directory for documentation files |
| `scripts` | No | `check`, `fix`, `test` script arrays (each entry has `cwd` and `script`) |
| `files` | No | Glob patterns for `cicd`, `docs`, `iac` file categories |

The Markdown body below the front matter is the product's context content — used by the CLI when generating code or answering questions.
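The front matter/body split can be sketched like this. A real implementation would use a YAML parser for the nested scripts and files sections; this hand-rolled version only reads flat key: value pairs, which is enough to extract type and name.

```python
# Minimal sketch of splitting a context pack into front matter and body.
# Only flat `key: value` lines are parsed (nested YAML is skipped).

def split_front_matter(text: str):
    lines = text.splitlines()
    assert lines and lines[0].strip() == "---", "missing front matter"
    end = lines.index("---", 1)  # closing delimiter
    meta = {}
    for line in lines[1:end]:
        if ":" in line and not line.startswith((" ", "\t", "-")):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip().strip("'\"")
    body = "\n".join(lines[end + 1:])
    return meta, body

pack = """---
type: product
name: my-project
description: 'A brief description'
---
The Markdown body is the product's context content."""
meta, body = split_front_matter(pack)
```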

L2 — Module Context Pack (Required for Codemap Modules)

Each module in your project gets an L2 file. Modules must reference an existing L1 product by name.

Filename: L2.<module-name>.context.md (e.g., L2.backend.context.md)

Required front matter:

---
type: module
name: backend
product: my-project
description: 'Hono REST API backend with DynamoDB'
scripts:
  check: []
  fix: []
  test:
    - cwd: '.'
      script: 'pnpm test'
files:
  sources:
    include: ['apps/backend/**']
    exclude: ['apps/backend/**/*.test.ts']
  cicd:
    include: []
  docs:
    include: []
  iac:
    include: ['apps/infra/constructs/backend.construct.ts']
---
| Field | Required | Description |
| --- | --- | --- |
| `type` | Yes | Must be `"module"` |
| `name` | Yes | Module name (must be unique within the product) |
| `product` | Yes | Must match the `name` of an L1 product context pack |
| `description` | No | Human-readable description |
| `files.sources` | Yes | Glob patterns for this module's source files (`include`/`exclude`) |
| `files.cicd`, `files.docs`, `files.iac` | No | Glob patterns for related file categories |
| `scripts` | No | `check`, `fix`, `test` script arrays |

L3 — Function Context Pack (Optional)

L3 files document individual functions within a module. They do not require YAML front matter — the CLI identifies the product and module from the filename.

Filename pattern: L3.<module-name>.<function-name>.context.md

Example: L3.backend.create-user.context.md

The CLI matches <module-name> to an existing L2 module. If no module match is found but only one product and one module exist in the repo, the L3 file is auto-associated with them.
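The filename convention above encodes everything the CLI needs. A sketch of the parse (assuming, as in the examples, that module and function names contain no dots):

```python
# Sketch of extracting module and function names from an L3 filename:
# "L3.<module>.<function>.context.md" -> (module, function)

def parse_l3_filename(filename: str):
    assert filename.startswith("L3.") and filename.endswith(".context.md")
    middle = filename[len("L3."):-len(".context.md")]
    module, _, function = middle.partition(".")
    return module, function

module, function = parse_l3_filename("L3.backend.create-user.context.md")
```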

PB — Playbook Context Pack (Optional)

Playbooks document operational procedures, troubleshooting runbooks, or process guides.

Filename: PB.<playbook-name>.md (e.g., PB.deployment-rollback.md)

Required front matter:

---
type: playbook
name: deployment-rollback
product: my-project
description: 'How to roll back a failed production deployment'
---
| Field | Required | Description |
| --- | --- | --- |
| `type` | Yes | Must be `"playbook"` |
| `name` | Yes | Playbook name |
| `product` | Yes | Must match the `name` of an L1 product context pack |
| `description` | No | Human-readable description |

Place context packs inside docs/ to keep them organized and co-located with your project documentation:

docs/
├── L1.my-project.context.md        # Product definition
├── L2.backend.context.md           # Backend module
├── L2.frontend.context.md          # Frontend module
├── L2.infrastructure.context.md    # Infrastructure module
├── L3.backend.create-user.context.md  # Function-level (optional)
├── PB.deployment-rollback.md       # Playbook (optional)
├── guides/                         # Scalar documentation guides
│   ├── introduction.md
│   └── ...
└── assets/

Additional Context Sources

  • Cursor rules in .cursor/rules/ also act as context for AI-assisted development. The starter includes rules for naming conventions, dependency injection, backend patterns, and more. Review and customize these for your project.
  • Additional document types can be placed in named subdirectories (second-brains/, quality-bars/, playbooks/, brain-lifts/) for discovery by the sync-context-documents command.

Step 4: Build the Codemap

The codemap is a structural index that maps your repository's products and modules. The CLI uses it for code navigation, dependency analysis, and AI-assisted development.

Prerequisites

Before building the codemap, you must have created:

  1. At least one L1 (product) context pack with valid front matter (type: product, name)
  2. At least one L2 (module) context pack with valid front matter (type: module, name, product)

The build-codemap command calls loadMetadataFromContextPacks(), which parses L1 and L2 files (L3 files are skipped during codemap generation). For each L1 file it creates a CodeMapProduct entry; for each L2 file it creates a CodeMapModule entry linked to its product.

Building the Codemap

From the repository root:

wseng ws build-codemap

This will:

  1. Scan the repository for L1 and L2 context pack files
  2. Parse their YAML front matter to extract product and module metadata
  3. Build a codemap structure: { repo → { products, modules } }
  4. Push the codemap to GitHub (via the CodemapService)

If you are not inside a git repository when you run this command, it will instead scan all repositories defined in your personal CLI configuration (localPaths).
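The codemap shape described above can be sketched from parsed front matter. Field handling follows the tables in this guide; the real CodemapService types may differ.

```python
# Sketch of assembling { repo -> { products, modules } } from parsed
# L1/L2 front matter dicts.

def build_codemap(repo: str, packs: list) -> dict:
    products, modules = {}, {}
    for meta in packs:
        if meta["type"] == "product":
            products[meta["name"]] = {**meta, "repo": repo, "modules": []}
        elif meta["type"] == "module":
            modules[meta["name"]] = {**meta, "repo": repo}
    # Link each module to its product (L2 `product` must match an L1 `name`)
    for mod in modules.values():
        products[mod["product"]]["modules"].append(mod["name"])
    return {repo: {"products": products, "modules": modules}}

codemap = build_codemap("your-project", [
    {"type": "product", "name": "my-project"},
    {"type": "module", "name": "backend", "product": "my-project"},
])
```

An L2 whose product does not match any L1 name would fail the linking step, which is one reason to verify with --no-push first.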

Verify Before Pushing

Run without pushing to verify the codemap locally first:

wseng ws build-codemap --no-push

What the Codemap Contains

For each repository, the codemap records:

| Entity | Fields | Source |
| --- | --- | --- |
| Product | `name`, `description`, `url`, `repo`, `modules[]` | L1 context pack front matter |
| Module | `name`, `product`, `description`, `url`, `repo` | L2 context pack front matter |

The codemap enables commands like:

  • wseng context-tree — Generate dependency trees for files
  • wseng question — Answer questions about the codebase
  • wseng implement — AI-assisted implementation using codebase understanding

When to Rebuild

Rebuild the codemap after:

  • Adding or removing L1/L2 context pack files
  • Renaming products or modules in context pack front matter
  • Adding new workspace packages or backend modules
  • Major refactors that change directory structure

Step 5: Register in the Team Roster

The WS.Eng Team Roster is a Google Sheet that tracks all active projects, their repositories, and team assignments. Your project must be registered here for cross-project CLI commands to work (e.g., wseng ws clone-repositories, wseng sync-context-documents, wseng ws project-updates).

Access: The Team Roster requires membership in the WorkSmart Engineering Google group. If you don't have access, request it in the "WS.Eng Access Requests" Google Chat space.

Adding Your Project

There is no CLI command to add entries to the Team Roster — it is managed manually in the Google Sheet. You will need to add rows to the following sheets:

  1. Projects sheet (optional) — add a row with:

    • ID: A unique project identifier (e.g., YourProject)
    • Short Name: An abbreviation (e.g., YP)
    • Friendly Name: Human-readable project name
    • Company: The parent company
    • Product: The product name (should match the name in your L1 context pack)
  2. Repositories sheet — Add a row for your new repository with:

    • Name: Repository name (e.g., your-project)
    • Description: Brief description
    • URL: Full GitHub URL (e.g., https://github.com/trilogy-group/your-project)
    • Type: Repository type
    • Project: Must match the Project ID from the Projects sheet

After Registration

Once registered, team members can clone all project repositories at once:

wseng ws clone-repositories

Registration also enables the wseng sync-context-documents command to discover your repository's context packs and catalog them in the Team Roster's "Context Documents" sheet. This is a team-level maintenance command (not a project setup step) that scans all registered repositories for L1/L2/L3 context pack files and documents in second-brains/, quality-bars/, playbooks/, and brain-lifts/ directories.

Step 6: Set Up Scalar Documentation

Documentation is published to Scalar for both production and integration environments. Each environment gets its own Scalar project, custom domain, and environment-specific configuration (API URL, OAuth, logo). Ephemeral (PR) environments do not publish docs.

6a. Create Scalar Projects

Two Scalar projects are needed — one for production, one for integration. Scalar does not support project deletion via CLI or API, so these are permanent.

Prerequisites: Node.js 24+, @scalar/cli installed globally.

npm i -g @scalar/cli

# Authenticate with the team Scalar token (obtain from Scalar dashboard or AWS Secrets Manager)
npx @scalar/cli auth login --token=<SCALAR_TOKEN>

# Create production project
npx @scalar/cli project create --name "Your Project Docs" --slug your-project-docs

# Create integration project
npx @scalar/cli project create --name "Your Project Docs (Integration)" --slug your-project-docs-int

6b. Add DNS CNAME Records

Add two CNAME records in your project's Route 53 hosted zone, both pointing to dns.scalar.com:

| Record Name | Type | Value |
| --- | --- | --- |
| your-project-docs.<your-domain> | CNAME | dns.scalar.com |
| your-project-docs-int.<your-domain> | CNAME | dns.scalar.com |

For reference, the monorepo starter uses the wseng.rp.devfactory.com hosted zone in AWS account 856284715153.
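If you prefer the AWS CLI over the console, the two records can be created with a change batch along these lines (the domains are placeholders; substitute your hosted zone's actual domain):

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "your-project-docs.your-domain.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "dns.scalar.com" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "your-project-docs-int.your-domain.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "dns.scalar.com" }]
      }
    }
  ]
}
```

Save it as scalar-cnames.json and apply with aws route53 change-resource-record-sets --hosted-zone-id <ZONE_ID> --change-batch file://scalar-cnames.json.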

6c. Configure Custom Domains in Scalar Dashboard

DNS alone is not sufficient — Scalar must also be told about the custom domains:

  1. Go to https://dashboard.scalar.com
  2. Open the production project → Settings → Custom Domain → enter your-project-docs.<your-domain>
  3. Open the integration project → Settings → Custom Domain → enter your-project-docs-int.<your-domain>

6d. Prepare Logo Assets

Logos are stored in an S3 bucket and referenced by URL. Prepare two logos and upload them to the docs S3 bucket:

  • Production logo (docs/assets/logo.svg) — default branding
  • Integration logo (docs/assets/logo-int.svg) — should include a visual "STAGING" indicator to distinguish from production

6e. Update .doc.json

Update the configuration to match your project:

{
  "meta": {
    "title": "Your Project Docs",
    "description": "Documentation for Your Project.",
    "logo": "https://wseng-docs.s3.us-east-1.amazonaws.com/scalar/assets/<your-project>/logo.svg"
  },
  "content": {
    "references": [
      {
        "type": "openapi",
        "slug": "api-reference",
        "name": "API Reference",
        "specPath": "https://api-<your-project>.<your-domain>/docs/openapi.json",
        "config": {
          "ignoreTags": ["Debug"]
        }
      }
    ],
    "assetsDir": "docs/assets"
  },
  "targets": {
    "scalar": {
      "projectSlug": "your-project-docs",
      "customDomain": "your-project-docs.<your-domain>"
    }
  }
}

Key fields to update:

  • meta.title and meta.description — your project name
  • meta.logo — S3 URL from step 6d
  • specPath — your production API's OpenAPI endpoint
  • targets.scalar.projectSlug — must match the slug from step 6a
  • targets.scalar.customDomain — must match the domain from step 6b/6c

6f. Update Workflow Overrides

In base-deploy.yml, the publish-docs job dynamically generates overrides for each environment. Update the "Compute Scalar overrides" step with your project's integration domain and the "Generate overrides file" step with your integration logo URL.

6g. Update .doc.overrides.env for Local Publishing

The .doc.overrides.env file is committed to the repo so the whole team can publish staging/integration docs from their local machine. Update it with your project's integration-specific values:

targets.scalar.projectSlug=your-project-docs-int
targets.scalar.customDomain=your-project-docs-int.<your-domain>
references.api-reference.specPath=https://api-<your-project>-integration.<your-domain>/docs/openapi.json
meta.logo=https://wseng-docs.s3.us-east-1.amazonaws.com/scalar/assets/<your-project>/logo-int.svg

Then publish locally with:

wseng cicd publish-docs --overrides .doc.overrides.env

To publish to production instead, omit the --overrides flag (.doc.json defaults are used):

wseng cicd publish-docs
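The dotted override keys above address paths into the nested .doc.json structure. A sketch of how such overrides could be applied (the real publish-docs behavior may differ; in particular, references.<slug> addresses list entries by slug in the actual config, which this sketch does not handle):

```python
# Hypothetical sketch: apply `a.b.c=value` override lines to a nested dict.

def apply_overrides(config: dict, overrides_text: str) -> dict:
    for line in overrides_text.strip().splitlines():
        if not line or line.startswith("#"):
            continue
        dotted, _, value = line.partition("=")
        node = config
        *parents, leaf = dotted.split(".")
        for key in parents:
            node = node.setdefault(key, {})
        node[leaf] = value
    return config

config = {"targets": {"scalar": {"projectSlug": "your-project-docs"}}}
apply_overrides(
    config,
    "targets.scalar.projectSlug=your-project-docs-int\n"
    "meta.logo=https://example.com/logo-int.svg",
)
```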

6h. Trigger Initial Deploys

Before merging, publish both docs sites once via the CLI to ensure the Scalar projects are populated and custom domains are active:

# Publish production docs (uses .doc.json defaults)
wseng cicd publish-docs

# Publish integration docs (uses overrides)
wseng cicd publish-docs --overrides .doc.overrides.env

After merging the docs configuration to main, CI takes over:

  1. Integration docs auto-publish on merge (via deploy-integration.yml)
  2. Production docs require a manual trigger: GitHub → Actions → Deploy Production → Run workflow

Step 7: Seed Your AWS Account

Your AWS account needs shared prerequisites (created once, reused across all environments). The CI pipelines handle this automatically — the deploy and destroy workflows run account:seed before every CDK operation, so the first pipeline run on a fresh account provisions everything.

For local development, the update-env script handles this automatically — it calls account:seed internally and upserts the resulting env vars into your .env:

pnpm script update-env --env integration

You can also run account seeding standalone if needed:

pnpm run account:seed >> .env

This is idempotent — policies are content-addressed (wseng-auto-policy-<hash>), so identical configs across projects and environments resolve to the same policy automatically. Stale policies are swept during post-deploy (CloudFront prevents deletion of any policy still attached to a distribution). Currently it provisions:

  • CloudFront Origin Request Policies for Mixpanel and Sentry analytics proxy behaviors (defined in apps/infra/policies/)

Extending: Add new policy entries to ORIGIN_REQUEST_POLICIES in apps/infra/policies/origin-request-policies.constant.ts. The env var is automatically included in destroy-safe synthesis verification.
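The content-addressed naming above can be sketched as follows. The exact hash algorithm, canonicalization, and truncation used by account:seed are assumptions for illustration; only the wseng-auto-policy-<hash> shape comes from this guide.

```python
# Sketch of content-addressed policy naming: hash the canonicalized
# policy config so identical configs yield identical names.
import hashlib
import json

def policy_name(policy_config: dict) -> str:
    canonical = json.dumps(policy_config, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode()).hexdigest()[:12]  # truncation assumed
    return f"wseng-auto-policy-{digest}"

# Key order doesn't matter: canonicalization makes these resolve to the
# same policy name, hence the same shared CloudFront policy.
a = policy_name({"headers": ["Origin"], "cookies": "none"})
b = policy_name({"cookies": "none", "headers": ["Origin"]})
```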

Step 8: Configure CI/CD

The CI/CD workflows reference specific values that need updating for your project:

  1. AWS IAM Role ARN in base-deploy.yml — Update the default aws-role to your project's deploy role.
  2. Secret ID — Create secrets in AWS Secrets Manager following the structure in the README.
  3. Domain names — Update apps/infra/app.ts with your project's domain and subdomain names.
  4. Certificate ARNs — Update apps/infra/app.ts with your project's ACM certificate ARNs.

Checklist

Use this checklist when onboarding a new project:

  • Replace all placeholders (mna-ofns-bd, mna-offense-board, M&A Offense Board)
    • The .sync/config.json file lists the known placeholder patterns that are replaced in downstream forks.
  • Run wseng init and configure .wseng
  • Set defaultGitHubRepo in .wseng
  • Create an L1 product context pack (docs/L1.<product>.context.md) with valid front matter
  • Create L2 module context packs (docs/L2.<module>.context.md) for each module
  • Run wseng ws build-codemap --no-push to verify context packs parse correctly
  • Run wseng ws build-codemap to push the codemap to GitHub
  • Ensure access to the WS.Eng Team Roster (join WorkSmart Engineering Google group if needed)
  • Add project to the Team Roster (Projects and Repositories sheets)
  • Create Scalar projects for production and integration (scalar project create)
  • Add DNS CNAME records pointing to dns.scalar.com
  • Configure custom domains in Scalar dashboard for both projects
  • Upload production and integration logos to S3
  • Update .doc.json with project slug, custom domain, spec URL, and logo URL
  • Update base-deploy.yml overrides with integration domain and logo URL
  • Update .doc.overrides.env with integration values for local publishing
  • Run pnpm script update-env --env integration for local dev (seeds account prerequisites and syncs stack outputs)
  • Create AWS Secrets Manager entries
  • Update apps/infra/app.ts with domain names, certificate ARNs, and AWS accounts
  • Deploy the integration environment (git push to main)
  • Verify docs publish on first integration deploy