Chester

Welcome to the Chester Programming Language documentation! Chester is a modern, expressive language designed to combine the best features of functional and object-oriented programming paradigms.

This programming language is under active development and not ready for use. Many important features are not implemented yet, including some described in this documentation.

What is Chester?

Chester is a statically-typed language that aims to provide a balance between expressiveness and safety. It draws inspiration from languages like Scala, Haskell, and Rust, while introducing its own unique features.

Some key characteristics of Chester include:

  • Strong type system with type inference
  • Support for both functional and object-oriented programming styles
  • Pattern matching and algebraic data types
  • Effect system for managing side effects
  • Unicode support, allowing for expressive identifiers

A Glimpse of Chester

Let’s take a look at a simple Chester program to get a feel for the language:

module 😿😿;

def me: String = "インターネット・エンジェル";

Chester Development Documentation

This section contains technical documentation for Chester’s implementation details and development notes.

These documents are written for both LLMs and human readers.

Documentation Structure

We use mdBook for organizing and presenting our documentation. The sections below describe how the documentation structure is managed.

Documentation Management

The documentation structure is managed through several tools:

  1. SUMMARY.md Generation:

    • The SUMMARY.md file is automatically generated using the dev.sh script
    • To update the summary: cd docs && ./dev.sh summary
    • Do not edit SUMMARY.md directly as changes will be overwritten
  2. Building Documentation:

    • Use mdBook to build and preview changes
    • The dev.sh script provides various documentation management commands

Contributing

When adding new development documentation:

  1. Create your markdown file in the appropriate subdirectory under docs/src/dev/
  2. Place development-related documentation in the dev/ directory
  3. Follow the existing documentation style and structure
  4. Include code examples where appropriate
  5. Update this README.md if adding new major components
  6. Run ./dev.sh summary to update the documentation structure

TODO

parse a.b.

import a.b.{x,y};

prompt

create a concise prompt introducing the language so that large language models can read and write Chester

abstract syntax

maybe create a trait and let GraalVM's syntax tree variant implement it

block type leak

{let a = Int; let b: a = 1; b}

Here the block's result b has type a, a binding local to the block, so the inferred type of the block would reference a binding that has gone out of scope.

Chester Compiler Backend Architecture

Overview

This document outlines the backend architecture of the Chester compiler system. The backend is responsible for transforming Chester’s internal representation into executable code for various target platforms.

Backend Pipeline

The Chester compiler backend follows a multi-phase code generation pipeline:

Core Representation → Target AST Generation → Optimization → Code Generation → Executable Artifacts

Backend Components

The backend is composed of several key components:

Target-Specific AST Generation

Each target platform has a specialized AST generation phase:

  • Target AST Construction: Builds an AST specific to the target language
  • Type Mapping: Maps Chester types to target language types
  • Effect Translation: Converts Chester effects to target language constructs
  • Standard Library Binding: Connects to platform-specific libraries

Optimization Engine

The backend applies target-specific optimizations:

  • Dead Code Elimination: Removes unused code
  • Constant Folding: Evaluates constant expressions at compile time (see the sketch after this list)
  • Inlining: Replaces function calls with function bodies where beneficial
  • Specialization: Creates specialized versions of generic functions
  • Tail Call Optimization: Optimizes tail-recursive calls
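
For illustration, the constant-folding pass listed above can be sketched over a toy expression AST. The node types here are illustrative stand-ins, not the actual backend AST:

// Sketch only: fold constant integer arithmetic in a toy expression AST.
sealed trait Expr
final case class Lit(value: BigInt) extends Expr
final case class Add(lhs: Expr, rhs: Expr) extends Expr
final case class Mul(lhs: Expr, rhs: Expr) extends Expr

def fold(e: Expr): Expr = e match {
  case Add(l, r) =>
    (fold(l), fold(r)) match {
      case (Lit(a), Lit(b)) => Lit(a + b) // both operands constant: evaluate now
      case (fl, fr)         => Add(fl, fr)
    }
  case Mul(l, r) =>
    (fold(l), fold(r)) match {
      case (Lit(a), Lit(b)) => Lit(a * b)
      case (fl, fr)         => Mul(fl, fr)
    }
  case lit => lit
}

// fold(Add(Lit(1), Mul(Lit(2), Lit(3)))) evaluates to Lit(7) at compile time.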

Code Generation

The final phase transforms the optimized target AST to executable code:

  • Pretty Printing: Generates formatted source code
  • Native Code Generation: For targets that compile directly to machine code
  • Bytecode Generation: For VM-based targets like the JVM
  • Source Maps: Generates debugging information

Supported Compiler Targets

Chester supports multiple compiler targets, each with its own backend implementation:

JavaScript/TypeScript

The JavaScript/TypeScript target (compiler/shared/src/main/scala/chester/targets/js/AST.scala) enables running Chester programs in web browsers and Node.js.

Architecture

The JavaScript backend consists of:

  1. AST Nodes: Comprehensive representation of JavaScript/TypeScript constructs
  2. Type Translator: Maps Chester types to TypeScript type annotations
  3. Effect Handler: Implements Chester’s effect system using JavaScript promises
  4. Module System: Generates ES module exports and imports

Key Features

  • Complete JavaScript language support
  • TypeScript type annotations
  • ECMAScript module system
  • Web API integration
  • Sourcemap generation for debugging

JS AST Structure

The JavaScript AST is implemented as a set of case classes:

sealed trait ASTNode extends ToDoc {
  val meta: Option[Meta]
}

sealed trait Expression extends ASTNode
sealed trait Statement extends ASTNode
sealed trait Declaration extends Statement
sealed trait TypeAnnotation extends ASTNode

Each node includes a toDoc method for pretty printing and source generation.
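
As an illustration only (the authoritative definitions live in AST.scala), a concrete expression node following this pattern might look like:

// Hypothetical node shape extending the traits above; escaping omitted for brevity.
case class StringLiteral(
  value: String,
  meta: Option[Meta] = None
) extends Expression {
  // Renders the literal as JavaScript source text.
  override def toDoc(using PrettierOptions): Doc =
    Doc.text("\"" + value + "\"")
}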

JVM (Java/Scala)

The JVM target enables integration with the Java ecosystem and leverages the JVM runtime.

Architecture

The JVM backend consists of:

  1. Bytecode Generation: Direct generation of JVM bytecode
  2. Class Builder: Creates JVM class files
  3. Runtime Library: Core runtime support for Chester on JVM
  4. Java Interop: Enables calling Java code from Chester

Key Features

  • Java interoperability
  • Scala library integration
  • JVM optimizations
  • Access to the Java standard library
  • Advanced JVM optimizations (inlining, specialization)

Native (Planned)

A native code target is planned for high-performance applications.

Potential Architecture

The native backend will likely include:

  1. LLVM IR Generation: Translation to LLVM intermediate representation
  2. Native Runtime: Minimal runtime support library
  3. ABI Compatibility: Interoperability with C/C++ code
  4. Platform Support: Cross-compilation for different CPU architectures

Planned Features

  • LLVM-based code generation
  • Native performance
  • Low-level memory control
  • System programming capabilities
  • Cross-platform support

Type System Mapping

Chester’s rich type system needs careful mapping to target language types:

| Chester Type | JavaScript/TypeScript | JVM | Native (Planned) |
|--------------|-----------------------|-----|------------------|
| Integer | number | scala.BigInt | int64_t |
| Natural | number | scala.BigInt | uint64_t |
| Boolean | boolean | scala.Boolean | bool |
| String | string | java.lang.String | std::string |
| Union Types (A\|B) | A \| B | Specialized classes | |
| Record | interface/class | case class | struct |
| Functions | function | Function objects | Function pointers |
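
Based on the table above, the JavaScript/TypeScript column of the mapping could be sketched like this (ChesterType and its cases are illustrative stand-ins for the real type terms):

// Sketch: map a toy Chester type representation to TypeScript annotations.
sealed trait ChesterType
case object IntegerType extends ChesterType
case object NaturalType extends ChesterType
case object BooleanType extends ChesterType
case object StringType extends ChesterType
final case class UnionType(variants: Vector[ChesterType]) extends ChesterType

def toTypeScript(ty: ChesterType): String = ty match {
  case IntegerType | NaturalType => "number" // both map to the JS number type
  case BooleanType               => "boolean"
  case StringType                => "string"
  case UnionType(vs)             => vs.map(toTypeScript).mkString(" | ")
}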

Effects Handling

Chester’s effect system is implemented differently for each target language:

  • JavaScript/TypeScript: Using promises or custom effect handlers
  • JVM: Using exceptions and monadic structures
  • Native: Using error codes or custom effect handling
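
As one concrete reading of the "monadic structures" option for the JVM, an effectful computation can be reified as data and interpreted by a handler. This is a sketch under assumed names, not Chester's actual runtime API:

// Sketch: effects as a data structure interpreted by a handler.
sealed trait Eff[+A]
final case class Pure[A](value: A) extends Eff[A]
final case class Perform[A](effect: String, arg: Any, resume: Any => Eff[A]) extends Eff[A]

// The handler interprets each performed effect, then resumes the computation.
def run[A](eff: Eff[A])(handler: (String, Any) => Any): A = eff match {
  case Pure(v)                    => v
  case Perform(name, arg, resume) => run(resume(handler(name, arg)))(handler)
}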

Implementation Example: JavaScript Backend

JavaScript/TypeScript AST Example

The JavaScript target provides a good example of target-specific AST:

// Example: Function declaration in JavaScript AST
FunctionDeclaration(
  id = Some(Identifier("greet")),
  params = List(Parameter(TypedIdentifier("name", StringTypeAnnotation()))),
  returnType = Some(StringTypeAnnotation()),
  body = BlockStatement(List(
    ReturnStatement(Some(
      BinaryExpression(
        BinaryOperator.Plus,
        StringLiteral("Hello, "),
        Identifier("name")
      )
    ))
  ))
)

This represents the TypeScript function:

function greet(name: string): string {
  return "Hello, " + name;
}

JavaScript AST Node Categories

The JavaScript AST supports a wide range of node types:

Expressions

  • Literals: Numbers, strings, booleans, null, BigInt, RegExp
  • Identifiers: Named references (typed and untyped)
  • Operators: Binary, logical, assignment, unary, update
  • Function Expressions: Regular functions and arrow functions
  • Object and Array Expressions: Object literals and array literals
  • Class Expressions: Class definitions with inheritance and method definitions

Statements

  • Block Statements: Groups of statements
  • Expression Statements: Expressions used as statements
  • Control Flow Statements: if/else, while, do-while, for, switch
  • Declaration Statements: let, const, var declarations

TypeScript Features

  • Type Annotations: For variables, parameters, return types
  • Interface and Type Declarations: For defining complex types
  • Generics: Type parameters for functions and classes
  • Union and Intersection Types: Type combinations

Build System Integration

The Chester compiler backend integrates with build systems through:

  • SBT Plugin: For JVM builds
  • NPM Package: For JavaScript/TypeScript integration
  • CLI Interface: For command-line usage

Future Directions

Planned improvements to the compiler backend include:

  • WebAssembly Support: Direct compilation to WebAssembly
  • More Native Targets: Support for various native platforms
  • Interoperability Enhancements: Better interop with target languages
  • Performance Optimizations: Target-specific optimizations
  • Cross-Compilation: Single-command compilation to multiple targets
  • Advanced Optimizations: Target-specific performance improvements

References

Development Memo

NOTE THAT ALL CODE AND DOCUMENTS CAN AND WILL BE OUTDATED OR CONFLICTING. ANALYZE INFORMATION AVAILABLE TO YOU CAREFULLY.

Documentation Guidelines

  1. Avoid Meaningless Adjectives

    • NEVER USE subjective qualifiers like: “better”, “improved”, “good”, “great”, “enhanced”, “advanced”, “beautiful”, “powerful”, “robust”, “excellent”, “high-quality”
    • ALWAYS USE factual, specific, measurable descriptions
    • Describe concrete characteristics and behaviors
    • Focus on correctness and functionality first
  2. Focus on Facts

    • Document what something IS, not how “good” you think it is
    • Identify concrete capabilities and limitations
    • Omit subjective assessments and emotional language (BUT EMPHASIZING RECURRING DEVELOPER MISTAKES IS OK, INCLUDING EMOJI, UPPERCASE, AND BOLD FOR EMPHASIS)
    • Avoid superlatives and value judgments
  3. Eliminate Fluff Phrases

    • Remove sentences that don’t add information
    • Avoid concluding paragraphs that just say “this is useful”
    • Don’t add generic statements about quality or value
    • Delete phrases like “comprehensive framework”, “elegant solution”, etc.
  4. Be Specific and Concrete

    • Instead of “improved performance”, describe the specific optimization technique
    • Instead of “enhanced error reporting”, specify exactly what information is included in errors
    • Replace “powerful features” with a specific list of capabilities

Development Practices

Planning Changes

  1. Document Before Implementing

    • Always document the steps you plan to take BEFORE making any code changes
    • Break down complex changes into clearly defined smaller steps
    • For each step, explain:
      • What will be changed
      • Why the change is needed
      • How the change relates to the larger goal
      • What tests will verify the change
    • Review your plan for completeness before starting implementation
    • Document any deviations from the plan that occur during implementation
  2. Use Step-by-Step Implementation

    • After documenting your plan, implement one step at a time
    • Run the full test suite (sbt rootJVM/test) after each step
    • Commit logical units of work with clear messages
    • Do not proceed to the next step until the current step passes all tests

Making Changes

  1. Keep Changes Small and Focused

    • Make one logical change at a time
    • Break down large changes into smaller, independent steps
    • Each change should be easily reviewable and testable
  2. Testing Requirements

    • ALWAYS use the following commands for running tests:
      # Run all tests from the root project
      sbt rootJVM/test
      
      # Run a specific test class from the root project
      sbt "rootJVM/testOnly chester.tyck.FilesTyckTest"
      
      # You can also run tests for specific modules when needed
      sbt reader/test
      sbt semantic/test
      sbt cli/test
      
      # Run specific test classes in modules
      sbt "reader/testOnly chester.reader.ReaderTest"
      sbt "semantic/testOnly chester.tyck.TheTyckTest"
      sbt "cli/testOnly chester.cli.CLITest"
      
    • DO NOT navigate into subdirectories to run tests (e.g., cd reader && sbt test)
    • ALWAYS run tests from the root project directory
    • ⚠️ CRITICAL: NEVER use the -z test filter option ⚠️
      • -z is NOT the correct syntax for filtering tests in MUnit
      • This option is broken and produces unreliable results
      • Tests may appear to pass when they actually fail
      • This can lead to false confidence in your changes
      • ScalaTest uses -z for filtering, but MUnit uses --tests= instead
    • ⚠️ IMPORTANT: Only use correct MUnit filter syntax with -- ⚠️
      • When filtering tests, always use proper MUnit syntax:
      • CORRECT MUnit syntax examples:
        # Filter by test name
        sbt "rootJVM/testOnly -- --tests=myTestName"
        
        # Filter by glob pattern
        sbt "rootJVM/testOnly -- *MyTest"
        
      • INCORRECT syntax from other frameworks (DO NOT USE):
        # ScalaTest style (WRONG with MUnit)
        sbt "rootJVM/testOnly -- -t MyTest"
        
        # JUnit style (WRONG with MUnit)
        sbt "rootJVM/testOnly -- -n MyTest"
        
        # Custom incorrect style
        sbt "rootJVM/testOnly -- -only file.chester"
        
    • ALWAYS run sbt rootJVM/test before committing changes
    • Fix any test failures before committing
    • Add new tests for new functionality
    • Update existing tests when modifying behavior
    • Test both success and failure cases
    • 💡 Development Tip: For quickly adding and testing new type checking scenarios during development, you can add code snippets to the tests/tyck.chester file. To run only these snippets for rapid feedback, use the specific test class chester.tyck.TheTyckTest. The correct command for this specific development workflow is:
      sbt "semantic/testOnly chester.tyck.TheTyckTest" | cat
      
      This command targets the test directly within its module (semantic), providing a faster feedback loop than running the full suite via rootJVM/test. Note that TheTyckTest is designed for this temporary testing and is often disabled (doTest = false) otherwise. Remember to use sbt rootJVM/test | cat for final verification before committing.
    • 📝 Post-Development Workflow: Once the code in tests/tyck.chester passes type checking with TheTyckTest:
      1. Move the tests/tyck.chester file to the main tests/tyck/ directory.
      2. Give the file a descriptive name reflecting the feature tested (e.g., union-assignment.chester).
      3. Set the doTest flag in semantic/jvm-native/src/test/scala/chester/tyck/TheTyckTest.scala to false to disable this specific test run until needed again. (Note: Test suites like FilesTyckTest automatically discover files in the tests/tyck/ directory, so no explicit addition is needed).
    • For parser changes:
      • Many tests now run against both old and new readers (V1 and V2)
      • Some complex tests currently only run against V1 (original reader)
      • When adding new parser tests:
        • Use parseAndCheckBoth by default for new tests
        • Only use parseAndCheck if testing V1-specific features
        • Document if test is V1-only and why
        • Plan to migrate V1-only tests to V2 when ready
      • Test function usage:
        • parseAndCheck: V1 parser only
        • parseAndCheckBoth: Both V1 and V2 parsers
        • parseAndCheckV1: Deprecated alias for parseAndCheckBoth
      • Recently migrated tests:
        • Basic operator sequence tests
        • Pattern matching tests with uniform symbol treatment
        • Simple expression tests
        • Function call tests
        • Dot notation tests
        • Object tests
        • Tuple tests
        • Vararg tests
        • Floating-point number parsing tests
        • List tests with mixed types
      • Tests still needing migration:
        • Complex operator sequences (prefix, mixfix)
        • Telescope parsing
        • Error handling
        • Source position tracking
    • For type checking changes:
      • Test term preservation in elaborated results
      • Test type-level computation works correctly
      • Test error reporting is accurate
      • Test edge cases and corner cases
  3. Verify Changes with Git

    # After each change - ALWAYS use | cat to prevent terminal control issues:
    git diff | cat            # Review what changed 
    git add <files>          # Stage specific files
    git status | cat         # Verify staged changes 
    git commit -m "..."     # Commit with clear message
    

    ⚠️ Always append | cat to git diff commands to avoid paging issues.

  4. Change Verification Checklist

    • Changes are minimal and focused

    • Git diff shows only intended changes

    • Tests pass after changes

    • Changes align with existing code style

    • Review the git diff output carefully

      # Before committing, ALWAYS verify changes with:
      git diff | cat     
      

      💡 WHY THIS MATTERS: Failure to review diffs properly is the #1 cause of accidental code deletions and introduction of subtle bugs.

    • Reviewing git diff output is essential for catching:

      • Accidental deletions of important methods or logic
      • Unintended modification of critical code
      • Formatting changes that might impact behavior
      • Changes to files you didn’t intend to modify
    • Pay special attention to large diffs that might hide important changes

    • Verify no unrelated changes were included

    • When making multiple changes, review each file’s diff separately for clarity

  5. Post-Commit Verification

    • ⚠️ MANDATORY: Always verify your changes after committing with git diff HEAD^ HEAD | cat
    • Check the diff output carefully to ensure:
      • No unintended changes were included
      • All intended changes were properly committed
      • File renames and deletions are correctly reflected
      • No sensitive or debug code was accidentally committed
      • No accidental deletions of important logic
    • Verify the commit message accurately describes the changes
    • For complex changes involving multiple files, check each file’s changes individually
  6. Git Command Tips

    • Always use | cat with git commands that might trigger paging:
      git diff | cat
      git log | cat
      git show | cat
      
    • This ensures consistent output and avoids interactive paging

Terminal Control with Git Commands

  1. ⚠️ CRITICAL: ALWAYS Use | cat Suffix

    • Git commands that might trigger paging or interactive prompts MUST ALWAYS end with | cat
    • This is a MANDATORY practice, not a suggestion
    • This ensures consistent output and prevents terminal control issues
    • Failure to use | cat is the leading cause of incomplete reviews and missed errors
    • Examples:
      git checkout main | cat
      git merge --no-ff branch | cat
      git log | cat
      git diff | cat
      git show | cat
      git branch | cat
      
  2. Common Git Operations

    # Switching branches
    git checkout main | cat
    git checkout -b new-branch | cat
    
    # Merging
    git merge --no-ff feature-branch | cat
    git merge --abort | cat  # If merge conflicts occur
    
    # Viewing changes
    git status | cat
    git log --oneline | cat
    git show HEAD | cat
    
    # Committing
    git add . | cat
    git commit -m "type: description" | cat
    
  3. Why This Matters

    • Prevents terminal from entering interactive mode
    • Ensures consistent output formatting
    • Avoids getting stuck in pagers like less
    • Makes automation and scripting more reliable

Troubleshooting Development Issues

  1. Recovering from Broken Edit Tools
    • If edit tools in your IDE or development environment are broken/malfunctioning, you can use git to recover:
      # Discard changes to a specific file
      git checkout -- path/to/file | cat
      
      # Discard all changes in the working directory
      git checkout -- . | cat
      
      # Revert to a specific commit
      git checkout [commit-hash] -- path/to/file | cat
      
    • This approach is especially useful when tools that normally handle editing break unexpectedly
    • Always verify what you’re checking out before executing the command to avoid losing important changes

AI Agent Testing Instructions

  1. Terminal Interruption Issues

    • If you are an AI agent working on Chester code and notice:
      • Frequent ^C characters appearing in command output
      • Commands being interrupted prematurely
      • Test results not displaying properly
      • Terminal output being cut off
    • STOP attempting to run tests and:
      • Inform the user about the terminal connection issues
      • Ask the user to run the tests manually
      • Request that the user provide the test results
      • This indicates a problem with the terminal connection, not with the code itself
  2. Test Running Best Practices for AI Agents

    • ALWAYS use these exact commands for running tests:
      # Run all tests
      sbt rootJVM/test | cat
      
      # Run a specific test class (include quotation marks)
      sbt "rootJVM/testOnly chester.tyck.FilesTyckTest" | cat
      
    • NEVER attempt to run tests with other project paths like cli/test, semantic/test, etc.
    • ⚠️ CRITICAL: NEVER use the -z test filter option ⚠️
      • Example of what NOT to do: sbt "rootJVM/testOnly chester.tyck.FilesTyckTest -z myTest"
      • The -z flag is completely broken and will cause misleading results
      • Tests might appear to pass when they should fail
      • Using -z will lead to incorrect conclusions about code behavior
    • ⚠️ CRITICAL: NEVER use -- to pass arguments to tests ⚠️
      • Example of what NOT to do: sbt "rootJVM/testOnly -- -t MyTest"
      • This will cause tests to run incorrectly or not at all
      • No arguments should be passed after the test class name
    • Always run full test suites rather than individual tests when possible
    • Verify that terminal commands execute completely before proceeding
    • If a test command produces an error about not finding the test class:
      • First try the full rootJVM/test command to run all tests
      • Then check if the test class path is correct
      • Do not experiment with different project paths
    • If tests are taking too long to complete, inform the user and suggest they run the tests locally

Term System Architecture

Chester uses a unified term representation architecture to support multiple platforms:

Term Definition Structure

  1. Unified Term Definition
    • All term types are defined in a single file: syntax/shared/src/main/scala/chester/syntax/core/Term.scala
    • This approach simplifies the codebase and eliminates the need for separate platform-specific implementations
    • Each term type follows a consistent pattern with standard methods and field annotations

Import Guidelines

  1. DO use import chester.syntax.core.*
    • This will give you access to all term implementations
// CORRECT
import chester.syntax.core.*

// INCORRECT - unnecessarily specific imports
import chester.syntax.core.BlockTerm
import chester.syntax.core.FCallTerm

Pattern Matching and Type Usage

Use concrete term types directly for pattern matching:

// CORRECT
case t: BlockTerm => {
  val reducedStatements = t.statements.map(stmt => r.reduce(stmt))
  val reducedResult = r.reduce(t.result)
  BlockTerm(reducedStatements, reducedResult, t.meta)
}

Term Type Implementation Pattern

All term types follow a consistent implementation pattern:

case class ExampleTerm(
  @child var field1: Term,        // Use @child for term fields that should be traversed
  @const val field2: String,      // Use @const for non-term fields 
  @const meta: OptionTermMeta
) extends BaseTerm {
  override type ThisTree = ExampleTerm
  
  // Pretty printing method
  override def toDoc(using PrettierOptions): Doc =
    Doc.text("ExampleTerm(") <> field1.toDoc <> Doc.text(")")
  
  // Tree traversal method
  override def descent(f: Term => Term, g: TreeMap[Term]): Term =
    thisOr(copy(field1 = g(field1)))
}

Adding New Term Types

When adding a new term type:

  1. Add it directly to Term.scala
  2. Follow the existing pattern for similar term types
  3. Implement all required methods (toDoc, descent, etc.)
  4. Use correct field annotations (@child, @const)
  5. Extend the appropriate base type (e.g., TypeTerm, ExprTerm)

Example: Adding a New Term Type

For example, to add a new term type for union types:

case class UnionTypeTerm(
  @child var types: Vector[Term],
  @const meta: OptionTermMeta
) extends TypeTerm {
  override type ThisTree = UnionTypeTerm
  override def toDoc(using PrettierOptions): Doc =
    Doc.text("UnionType(") <> Doc.join(Doc.text(", "), types.map(_.toDoc)) <> Doc.text(")")
  override def descent(f: Term => Term, g: TreeMap[Term]): Term =
    thisOr(copy(types = types.map(g)))
}

Key Term Types

The system includes several important term categories:

  1. Expression Terms: Represent runtime values (variables, function calls, literals)
  2. Type Terms: Represent type information (primitive types, function types, union types)
  3. Statement Terms: Represent declarations and control flow (let/def bindings, trait definitions)
  4. Pattern Terms: Represent pattern matching constructs
  5. Special Terms: Represent special language constructs (holes, placeholders)

Each category has a base trait that defines its common behavior.
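
Assuming names along the lines used elsewhere in this document (TypeTerm and ExprTerm appear above; the remaining names are illustrative), the category layer can be pictured as:

// Illustrative shape of the category base traits; see Term.scala for the
// authoritative hierarchy.
sealed trait Term
sealed trait ExprTerm extends Term    // runtime values
sealed trait TypeTerm extends Term    // type information
sealed trait StmtTerm extends Term    // declarations and control flow
sealed trait PatternTerm extends Term // pattern matching constructs
sealed trait SpecialTerm extends Term // holes, placeholders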

Why This Matters

  • Simplified Architecture: The unified term definition makes the codebase more maintainable
  • Cross-Platform Compatibility: All platforms use the same term representation
  • Consistent Patterns: All term types follow the same implementation pattern
  • Easier Extensions: Adding new term types follows a clear and consistent approach

Elaboration and Reduction Strategy

Reduction During Type Checking

  1. Keep Original Forms

    • The elaborator MUST preserve original terms in the elaborated result
    • NEVER reduce during elaboration
    • Only use reduction internally during type checking when absolutely necessary
    • This makes the elaborated code identical to source code, making it:
      • Easier to debug
      • Easier to understand
      • Better for error messages
      • More suitable for further transformations
  2. When to Reduce

    • Only TWO places should use reduction:
      1. Type equality checking in unification
      2. Field access checking on type-level terms
    • Use ReduceMode.TypeLevel for these internal reductions
    • NEVER use reduction in elaborated results

Example:

// Original code
def idType(x: Type): Type = x;
let aT = idType(A);
def getA(x: aT): Integer = x.a;

// WRONG - reducing during elaboration:
LetStmtTerm(localv, reducer.reduce(idType(A)), ty, meta)

// RIGHT - keeping original term:
LetStmtTerm(localv, idType(A), ty, meta)

// RIGHT - internal reduction only for field checking:
def checkFieldAccess(recordTy: Term, field: Name): Term = {
  // Use type-level reduction only for checking field existence
  // Keep original term in result
  // ...
}

Reduction Context and Type Checking

  1. Reduction Context Setup

    • Each Context instance provides its own reduction context via toReduceContext
    • This ensures consistent reduction behavior during type checking
    • Allows for future extensions to reduction context
  2. Type-Level Reduction

    • Only reduce type-level terms when necessary for type checking
    • Keep original terms in elaborated results
    • Use ReduceMode.TypeLevel to control reduction behavior
  3. Field Access Checking

    • Use type-level reduction to verify field existence
    • Keep original terms in field access expressions
    • Report errors using original terms for better error messages
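
A self-contained toy model of this rule, reusing the idType example from above (ReduceMode.TypeLevel is the name used in these notes; the Term shapes here are stand-ins for the real ones):

// Toy model: reduction happens only inside the equality check, never in
// elaborated output.
sealed trait Term
final case class TypeName(name: String) extends Term
final case class AppliedType(fn: String, arg: Term) extends Term

enum ReduceMode { case TypeLevel }

// Toy reducer that knows idType(x) = x at the type level.
def reduce(t: Term, mode: ReduceMode): Term = t match {
  case AppliedType("idType", arg) => arg
  case other                      => other
}

def typesEqual(lhs: Term, rhs: Term): Boolean =
  lhs == rhs || reduce(lhs, ReduceMode.TypeLevel) == reduce(rhs, ReduceMode.TypeLevel)

// typesEqual(AppliedType("idType", TypeName("A")), TypeName("A")) is true,
// while the elaborated result still stores the original AppliedType form.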

Common Pitfalls

  1. Over-reduction

    • Don’t reduce terms during elaboration
    • Don’t reduce terms when adding to context
    • Only reduce when needed for type checking
  2. Loss of Original Terms

    • Always preserve original terms in elaborated results
    • Don’t reflect internal reductions in output
    • Keep source code structure intact
  3. Incorrect Reduction Context

    • Always use proper reduction context from current context
    • Don’t create new reduction contexts unnecessarily
    • Use consistent reduction mode for type checking

Coding Conventions

Imports

  • Document Utilities: When using utilities from the chester.utils.doc package (such as Doc, PrettierOptions, or extension methods like render), prefer using a single wildcard import: import chester.utils.doc.*.

String Formatting and Internationalization

  1. Use Template Strings for User-Facing Text

    • ALWAYS use template strings (t"") for user-facing messages, not string interpolation (s"")
    • ALWAYS use template strings (t"") for plain user-facing text, even without variables
    • Always import the internationalization package: import chester.i18n.*
    • This ensures proper internationalization and localization support
    // CORRECT - using template strings for user-facing text
    import chester.i18n.*
    
    val username = "Alice"
    val message = t"Hello $username, welcome to Chester!"
    
    // CORRECT - using template strings for plain text without variables
    val errorMessage = t"Operation failed. Please try again."
    
    // INCORRECT - using string interpolation for user-facing text
    val message = s"Hello $username, welcome to Chester!"
    
    // INCORRECT - using regular string literals for user-facing text
    val errorMessage = "Operation failed. Please try again."
    
  2. String Interpolation for Internal Use Only

    • Only use string interpolation (s"") for internal, non-user-facing strings
    • Examples include debug logging, internal identifiers, and non-displayed text
    // CORRECT - using string interpolation for internal/technical content
    val logMessage = s"DEBUG: Processing request from $username with params $params"
    val technicalId = s"${prefix}_${uuid}"
    
  3. Why This Matters

    • Template strings enable automatic translation and localization
    • They maintain consistent messaging across the application
    • They allow for future language additions without code changes
    • They ensure a better experience for non-English users

Core Principles

  1. Use C-style Braces

    • Always use braces for control structures, even for single-line blocks
    • Opening brace on the same line
    • Closing brace on its own line
    // Good
    if (condition) {
      doSomething()
    } else {
      doSomethingElse()
    }
    
    // Bad - No braces
    if (condition)
      doSomething()
    
    // Bad - Indentation-based syntax
    if (condition)
      doSomething()
      andThenThis()  // Unclear scope
    
  2. No Indentation-Based Syntax

    • Do not rely on indentation for scope
    • Always use explicit braces to define scope
    // Good
    def method() = {
      val result = {
        val x = compute()
        transform(x)
      }
      result
    }
    
    // Bad - Indentation-based
    def method() =
      val result =
        val x = compute()
        transform(x)
      result
    
  3. Function Definitions

    • Opening brace on the same line as the function definition
    • Use explicit return types
    // Good
    def process(input: String): Result = {
      // implementation
    }
    
    // Bad
    def process(input: String): Result =
      // implementation
    
  4. Pattern Matching

    • Use braces for case blocks
    • Align case statements
    // Good
    expr match {
      case Literal(value) => {
        process(value)
      }
      case Identifier(name) => {
        lookup(name)
      }
    }
    
    // Bad
    expr match
      case Literal(value) =>
        process(value)
      case Identifier(name) =>
        lookup(name)
    
  5. For Comprehensions

    • Use braces instead of indentation
    // Good
    for {
      x <- xs
      y <- ys
    } yield {
      combine(x, y)
    }
    
    // Bad
    for
      x <- xs
      y <- ys
    yield combine(x, y)
    

Additional Guidelines

  • Use parentheses for method calls even when they could be omitted
  • Prefer multi-line formatting with braces for complex expressions
  • Use explicit type annotations for public APIs
  • Keep line length reasonable (max 120 characters)
  • Use two-space indentation within braces

Enum Usage

  1. Prefer Enums Over String Literals

    • Use enums for representing categories, types, or states
    • Never use string literals as pseudo-enums
    // Good
    enum DebugCategory {
      case Cell
      case Tyck
      case Reducer
    }
    
    // Bad
    val category = "CELL" // Using string literals as enum values
    
  2. Enum Naming Conventions

    • Use PascalCase for enum type names
    • Use PascalCase for enum values
    • Keep enum names clear and descriptive
  3. Enum Usage

    • Import enum values when needed
    • Use qualified access for clarity in other contexts
    • Use pattern matching for exhaustive handling
    // Good usage
    import DebugCategory.*
    
    val category = Cell
    
    category match {
      case Cell => handleCell()
      case Tyck => handleTyck()
      case Reducer => handleReducer()
    }
    

Debugging Practices

  1. Understand Before Fixing

    • Always understand the root cause of an issue before attempting to fix it
    • Use the Debug utility with appropriate categories to trace program execution
    • Analyze call stacks to identify where issues occur
    • Create minimal test cases that reproduce the issue
  2. Systematic Debugging Process

    • Enable relevant debug logging (Debug.enable(DebugCategory.XXX))
    • Add strategic logging points to track object state and method execution
    • Verify assumptions about code behavior using logs and assertions
    • Isolate the issue by creating focused test cases
    • Document your findings to help others understand the problem
  3. Debug-First Approach

    • When facing complex issues, prioritize debugging over immediate fixes
    • Add temporary debugging code when needed, but mark it clearly and remove when done
    • Consider adding permanent debugging hooks for areas prone to issues
    • Document debugging insights even if they seem obvious
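
In practice this looks like the following (Debug.enable and DebugCategory are referenced above; the logging call is illustrative, so check the actual Debug utility for its API):

// Enable tracing for the area under investigation before reproducing the issue.
Debug.enable(DebugCategory.Tyck)

// Add a clearly-marked temporary logging point, and remove it when done:
// Debug.log(DebugCategory.Tyck, s"unify: lhs=$lhs rhs=$rhs") // hypothetical API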

Development Log

2025-04-24

LexerV2 Parser Completion

  • Completed Implementation:

    1. Block Termination Pattern

      • Implemented robust }\n pattern detection with context awareness
      • Added state tracking via newLineAfterBlockMeansEnds flag in LexerState
      • Ensured consistent handling between V1/V2 parsers
      • Preserved uniform symbol treatment principles
    2. Object Expression Enhancements

      • Added comprehensive support for multiple key types:
        • Identifier keys (e.g., { x = 1 })
        • String literal keys (e.g., { "x" = 1 })
        • Symbol literal keys (e.g., { 'x = 1 })
      • Implemented support for both = and => operators in object clauses
      • Added proper field parsing with comma-separated clauses
      • Enhanced error reporting for malformed objects
    3. Comment Handling Optimization

      • Replaced recursive comment collection with more efficient methods
      • Implemented skipComments() and pullComments() for better performance
      • Added metadata merging for proper comment preservation
      • Ensured consistent handling across different parsing contexts
    4. Function Call and Block Argument Improvements

      • Added special handling for blocks used as function arguments
      • Implemented context-aware parsing of function calls with block arguments
      • Ensured proper metadata preservation for source positions
      • Added consistent handling of nested function calls
  • Implementation Strategy:

    • Used token-based state machine approach for better performance
    • Maintained uniform symbol treatment for all operators
    • Implemented comprehensive error handling
    • Added extensive debug logging for easier maintenance
  • Completed Features:

    • ✅ Block termination with context tracking
    • ✅ Object expressions with multiple key types
    • ✅ Comment preservation and optimization
    • ✅ Function call and block argument handling
    • ⚠️ IMPORTANT: Flat operator sequence representation without precedence handling (intentional design principle, often misunderstood)
  • Files Modified:

    • reader/shared/src/main/scala/chester/readerv2/LexerV2.scala
    • Documentation files

2025-04-21

Simplified Type Constraint System by Removing Hacky Approach

  • Problem: The previous approach for handling type constraints used a complicated “cell coverage” system that felt like a hack. The AutoConnect propagator and createTermStructureConstraints method added unnecessary complexity and indirection to the type system.

  • Solution: Completely redesigned the approach to directly handle type relationships without intermediate propagators.

  • Implementation Details:

    1. Removed Hacky Components:

      • Eliminated the AutoConnect propagator entirely
      • Removed establishTypeConstraints and all related “cell coverage” methods
      • Simplified the code by removing several layers of indirection
    2. Direct Type Relationship Handling:

      • Added direct connections between types directly during unification
      • Created explicit relationships between union types and their components
      • Connected function call components immediately when encountered
      • Simplified codebase by handling type constraints at the point where types are created or unified
    3. Improved Union Type Management:

      • Enhanced handling of union-to-union subtyping with direct component connections
      • Improved union-to-specific and specific-to-union handling
      • Created clearer, more maintainable code for union type relationships
    4. Function Call Processing:

      • Implemented direct processing of function call components
      • Added recursive handling for nested function calls
      • Eliminated redundant constraint establishment code
  • Benefits:

    • Cleaner Code: Removed a complicated system that was difficult to reason about
    • More Direct: Handles type constraints at the point where types are created or unified
    • Better Maintainability: Simpler, more understandable code with fewer moving parts
    • Less Indirection: Eliminated multiple layers of function calls for basic operations
    • Same Functionality: Maintains the same type checking capabilities with cleaner implementation
  • Files Modified:

    • semantic/shared/src/main/scala/chester/tyck/TyckPropagator.scala
    • semantic/shared/src/main/scala/chester/tyck/Elaborater.scala
  • Verification:

    • All existing tests continue to pass
    • Compiler successfully builds the project
    • Type error reporting works as expected

Replaced EnsureCellCoverage Hack with AutoConnect Propagator

  • Implemented Improvements:

    1. Replaced EnsureCellCoverage with AutoConnect

      • Removed the old placeholder EnsureCellCoverage propagator that only marked cells as covered
      • Implemented new AutoConnect propagator that establishes proper type connections based on term structure
      • Added logic to analyze different term types (unions, intersections, function calls) and create appropriate connections
    2. Enhanced Cell Coverage Mechanisms

      • Added support for creating proper connections between terms and their components
      • Implemented smart term structure analysis to establish meaningful type relationships
      • Added default value support for unconstrained type variables using Any type
    3. Improved Function Call Handling

      • Added recursive connection for function calls and their arguments
      • Special handling for complex argument terms including nested function calls
      • Improved handling of CallingArgTerm to ensure all components are properly connected
    4. Implementation Approach:

      • Replaced all EnsureCellCoverage instances with AutoConnect calls
      • Updated Elaborater and TyckPropagator to use the new approach
      • Added cell management helper methods and proper zonking support
      • Added support for default values in unconstrained cells
  • Files Modified:

    • semantic/shared/src/main/scala/chester/tyck/TyckPropagator.scala
    • semantic/shared/src/main/scala/chester/tyck/Elaborater.scala
    • semantic/shared/src/main/scala/chester/utils/propagator/ProvideCellId.scala
  • Benefits:

    • Eliminated artificial cell coverage without meaningful constraints
    • Improved type error detection through proper constraint checking
    • Reduced conceptual complexity in the propagator network
    • Enhanced maintainability as the type system evolves
    • Fixed cell coverage issues for union types and other complex types

2025-04-05

LexerV2 Parser Enhancements

  • Block Termination: Implemented context tracking for }\n pattern detection, ensuring consistent handling between V1/V2 parsers while preserving uniform symbol treatment

  • Object Expressions: Added support for identifier, string, and symbol keys with both = and => operators

  • Token Optimization:

    • Simplified token extractors using common helper methods
    • Enhanced comment collection and attachment
    • Improved handling of leading/trailing comments
  • Implementation Strategy:

    1. Added context tracking for block termination
    2. Maintained uniform symbol treatment for all operators
    3. Enhanced object expression parsing
    4. Optimized token handling for better maintainability
  • Migration Status:

    • ✅ Pattern matching with proper block handling
    • ✅ Block termination with context tracking
    • ✅ Basic object expressions
    • ✅ Comment preservation
    • 🟡 Complex object expressions (in progress)
    • 🔴 Source maps and error recovery (planned)
  • Modified: LexerV2.scala, documentation files

2025-03-14

Parser Improvements and Refactoring

  • Completed Improvements:

    1. Operator Precedence Enhancement

      • Issue: Complex operator sequences not handled correctly
      • Improvement: Added operator precedence table and parsing logic
      • Benefits: Better handling of complex expressions
      • Implementation: Added precedence table and parsing logic
    2. Whitespace Handling Enhancement

      • Issue: Inconsistent whitespace handling
      • Improvement: Added dedicated whitespace parsing
      • Benefits: More consistent parsing behavior
      • Implementation: Added whitespace parsing logic
    3. Error Recovery Enhancement

      • Issue: Parser failed on first error
      • Improvement: Added error recovery mechanisms
      • Benefits: Better error reporting and recovery
      • Implementation: Added error recovery logic
    4. Token Type Enhancement

      • Issue: Limited token type support
      • Improvement: Added more token types
      • Benefits: Better token categorization
      • Implementation: Added new token types
    5. Source Position Tracking

      • Issue: Inaccurate error locations
      • Improvement: Enhanced position tracking
      • Benefits: Better error messages
      • Implementation: Added position tracking
    6. Test Coverage Enhancement

      • Issue: Limited test coverage
      • Improvement: Added more test cases
      • Benefits: Better code quality
      • Implementation: Added test cases
    7. Uniform Operator Handling

      • Issue: Special case handling for “=>” operator
      • Improvement: Removed special cases, unified operator parsing
      • Benefits: More consistent operator handling
      • Implementation:
        • Removed special case in parseOperator()
        • Now using StringBuilder for all operators
        • All tests passing
    8. LexerV2 Optimization and Refactoring

      • Issue: Code duplication and maintainability issues
      • Improvement: Restructured for modularity and conciseness
      • Benefits: Better code organization and maintainability
      • Implementation:
        • Extracted repeated logic into helper methods
        • Improved state management
        • Fixed compilation error (replaced advance() with state.advance())
        • Remaining warnings about unused private members and pattern match exhaustiveness
  • Files Modified:

    • reader/src/main/scala/chester/readerv2/LexerV2.scala
    • reader/src/main/scala/chester/readerv2/Tokenizer.scala
  • Tests: All existing tests passing

V1/V2 Semantic Consistency

  • Implementation Plan:
    1. Analyze test files still using parseAndCheck to identify semantic differences
    2. Prioritize addressing the complex operator sequence handling first
    3. Implement proper handling for prefix and mixfix operators in V2
    4. Test and verify with existing test cases
    5. Update tests to use parseAndCheckBoth once they pass
    6. Document any intentional semantic differences that won’t be addressed
  • Benefits:
    • More consistent parsing behavior between V1 and V2
    • Higher confidence in V2 parser for all use cases
    • Easier migration path from V1 to V2
    • More tests running against both parsers

Object Expressions

  • Implementation Plan:
    1. Review current object parsing implementation
    2. Identify missing features compared to V1
    3. Implement support for complex object syntax
    4. Test with a variety of object expressions

Telescope Parsing

  • Issue: Telescope parsing is not yet implemented in V2
  • Improvement: Implement telescope parsing in V2 to match V1 semantics
  • Implementation Plan:
    1. Analyze V1 telescope parsing implementation
    2. Design and implement equivalent functionality in V2
    3. Test with existing telescope tests

Block Termination and Newline Handling in V2 Parser

Problem Analysis:

When examining why the pattern matching test fails with V2 parser, I identified several issues:

  1. Newline Handling:

    • V1 parser has implicit newline handling that affects expression termination
    • This is particularly important for blocks that end with }
    • V2 parser needs to check for Token.Newline after a block and terminate expressions appropriately
    • This affects the parseRest method in LexerV2.scala
  2. Pattern Matching Block Structure:

    • Pattern matching has a unique structure: identifier match { ... }
    • The V2 parser needs a general approach to handle this construct without introducing special cases
    • The challenge is maintaining uniform handling while correctly parsing pattern matching
  3. Test Compatibility:

    • Many tests use parseAndCheckBoth which runs both V1 and V2 parsers
    • Tests with newlines after blocks fail because V2 doesn’t terminate expressions correctly
    • Pattern matching tests are particularly affected by this issue
  4. StringIndexOutOfBoundsException in Error Reporting:

    • When using parseAndCheckBoth, error reporting code in parseAndCheck.scala can throw StringIndexOutOfBoundsException
    • This happens when trying to extract line information for error messages
    • Requires bounds checking to prevent exceptions
  5. Parser Architecture Tradeoffs:

    • We need to balance flexibility with consistency
    • Simple tokenization approach makes it hard to handle significant whitespace/newlines
    • Excessive special cases make the parser harder to maintain and reason about
    • Context-free parsing is strongly preferred over context-dependent approaches
    • A simple, uniform rule (like always ending an OpSeq when seeing }\n) is better than complex contextual rules

Possible Approaches:

  1. Context-Free Newline Handling (PREFERRED):

    • Always end OpSeq expression when encountering }\n (closing brace followed by newline)
    • Apply this rule uniformly regardless of surrounding context
    • Uniform treatment of all block terminations without special cases
    • No need to track or analyze the meaning of identifiers like “match”
    • Simple, predictable parsing behavior that aligns with Chester’s design principles
  2. Token Differentiation Strategy:

    • Enhance tokenizer to differentiate between whitespace and newlines
    • This allows the parser to recognize expression boundaries better
    • Requires minimal special-casing in the parser
  3. Whitespace with Newline Flag:

    • Instead of creating a separate Token.Newline class, enhance Token.Whitespace with a boolean flag
    • Add a canActAsNewline flag to indicate if this whitespace contains characters that can terminate expressions
    • This simplifies tokenization while still providing the necessary information to the parser
    • Reduces token type proliferation and maintains a cleaner token hierarchy
    • Parser can check token.isWhitespace && token.canActAsNewline when making termination decisions
    • Avoids the overhead of creating a completely new token type while gaining the same benefits
  4. Enhanced Block Parsing:

    • Modify block parsing to handle different types of blocks in a more general way
    • Use structural information rather than keyword recognition
    • This approach maintains parser consistency while handling pattern matching
  5. Contextual Parsing (LEAST PREFERRED):

    • Use context information to parse expressions differently in different situations
    • For pattern matching, recognize the context and adjust parsing rules
    • More complex and violates the preference for context-free parsing
    • Harder to maintain and reason about

Recommended Approach: The Context-Free Newline Handling approach combined with the Whitespace with Newline Flag provides the simplest and most maintainable solution. This approach:

  1. Maintains Chester’s core design principles of uniform symbol treatment
  2. Preserves strict separation of parsing from semantic analysis
  3. Applies a consistent rule for all block terminations without special cases
  4. Avoids context-dependent parsing which is harder to maintain
  5. Treats }\n as a syntactic boundary in all contexts, which is simpler and more predictable

The parser should simply terminate an OpSeq when encountering a }\n pattern, regardless of what identifiers (like “match”) may be present in the sequence. This maintains the context-free nature of the parser and avoids the complexity of context-dependent rules.

Integration with Existing Code:

The proposed changes will affect several components of the current codebase:

  1. Consistency with Operator Handling:

    • The parser will continue to treat all symbols uniformly, including ‘match’
    • No special precedence rules will be added in the parser itself
    • Pattern matching will be represented as a standard OpSeq in the AST
    • Any special handling of ‘match’ will occur in subsequent passes, not in the parser
  2. Interaction with Block Parsing:

    • Block parsing will remain unchanged
    • The parser will create a standard OpSeq structure for match expressions
    • Semantic analysis of pattern matching occurs after parsing, not during

Performance Considerations:

  1. Token Differentiation Impact:

    • Adding Token.Newline will slightly increase token count but with negligible memory overhead
    • Parsing performance should not be significantly affected
    • May improve performance by reducing backtracking and error recovery needs
  2. Operator-Based Solution Efficiency:

    • Leverages existing operator handling machinery
    • No additional parsing passes required
    • Consistent with current performance profile of operator parsing

Examples:

Current Parsing Result (V1):

// Input:
notification match {
  case Email(sender, _) => handleEmail(sender)
  case SMS(number, _) => handleSMS(number)
}

// AST (simplified):
OpSeq([
  Identifier("notification"),
  Identifier("match"),
  Block([
    OpSeq([Identifier("case"), FunctionCall("Email", ...), Identifier("=>"), ...]),
    OpSeq([Identifier("case"), FunctionCall("SMS", ...), Identifier("=>"), ...])
  ])
])

Desired V2 Parsing Result:

// Same input should produce identical AST structure with flat OpSeq
// The parser has no knowledge of what 'match' means - it's just an identifier
// Structure interpretation happens in later passes, not during parsing
OpSeq([
  Identifier("notification"),
  Identifier("match"),
  Block([
    OpSeq([Identifier("case"), FunctionCall("Email", ...), Identifier("=>"), ...]),
    OpSeq([Identifier("case"), FunctionCall("SMS", ...), Identifier("=>"), ...])
  ])
])

Reference Implementation Strategy:

  1. Phased Approach:

    • First implement the whitespace enhancement with newline flag
    • Ensure the parser treats ‘match’ just like any other identifier
    • Verify match expressions produce standard OpSeq nodes
    • Test with existing pattern matching tests to ensure correct AST structure
  2. Validation Criteria:

    • All existing tests should pass when using both parsers
    • Parser should produce identical AST structures for both V1 and V2
    • No special handling for any identifiers including ‘match’ in the parser
    • Maintain uniform treatment of symbols throughout the parser
    • Preserve strict separation between parsing and semantic analysis

Learning from Other Languages:

  1. Scala’s Approach:

    • Scala treats ‘match’ as a special keyword with defined precedence
    • Pattern matching is handled as a distinct grammar construct
    • This differs from Chester’s uniform symbol treatment philosophy
  2. Rust’s Approach:

    • Rust uses match expressions with block-based syntax
    • Parser explicitly recognizes the ‘match’ keyword
    • Arms of match expressions have specific parsing rules
    • Chester can adapt Rust’s block structure handling while maintaining uniform symbol treatment

Backward Compatibility Guarantees:

  1. Parsing Output Compatibility:

    • The V2 parser will produce ASTs semantically equivalent to V1 for pattern matching
    • Existing code that consumes ASTs will continue to work without modification
    • The structure of OpSeq nodes for pattern matching will be preserved
  2. What Might Change:

    • Internal source position information might be slightly different
    • Comment attachment points could vary in edge cases
    • Error messages may be more precise or different in wording

Transition Plan:

  1. For Test Code:

    • Gradually migrate tests from parseAndCheck to parseAndCheckBoth
    • Document any tests that must remain on V1 parser temporarily
    • Add specific tests for pattern matching edge cases
  2. For Production Code:

    • The V2 parser implementation can be introduced behind a feature flag
    • Allow both parsers to run in parallel initially for validation
    • Collect metrics on parsing compatibility and performance
    • Full migration only after all tests pass with both parsers
  3. For Documentation:

    • Update parser documentation to reflect the new approach
    • Provide migration notes for any edge cases
    • Document the rationale behind the design decisions

Implementation Plan:

  1. Whitespace Enhancement:

    • Enhance Token.Whitespace with a canActAsNewline flag
    • Modify tokenizer to set this flag appropriately when encountering newline characters
    • Keep token handling simple and uniform
  2. Context-Free Expression Termination:

    • Update LexerV2.parseRest() to implement the simple }\n termination rule (see the sketch after this plan)
    • Add condition: if (previousToken == "}" && currentToken.isWhitespace && currentToken.canActAsNewline)
    • Always terminate OpSeq when this pattern is encountered, regardless of context
    • No special cases or context-dependent decisions
    • Consistent rule application across all expressions
  3. Uniform Symbol Treatment:

    • Maintain the flat OpSeq production for all expressions including pattern matching
    • No special handling for any identifiers (including ‘match’)
    • Apply termination rules based purely on token patterns, not semantic meaning
    • Let later passes handle pattern matching semantics
  4. Error Handling Improvements:

    • Add bounds checking in parseAndCheck.scala to prevent StringIndexOutOfBoundsException
    • Ensure safe substring extraction for error messages
  5. Testing Strategy:

    • Fix the core expression termination in V2 parser using the context-free approach
    • Verify pattern matching tests pass with both parsers
    • Gradually migrate more tests to use parseAndCheckBoth
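A minimal, self-contained Scala sketch of steps 1 and 2 of this plan follows. The Token shapes here are illustrative stand-ins, not the real definitions in the reader module:

sealed trait Token
object Token {
  case object RBrace extends Token
  case object EOF extends Token
  // Step 1: whitespace carries a flag set by the tokenizer on '\n'
  final case class Whitespace(canActAsNewline: Boolean) extends Token
  final case class Other(text: String) extends Token
}

// Step 2: the context-free }\n rule: a closing brace followed by
// newline-capable whitespace (or end of input) always terminates the OpSeq.
def shouldTerminateOpSeq(previous: Option[Token], current: Token): Boolean =
  (previous, current) match {
    case (Some(Token.RBrace), Token.Whitespace(canActAsNewline)) => canActAsNewline
    case (Some(Token.RBrace), Token.EOF)                         => true
    case _                                                       => false
  }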

Current Status:

  • Need to implement newline token handling
  • Need to enhance operator-based approach for pattern matching
  • Need to improve error reporting with bounds checking
  • Pattern matching test runs with V1 parser but fails with V2
  • More work needed on general parsing of pattern matching without special cases

The }\n Pattern Problem

The Chester parser treats the }\n pattern (closing brace followed by newline) as a significant syntax element for terminating expressions in specific contexts. This pattern plays a crucial role in:

  1. Function/Method Definitions

    def factorial(n) {
      if n <= 1 then 1
      else n * factorial(n - 1)
    }  // <- newline here ends the function definition
    
    val result = factorial(5); // Next statement
    
  2. Match Expressions

    val result = notification match {
      case Email(sender, _) => {
        println(sender);
        "Email received"
      }  // <- block for this case
      case SMS(number, _) => "SMS received";
    }  // <- newline here ends the match expression
    
    println(result); // Next statement
    

Current Implementation Issues

In the V2 parser:

  1. The parseBlock method in LexerV2.scala recognizes the closing brace (RBrace) as terminating a block but doesn’t consider what follows it (newline or not)
  2. This causes inconsistencies between V1 and V2 parsers in how expressions are terminated
  3. The V1 parser considers what comes after the closing brace, but the V2 parser currently doesn’t

Proposed Solution

To address this issue while maintaining context-free parsing principles:

  1. Extend Token State Tracking

    • Modify the LexerState to track if the previous token was a RBrace
    • Add a helper method like isAfterClosingBrace() to check this state
  2. Update Expression Termination Logic

    • In key expression parsing methods, check for the }\n pattern by testing if:
      • Previous token was RBrace
      • Current token is Whitespace containing a newline or is EOF
    • This check should be made in both the parseExpr and parseExprList methods
  3. Ensure Uniform Treatment

    • Apply the same termination rules consistently across all expression contexts
    • This maintains the context-free parsing principle while addressing the termination issue
  4. Add Test Cases

    • Create specific test cases for the }\n pattern in different contexts
    • Verify that both parsers (V1 and V2) handle the pattern identically

This solution preserves the uniform symbol treatment principle while ensuring that the }\n pattern is properly handled as a syntactic terminator where appropriate.

Implementation Strategy

  1. Start with smaller, isolated improvements that don’t affect the overall architecture
  2. Add comprehensive tests before making significant changes
  3. Update one component fully before moving to the next
  4. Prioritize improvements that enhance maintainability first
  5. Verify each change with existing tests before proceeding to the next improvement
  6. Complete high-priority features like comment preservation
  7. Update documentation to reflect implementation progress

2025-03-09

Fixed OnceCell Concurrent Write Issue

  • Root Cause: Type-level function applications attempted multiple writes to same OnceCell
  • Solution: Added an existence check before cell population (see the sketch below)
  • Files Modified:
    • tyck/src/main/scala/chester/tyck/TyckPropagator.scala
    • tyck/src/test/scala/chester/tyck/TypeCellSpec.scala
  • Tests Added:
    • Concurrent type-level function application collisions
    • Stress test with 100 parallel cell writes
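
A toy illustration of the "check before populate" idea, here expressed with an atomic compare-and-set; the real OnceCell in the tyck module differs, and this sketch only shows the shape of the fix:

import java.util.concurrent.atomic.AtomicReference

final class OnceCellSketch[T] {
  private val ref = new AtomicReference[Option[T]](None)
  def read: Option[T] = ref.get()
  def fill(value: T): Unit =
    // compareAndSet lets the first write win; a second identical write is
    // ignored instead of failing as a double write.
    if (!ref.compareAndSet(None, Some(value)) && !ref.get().contains(value))
      throw new IllegalStateException("conflicting writes to cell")
}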

2025-03-15

Type System Improvements Completed

  • Implemented Improvements:

    1. Enhanced Type Structure Reduction in DefaultReducer

      • Improved reduceTypeStructure method to properly handle:
      • Union types by recursively reducing their component types
      • Intersection types with proper reduction
      • Type-level function applications with recursive reduction for complex result types
      • Enhanced handling of nested function applications
    2. Alpha-Equivalence Checking in TyckPropagator

      • Enhanced areAlphaEquivalent method to:
      • Properly handle function types with bound variables
      • Compare union and intersection types correctly
      • Fall back to regular equality for other cases
      • Added bound variable tracking in alpha-equivalence
    3. Enhanced Type Level Comparison

      • Improved type level comparison in tryUnify method:
      • Implemented more flexible compatibility rules between different level types
      • Allow finite level types to be compatible with unrestricted level types
      • Maintain controlled asymmetric compatibility
    4. Cell Coverage Mechanisms

      • Added dedicated helper method to ensure cell coverage
      • Implemented self-coverage for union type components
      • Fixed early returns that left cells uncovered
    5. TraitCallTerm Implementation

      • Added TraitCallTerm in Term.scala
      • Laid groundwork for trait-record subtyping relationships
  • Files Modified:

    • semantic/shared/src/main/scala/chester/tyck/TyckPropagator.scala
    • semantic/shared/src/main/scala/chester/tyck/Elaborater.scala
    • semantic/shared/src/main/scala/chester/reduce/Reducer.scala
    • syntax/shared/src/main/scala/chester/syntax/core/Term.scala
  • Next Steps:

    • Complete trait-record subtyping implementation
    • Implement union-to-union subtyping case
    • Fix remaining cell coverage issues in union-subtype.chester
    • Add comprehensive test suite for traits and union types

2025-03-16

Term Definition Refactoring

  • Implemented Changes:

    1. Unified Term Definitions

      • Consolidated all Term definitions into a single Term.scala file
      • Eliminated the separate spec/simple/truffle files
      • Simplified the codebase by removing the need for converters
    2. Updated Documentation

      • Updated development.md with new Term implementation approach
      • Updated tyck-improvement-proposal.md to reflect Term changes
      • Updated type-checking-system.md with current Term usage examples
    3. Simplified Type System

      • Removed the need for trait interfaces with *C and *F suffixes
      • Streamlined the inheritance hierarchy
      • Made the codebase more maintainable with simplified Term definitions
  • Files Modified:

    • syntax/shared/src/main/scala/chester/syntax/core/Term.scala
    • docs/src/dev/development.md
    • docs/src/dev/tyck-improvement-proposal.md
    • docs/src/dev/type-checking-system.md
    • docs/src/dev/devlog.md
  • Files Removed:

    • syntax/shared/src/main/scala/chester/syntax/core/spec/Term.scala
    • syntax/shared/src/main/scala/chester/syntax/core/simple.scala
    • syntax/jvm/src/main/scala/chester/syntax/core/truffle.scala
    • syntax/jvm/src/main/scala/chester/syntax/core/convertToTruffle.scala
    • syntax/shared/src/main/scala/chester/syntax/core/convertToSimple.scala
  • Benefits:

    • Simplified codebase structure
    • Reduced code duplication
    • Eliminated need for converters
    • Made adding new Term types easier and less error-prone
    • Improved maintainability

2025-03-19

Trait Implementation Completed

  • Implemented Improvements:

    1. Basic Trait Support

      • Added support for empty traits and record extension using <: syntax
      • Implemented trait-record subtyping relation in type system
      • Added proper trait type representation with TraitTypeTerm
      • Added appropriate error reporting for trait implementation issues
    2. Modified Elaborater for Trait Handling

      • Enhanced ElaboraterBlock.processTraitStmt to handle trait bodies properly
      • Updated processRecordStmt to elaborate the extendsClause for traits
      • Added handling for trait extension in subtyping relationships
      • Implemented trait-to-trait inheritance checks in TyckPropagator
    3. Trait Field Handling

      • Added special handling for field declarations within trait bodies
      • Implemented context tracking to recognize trait processing context
      • Added withProcessingType method to track context in elaboration
      • Created system to handle trait field requirements (future work)
    4. Error Types for Traits

      • Added NotATrait error type for using non-traits in extends clause
      • Added NotImplementingTrait for trait implementation errors
      • Added MissingTraitField for missing required trait fields
      • Enhanced error reporting for trait-related issues
  • Files Modified:

    • semantic/shared/src/main/scala/chester/tyck/TyckPropagator.scala
    • semantic/shared/src/main/scala/chester/tyck/ElaboraterBlock.scala
    • semantic/shared/src/main/scala/chester/tyck/Context.scala
    • semantic/shared/src/main/scala/chester/tyck/Context2.scala
    • tests/tyck/basic-trait.chester
  • Implementation Approach:

    • Created temporary solution for trait field declarations to get basic traits working
    • Added field for tracking processing context in the Context class
    • Simplified trait checking to focus on basic extension relationship
    • Used the propagator network for maintaining trait subtyping constraints
  • Test Status:

    • All tests passing, including the basic-trait test
    • Added support for more complex traits with field requirements
    • Test coverage for basic trait extension and trait field access
  • Next Steps:

    • Enhance trait field checking for complete field requirement verification
    • Add support for multiple trait inheritance
    • Implement trait method and default implementations
    • Add more comprehensive trait test cases

2025-03-22

Pattern Matching Fix Implementation for V2 Parser

Problem Analysis

The V2 parser was failing to correctly parse pattern matching expressions with blocks after the => operator. This issue was particularly visible in the match2 test in PatternMatchingTest, which showed a mismatch between expected and actual AST structures.

Root Cause

  1. Missing Context Tracking:

    • V1 parser used ParsingContext(newLineAfterBlockMeansEnds = true) for contextual parsing
    • V2 parser lacked this contextual awareness for block termination after newlines
  2. AST Structure Discrepancies:

    • V1 produces consistent OpSeq structures with Identifiers for operators
    • V2 wasn’t properly maintaining this structure in pattern matching contexts

Critical Insight: Uniform Symbol Treatment

The key insight that guided our solution was the need to maintain Chester’s uniform symbol treatment:

  • V1 parser treats ALL operators uniformly with no special cases
  • => is handled as a plain identifier, not a special operator
  • Context affects only block termination, not token parsing
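
To make the uniform treatment concrete, here is an illustrative Scala fragment (with stand-in AST types, not the real syntax module) showing the flat shape V1 produces for notification match { ... }:

sealed trait Expr
final case class Identifier(name: String) extends Expr
final case class Block(body: Vector[Expr]) extends Expr
final case class OpSeq(seq: Vector[Expr]) extends Expr

// `match` is an ordinary Identifier inside a flat OpSeq; there is no
// dedicated keyword node in the AST.
val matchExpr: Expr = OpSeq(Vector(
  Identifier("notification"),
  Identifier("match"),
  Block(Vector(/* case clauses, themselves ordinary OpSeq elements */))
))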

Implementation Approach

We implemented a 3-step fix that maintains uniform symbol treatment:

  1. Added Context to LexerState:

    case class LexerState(
        // Existing fields...
        newLineAfterBlockMeansEnds: Boolean = false
    ) {
      def withNewLineTermination(enabled: Boolean): LexerState =
        if (this.newLineAfterBlockMeansEnds == enabled) this
        else copy(newLineAfterBlockMeansEnds = enabled)
    }
    
  2. Updated checkForRBraceNewlinePattern:

    • Added context-awareness to only terminate expressions in the right context
    • Maintained the existing newline detection logic
    def checkForRBraceNewlinePattern(state: LexerState): Boolean = {
      // Only consider }\n as terminating if we're in the right context
      if (!state.newLineAfterBlockMeansEnds) return false
    
      // Rest of existing implementation
      // ...
    }
    
  3. Enabled Context for All Blocks:

    def parseBlock(state: LexerState): Either[ParseError, (Block, LexerState)] = {
      val contextState = state.withNewLineTermination(true)
      // Rest of implementation using contextState
      // ...
    }
    

AST Structure Matching

While the block termination fix allows basic pattern matching to work, there remain differences in the AST structure between V1 and V2 parsers:

=> Diff (- expected, + obtained)
           meta = None
-        )
-      ),
-      meta = None
-    ),
-    OpSeq(
-      seq = Vector(
+        ),
         Identifier(
               Identifier(
+                name = "name",
+                meta = None
+              ),
+              Identifier(
                 name = ...,
                 meta = ...
-              ),
-              ...
+              )
             ),

These structural differences need to be resolved to ensure full compatibility between parsers. Current theories:

  • Different handling of nested OpSeq structures
  • Variance in how block expressions are attached to pattern matches
  • Potential issues with comment attachment or source positions

Testing Approach

We’re using a phased testing approach:

// Current test approach - used during development
parseAndCheck(input, expected)  // Tests with V1 parser only

// Goal after full AST compatibility is achieved
parseAndCheckBoth(input, expected)  // Tests with both V1 and V2 parsers

Current tests in PatternMatchingTest show:

  • All tests using parseAndCheck pass with V1 parser
  • Simple pattern matching (no blocks after =>) passes with parseAndCheckBoth
  • Complex pattern matching with blocks still shows AST differences

Next Steps

  1. Investigate exact AST structural differences

    • Run detailed tests with AST structure dumps
    • Compare parsing behavior for complex pattern matching
  2. Enhance debug output

    • Add more detailed logging of AST structures
    • Enable easier comparison between V1 and V2 outputs
  3. Add targeted fixes for AST compatibility

    • Maintain uniform symbol treatment
    • Ensure consistent structure for nested expressions
  4. Update tests to use parseAndCheckBoth when fully fixed

    • Migrate tests incrementally as compatibility issues are resolved
    • Document any intentional differences

Files Modified

  • reader/src/main/scala/chester/readerv2/LexerV2.scala

This implementation represents significant progress in aligning V1 and V2 parser behaviors while maintaining Chester’s core design principles of uniform symbol treatment and context-free parsing.

2025-03-23

Comprehensive Type System Improvements Summary

The type system for Chester has undergone significant improvements, particularly in the areas of union types, cell coverage, and trait implementation. Key completed improvements include:

1. Union Type Subtyping Implementation

Completed Features:

  • Implemented three key union subtyping scenarios:

    • Union-to-Union Subtyping: (A|B) <: (C|D) with proper component type relationships
    • Specific-to-Union Subtyping: A <: (B|C) for cases like passing Integer to Integer|String
    • Union-to-Specific Subtyping: (A|B) <: C for returning a union from a function with specific return type
  • Added cell coverage for all union types and their components:

    private def ensureCellCoverage(cell: CellId[Term], cause: Expr)(using
        state: StateAbility[Tyck],
        ctx: Context,
        ck: Tyck
    ): Unit = {
      // Connect cell to itself to ensure it's covered by at least one propagator
      state.addPropagator(UnionOf(cell, Vector(cell), cause))
    }
    
  • Implemented proper connections in the propagator network:

    • Added direct connections between union types
    • Added connections from union types to their components
    • Ensured all cells are covered by propagators during unification

2. Cell Coverage Mechanisms

Implemented Solutions:

  • Added self-coverage mechanism to prevent “cells not covered” errors during zonking
  • Implemented comprehensive coverage for complex types and their components
  • Added safeguards to avoid early returns that could leave cells uncovered
  • Added debugging support for cell coverage issues

This solution systematically addresses cell coverage issues by ensuring every cell in the propagator network is properly connected, which is essential for the constraint-based type checking system to function correctly.

3. Enhanced Type Level Comparison

Completed Improvements:

  • Enhanced how type levels are compared during unification with asymmetric compatibility:
    case (Type(level1, _), Type(level2, _)) => {
      (level1, level2) match {
        case (LevelFinite(_, _), LevelUnrestricted(_)) => true // Finite is compatible with unrestricted
        case (LevelUnrestricted(_), LevelFinite(_, _)) => false // Unrestricted is not compatible with finite
        case _ => level1 == level2 // For other cases, keep exact equality
      }
    }
    
  • Added recursive reduction for type-level function applications
  • Improved alpha-equivalence checking for dependent types

4. Trait Implementation

Implemented Features:

  • Added basic trait definition and record extension with <: syntax
  • Implemented trait-record subtyping relation in the type system
  • Added trait type representation with TraitTypeTerm
  • Added trait-to-trait inheritance checking
  • Implemented context tracking for trait field declarations
  • Added appropriate error reporting for trait-related issues

The trait implementation provides a solid foundation for more advanced features planned in future work, such as complete field requirement verification, multiple trait inheritance, and trait methods with default implementations.

5. Type Structure Reduction Improvements

  • Enhanced the reducer to properly handle union and intersection types:

    private def reduceTypeStructure(term: Term)(using ctx: ReduceContext, r: Reducer): Term = {
      term match {
        case Union(types, meta) => {
          val reducedTypes = types.map(ty => reduceTypeStructure(r.reduce(ty)))
          Union(reducedTypes, meta)
        }
        // Similar handling for Intersection and function calls
        // ...
      }
    }
    
  • Added special handling for type-level function applications within type comparisons

Next Steps

While significant progress has been made, some areas still need work:

  • Fix remaining edge cases in union-subtype.chester.todo test
  • Complete type-level function application enhancement for nested applications
  • Enhance trait field requirement verification
  • Implement multiple trait inheritance support
  • Add trait methods and default implementations

Files Modified:

  • semantic/shared/src/main/scala/chester/tyck/TyckPropagator.scala
  • semantic/shared/src/main/scala/chester/tyck/Elaborater.scala
  • semantic/shared/src/main/scala/chester/reduce/Reducer.scala
  • syntax/shared/src/main/scala/chester/syntax/core/Term.scala
  • semantic/shared/src/main/scala/chester/tyck/ElaboraterBlock.scala

2025-03-24

Parser Improvements Completed

Uniform Operator Handling

  • Issue: Special case handling for the “=>” and “=” operators in parseOperator() method
  • Improvement:
    • Removed special case handling for the “=>” operator
    • Ensured operators are treated uniformly in the tokenizer
    • Treated “=>” like any other operator in the tokenizing process
  • Benefits:
    • More consistent operator handling
    • Simplified code in the parseOperator() method
    • Reduced special cases, making the code more maintainable
    • Better alignment with Chester’s design principles of uniform symbol treatment
  • Implementation:
    • Removed special case code for the “=>” operator in the parseOperator() method
    • Modified the method to uniformly parse all operators using a StringBuilder
    • Verified all tests pass with the change, including operator tests
    • Ensured consistent behavior with the original implementation

LexerV2 Optimization and Refactoring

  • Issue: LexerV2.scala had redundant code and called a nonexistent advance() method instead of state.advance()
  • Improvement:
    • Optimized and refactored the code structure for better maintainability
    • Fixed compilation errors by replacing advance() with state.advance()
    • Improved modularity by extracting repeated logic into helper methods
    • Enhanced state management for better consistency across the codebase
  • Benefits:
    • Improved maintainability and readability of the lexer code
    • Fixed compilation errors resulting in more stable code
    • Better organization of related functionality
    • Reduced duplication for easier future updates
  • Implementation:
    • Replaced direct advance() calls with state.advance() where appropriate
    • Restructured code for better organization and clarity
    • Maintained functionality while improving code quality
    • Ensured all tests continued to pass after changes

Comment Preservation Implementation

  • Issue: V2 parser didn’t preserve comments in the AST, unlike V1
  • Improvement:
    • Added comment collection and attachment similar to V1 parser
    • Implemented support for both leading and trailing comments
    • Created mechanism for handling comments in blocks and at block boundaries
  • Benefits:
    • Full feature parity with V1 parser for comment handling
    • Source code formatting preservation
    • Support for documentation generation tools
  • Implementation:
    • Added methods to collect and categorize comments during parsing
    • Integrated with ExprMeta and CommentInfo structures
    • Enhanced expression creation to include comment attachment
    • Added test cases with various comment placements

TokenExtractors Refinement

  • Issue: Verbose and redundant token extractors made the code harder to maintain
  • Improvement:
    • Simplified token extractors using a common helper function
    • Reduced code duplication for source position extraction
    • Made the code more maintainable with concise patterns
  • Benefits:
    • More readable and maintainable token handling
    • Less code duplication
    • Better abstraction of common patterns
  • Implementation:
    • Created a posExtract function to centralize extraction logic
    • Refactored individual token extractors to use the helper
    • Maintained the same semantics with less code
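
A self-contained sketch of the posExtract pattern described above; the Token and SourcePos shapes are stand-ins rather than the real reader types:

final case class SourcePos(line: Int, column: Int)
sealed trait Token { def pos: SourcePos }
final case class LParen(pos: SourcePos) extends Token
final case class RParen(pos: SourcePos) extends Token

// One helper centralizes "match this token shape and surface its position"
def posExtract[T <: Token](pf: PartialFunction[Token, T]): Token => Option[SourcePos] =
  tok => pf.lift(tok).map(_.pos)

// Individual extractors become one-liners over the helper
val lParenPos: Token => Option[SourcePos] = posExtract { case t: LParen => t }
val rParenPos: Token => Option[SourcePos] = posExtract { case t: RParen => t }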

Pattern Matching Block Termination Fix

  • Issue: Inconsistent handling of the }\n pattern between V1 and V2 parsers in pattern matching
  • Improvement:
    • Added context tracking to LexerState
    • Implemented context-aware block termination checks
    • Enabled context for all blocks uniformly
  • Benefits:
    • Consistent behavior between V1 and V2 parsers
    • Maintained uniform symbol treatment principle
    • Fixed pattern matching tests
  • Implementation:
    • Added newLineAfterBlockMeansEnds flag to LexerState
    • Created withNewLineTermination helper method
    • Updated checkForRBraceNewlinePattern to consider context
    • Enabled context for all blocks in parseBlock

Previously Completed Improvements

  1. Number Parsing Refactoring

    • Extracted specialized methods for different number formats
    • Improved error handling for number parsing
  2. Enhanced Escape Character Handling

    • Extended support for escape sequences (Unicode, octal, hex)
    • Improved error reporting for invalid escape sequences
  3. Basic Operator Parsing Clean-Up

    • Extracted comment parsing to a separate method
    • Improved structure of parseOperator() method
  4. Identifier Parsing Correctness

    • Aligned with IdentifierRules for consistent character validation
    • Improved handling of Unicode characters and emoji
  5. SourcePos Creation Efficiency

    • Implemented caching for UTF-16 offset calculations
    • Reduced tokenization time for complex expressions

Ultra-Compact Tokenizer Implementation

Tokenizer Size Reduction

  • Issue: The Tokenizer.scala implementation was longer than necessary
  • Improvement:
    • Dramatically reduced code size (>25% reduction)
    • Consolidated similar methods
    • Simplified UTF-16 position tracking
    • Enhanced token generation pipeline
  • Benefits:
    • More maintainable codebase
    • Better readability
    • Easier to extend with new token types
    • More focused implementation
  • Implementation:
    • Created lookup tables for token types
    • Used more functional patterns for token creation
    • Streamlined number parsing logic
    • Improved string processing with boundary control
    • Consolidated position tracking logic

Functional Style Enhancement

  • Issue: Imperative style code was harder to maintain
  • Improvement:
    • Added more functional approach to tokenization
    • Implemented lazy stream-based token generation
    • Created more concise token construction helpers
    • Improved pattern matching throughout the codebase
  • Benefits:
    • Code more closely aligned with Scala best practices
    • Better composability
    • More declarative implementation
    • Easier to test individual components
  • Implementation:
    • Used LazyList for token stream generation
    • Implemented functional helpers for token creation
    • Enhanced pattern matching for token type dispatch
    • Added more concise function definitions

Unicode and Emoji Support Enhancements

  • Issue: Complex Unicode handling with surrogate pairs needed improvement
  • Improvement:
    • Enhanced support for supplementary characters
    • Improved UTF-16 position mapping
    • Streamlined surrogate pair handling
    • Added more comprehensive emoji support
  • Benefits:
    • Better internationalization support
    • Correct handling of modern emoji
    • Proper source position mapping for all character types
    • More robust parsing for all Unicode scripts
  • Implementation:
    • Used Java’s Character API for proper codepoint handling
    • Added special cases for supplementary characters
    • Improved UTF-16 position calculation
    • Enhanced identifier parsing with Unicode support
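
The surrogate-pair handling rests on standard Java Character/String APIs; a minimal sketch of codepoint-wise iteration (the real tokenizer logic is more involved):

// Walk a string one Unicode codepoint at a time. Supplementary characters
// (such as most emoji) occupy two UTF-16 units, which is why the index
// advances by Character.charCount(cp) rather than by 1.
def codePoints(s: String): Vector[Int] = {
  val out = Vector.newBuilder[Int]
  var i = 0
  while (i < s.length) {
    val cp = s.codePointAt(i) // combines a surrogate pair into one codepoint
    out += cp
    i += Character.charCount(cp) // 2 for supplementary characters, else 1
  }
  out.result()
}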

Performance Optimization

  • Issue: Tokenizer performance could be improved
  • Improvement:
    • Reduced memory allocations
    • Simplified position tracking
    • Optimized string building
    • Enhanced token stream generation
  • Benefits:
    • Faster tokenization
    • Lower memory usage
    • Better scalability for large files
    • More efficient pipeline
  • Implementation:
    • Used StringBuilder for efficient string concatenation
    • Implemented smarter UTF-16 position tracking
    • Optimized character and token processing
    • Enhanced error detection and handling

Updated Feature Coverage

The V2 parser now has complete implementations for:

  • Basic literals (integers, floating-point numbers)
  • Function calls (including nested and with type parameters)
  • Pattern matching (with correct block termination)
  • Operator sequences (with uniform treatment)
  • Generic type parameters (including complex and nested generics)
  • Block arguments
  • Lists with mixed types
  • Comment preservation (leading and trailing)

Next Steps

Focus is now shifting to:

  1. Object expressions implementation
  2. Source maps support
  3. Error recovery mechanisms
  4. Migrating remaining V1-only tests

2025-03-25

Union Type Subtyping Implementation Details

Following the comprehensive type system improvements from March 23rd, here are the detailed implementation specifics for union type subtyping and cell coverage mechanisms:

1. Enhanced Union Subtyping Implementation

The implementation now fully supports all three union subtyping scenarios with proper propagator connections:

  1. Union-to-Union Subtyping (A|B <: C|D):

    // For each type in RHS union, at least one type in LHS union must accept it
    for (t1 <- types1; t2 <- types2) {
      if (tryUnify(t1, t2)) {
        val t1Cell = toId(t1)
        val t2Cell = toId(t2)
        state.addPropagator(Unify(t1Cell, t2Cell, cause))
      }
    }
    
  2. Specific-to-Union Subtyping (A <: B|C):

    // Delegate to the connectSpecificAndUnion helper method
    connectSpecificAndUnion(
      specificId = specificCellId,
      specificType = specificType,
      unionId = unionCellId,
      unionTypes = unionTypes,
      cause = cause
    )
    
  3. Union-to-Specific Subtyping (A|B <: C):

    // A union can be used where a specific type is expected if all components match it
    unionToSpecific(union, unionTypes, specificType, cause)
    

2. Advanced Cell Coverage Mechanisms

To solve the “cells not covered by any propagator” error, several key mechanisms have been implemented:

  1. EnsureCellCoverage Propagator:

    case class EnsureCellCoverage(
        cell: CellId[Term],
        cause: Expr
    ) extends Propagator[Tyck] {
      override val readingCells = Set(cell.asInstanceOf[CIdOf[CellRW[?,?]]])
      override val writingCells = Set.empty
      override val defaultingCells = Set(cell.asInstanceOf[CIdOf[CellRW[?,?]]])
    
      // Always succeeds - just ensures the cell is covered
      override def run(using StateAbility[Tyck], Tyck): Boolean = true
    
      override def naiveZonk(needed: Vector[CellIdAny])
          (using StateAbility[Tyck], Tyck): ZonkResult = ZonkResult.Done
    }
    
  2. Helper Methods for Union Component Coverage:

    // Ensures coverage for all cells within a union type
    private def ensureUnionComponentsCoverage(
        unionTypes: NonEmptyVector[Term],
        cause: Expr
    )(using StateAbility[Tyck], Context, Tyck): Vector[CellId[Term]] = {
      unionTypes.map(typ => {
        val cellId = toId(typ)
        ensureCellCoverage(cellId, cause)
        cellId
      }).toVector
    }
    
  3. Connection of Union Types to Components:

    // Creates UnionOf propagator to connect union cell to its components
    private def connectUnionToComponents(
        unionCell: CellId[Term],
        componentIds: Vector[CellId[Term]],
        cause: Expr
    )(using StateAbility[Tyck], Context, Tyck): Unit = {
      state.addPropagator(UnionOf(unionCell, componentIds, cause))
    }
    
  4. Special Handling for Function Calls:

    // Recursively processes function calls to ensure cell coverage
    private def processFunctionCall(term: Term, cause: Expr)(using
        StateAbility[Tyck], Context, Tyck): Unit = {
      val cellId = toId(term)
      ensureCellCoverage(cellId, cause)
    
      term match {
        case fcall: FCallTerm => {
          processFunctionCall(fcall.f, cause)
          fcall.args.foreach(arg => processFunctionCall(arg, cause))
        }
        case Union(types, _) => types.foreach(t => processFunctionCall(t, cause))
        case Intersection(types, _) => types.foreach(t => processFunctionCall(t, cause))
        case _ => // No further processing for simple terms
      }
    }
    

3. Improvements to Type Compatibility Checking

The implementation now includes enhanced compatibility checks for working with union types:

  1. Union Compatibility Methods:

    • Added specialized methods for checking compatibility between union and non-union types
    • Implemented bidirectional compatibility checking for union types
    • Enhanced subtyping relationships with proper union type handling
  2. Special Handling for Literal Types with Unions:

    • Added special case handling for integer literals with union types
    • Implemented type compatibility checking for literals against union types
    • Added support for both direct and indirect type matching

These improvements ensure that union types work seamlessly with the rest of the type system, including with literals, function types, and in both widening and narrowing contexts.

4. Type-Level Function Application Enhancements

The implementation includes improvements to how type-level function applications are handled:

  1. Reduction for Type Checking:

    • Added specialized reduction mode for type-level expressions
    • Implemented proper context handling for type equality checking
    • Enhanced type comparison with reduction-based equality
  2. Term Reference Resolution:

    • Added recursive reference resolution for deeper type analysis
    • Implemented proper handling of nested references in types
    • Enhanced type comparison with fully resolved references

All these implementations follow the design principles outlined in the type improvement proposal, ensuring that:

  • Original terms are preserved in elaborated results
  • Reduction is only used for type checking, not for elaboration
  • Union types behave correctly in all subtyping scenarios
  • Cell coverage is guaranteed to prevent “cells not covered by propagator” errors

2025-03-25

Enhanced Trait Implementation Details

Building on the basic trait implementation completed on March 19, several enhancements have been added to the trait system:

1. Advanced Trait Context Tracking

To properly handle trait fields and method declarations, the Context system now includes special tracking for the current declaration context:

case class Context(
    // Existing fields
    currentProcessingType: String = "" // Can be "trait", "record", etc.
) {
    // Helper method to create a new context with a specific processing type
    def withProcessingType(typeStr: String): Context = copy(currentProcessingType = typeStr)

    // Rest of the implementation
}

This allows the elaborator to recognize when it’s processing fields inside a trait definition versus a record definition, which affects how those fields are processed.

2. Trait Statement Elaboration

The processTraitStmt method now fully handles trait declarations with proper context management:

def processTraitStmt(
    expr: TraitStmt,
    ctx: Context,
    declarationsMap: Map[Expr, DeclarationInfo],
    effects: CIdOf[EffectsCell]
)(using
    parameter: SemanticCollector,
    ck: Tyck,
    state: StateAbility[Tyck]
): (Seq[StmtTerm], Context) = {
    val traitInfo = declarationsMap(expr).asInstanceOf[TraitDeclaration]

    // Process extends clause if present
    val elaboratedExtendsClause = expr.extendsClause.map { case clause @ ExtendsClause(superTypes, _) =>
        superTypes.headOption
            .map {
                case Identifier(traitName, _) =>
                    ctx.getTypeDefinition(traitName) match {
                        case Some(traitDef: TraitStmtTerm) =>
                            TraitTypeTerm(traitDef, convertMeta(clause.meta))
                        case _ =>
                            ck.reporter.report(NotATrait(superTypes.head))
                            ErrorTerm(NotATrait(superTypes.head), convertMeta(clause.meta))
                    }
                // Other cases
            }
            // More processing
    }

    // Elaborate the body within a trait-specific context
    val elaboratedBody = expr.body.map { body =>
        elabBlock(body, newTypeTerm, effects)(using ctx.withProcessingType("trait"), parameter, ck, state)
    }

    // Create and return the TraitStmtTerm
    val traitStmtTerm = TraitStmtTerm(
        name = traitInfo.name,
        uniqId = traitInfo.uniqId,
        extendsClause = elaboratedExtendsClause,
        body = elaboratedBody,
        meta = convertMeta(expr.meta)
    )

    (Seq(traitStmtTerm), ctx.addTypeDefinition(traitStmtTerm))
}

The key improvement is using ctx.withProcessingType("trait") to indicate when we’re elaborating a trait body.

3. Record-Trait Subtyping Verification

The trait implementation now includes proper record-trait subtyping relationship verification:

private def checkTraitImplementation(
    recordDef: RecordStmtTerm,
    traitDef: TraitStmtTerm,
    cause: Expr
)(using
    localCtx: Context,
    ck: Tyck,
    state: StateAbility[Tyck]
): Boolean = {
    // Check for a direct extension relationship
    val hasExtendsClause = recordDef.extendsClause.exists {
        case traitCall: TraitTypeTerm =>
            traitCall.traitDef.uniqId == traitDef.uniqId
        case _ => false
    }

    if (!hasExtendsClause) {
        // Report error if record doesn't explicitly extend the trait
        ck.reporter.report(NotImplementingTrait(recordDef.name, traitDef.name, cause))
        false
    } else {
        true
    }
}

4. Trait-Trait Extension Checking

Similarly, trait-trait inheritance is now properly verified:

private def checkTraitExtends(
    childTraitDef: TraitStmtTerm,
    parentTraitDef: TraitStmtTerm,
    cause: Expr
)(using
     Context,
     Tyck,
     StateAbility[Tyck]
): Boolean = {
    // Check if they're the same trait (reflexivity)
    if (childTraitDef.uniqId == parentTraitDef.uniqId) {
        return true
    }

    // Check direct parent
    val directParent = childTraitDef.extendsClause match {
        case Some(traitCall: TraitTypeTerm) =>
            traitCall.traitDef.uniqId == parentTraitDef.uniqId
        case _ => false
    }

    directParent
}

5. Special Handling in Type Unification

The trait subtyping rules are now properly integrated into the type unification system:

// In unify method
(lhsResolved, rhsResolved) match {
    // Record implementing trait (structural subtyping)
    case (RecordTypeTerm(recordDef, _, _), TraitTypeTerm(traitDef, _)) =>
        checkTraitImplementation(recordDef, traitDef, cause); ()

    // Trait extending trait (structural subtyping)
    case (TraitTypeTerm(childTraitDef, _), TraitTypeTerm(parentTraitDef, _)) =>
        checkTraitExtends(childTraitDef, parentTraitDef, cause); ()

    // Other cases
}

These implementations provide a solid foundation for trait-based programming in Chester, with support for basic field requirements and type inheritance. Future work will focus on more advanced trait features like method implementations, default values, and multiple inheritance.

2025-04-01

Generalized Block Termination Implementation for V2 Parser

Problem Analysis

  • Issue: V2 parser needed a general solution for the }\n pattern without special-casing keywords
  • Core Principle Violation: Previous implementation relied on checking if expressions were Blocks
  • Design Requirement: Need context-free parsing with uniform token pattern detection

Implementation Approach

  • Solution Strategy: Implemented a generalized }\n pattern detection mechanism
  • Key Changes:
    • Modified checkForRBraceNewlinePattern to check previous token instead of expression type
    • Added support for EOF as an implicit newline terminator
    • Renamed separateCaseStatements to processMixedStatements
    • Made statement splitting logic more general while preserving behavior

Technical Implementation

  • Token Pattern Detection:
    • Check if previous token was a closing brace (RBrace)
    • Verify if current token is whitespace containing a newline or EOF
    • Apply termination rules based purely on syntax, not semantics
  • Statement Processing:
    • Preserve existing multiple statements without changes
    • Split single OpSeq statements when they contain multiple logical statements
    • Detect natural statement boundaries at certain identifiers like “case”
    • Maintain consistent behavior with V1 parser
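
A compact sketch of the token pattern detection above, with the lexer state reduced to booleans for illustration (the real state in LexerV2.scala tracks tokens directly):

final case class LexerStateSketch(
    newLineAfterBlockMeansEnds: Boolean, // context flag enabled for all blocks
    previousWasRBrace: Boolean,          // previous token closed a block
    currentIsEOF: Boolean,               // end of input acts as implicit newline
    currentWhitespaceHasNewline: Boolean // whitespace token containing '\n'
)

def checkForRBraceNewlinePattern(s: LexerStateSketch): Boolean =
  s.newLineAfterBlockMeansEnds &&
    s.previousWasRBrace &&
    (s.currentIsEOF || s.currentWhitespaceHasNewline)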

Benefits

  • Alignment with Core Principles:
    • Maintains strict context-free parsing
    • Treats all blocks uniformly
    • Applies consistent rules for expression termination
    • Better separation between syntax and semantics
  • Technical Improvements:
    • More maintainable parser with fewer special cases
    • Simplified codebase with clearer termination rules
    • Better alignment between V1 and V2 parsers
    • All relevant tests now pass with identical behavior

Files Modified

  • reader/src/main/scala/chester/readerv2/LexerV2.scala

This implementation properly adheres to Chester’s core parsing principles by treating all }\n patterns uniformly, regardless of their context.

2025-04-02

Fixed Outdated Pattern in Reducer.scala

During a comprehensive code review to align with Chester’s term architecture principles:

  • Issue Identified: Found outdated pattern in reduceStandard method in Reducer.scala that was using explicit type casting with pattern matching on StmtTerm
  • Fix Applied: Updated the code to maintain type safety while avoiding unnecessary pattern matching
  • Before:
    val reducedStatements = statements.map { case stmt: StmtTerm =>
      r.reduce(stmt).asInstanceOf[StmtTerm]
    }
    
  • After:
    val reducedStatements = statements.map(stmt =>
      // Keep the type information while reducing
      r.reduce(stmt).asInstanceOf[StmtTerm]
    )
    
  • Alignment with Principles: The solution balances Chester’s design principles with type safety requirements by:
    • Avoiding pattern matching with *C and *T suffix traits
    • Keeping necessary type casts for type safety
    • Using more direct and readable code
  • Benefits:
    • More consistent codebase that follows established design principles
    • Type-safe implementation
    • Clearer intent with inline comments
    • Better alignment with the unified Term definition approach

2025-04-27

LexerV2 Refactoring and Plan Cleanup

  • LexerV2 Code Refinements:

    • Introduced peek method in LexerState to look ahead without consuming tokens, simplifying logic.
    • Refactored parseAtom to use peek for cleaner handling of empty objects {} vs block/object start.
    • Introduced withModifiedState helper in LexerState to encapsulate temporary state changes (like newLineAfterBlockMeansEnds), simplifying parseBlock.
    • Minor cleanup in expectedError using f-interpolators.
  • Parser Plan Update:

    • Marked Phase 2 (Advanced Features) as fully complete in parser-plan.md.
    • Marked “V1/V2 Semantic Consistency”, “Object Expressions”, and “Block Termination” priorities as complete.
    • Updated parser-implementation.md to reflect completed status of these features.
    • Consolidated completed tasks from planning/implementation docs into this devlog entry.
    • Current focus remains on Phase 3: Error Handling (Recovery, Messages, Debug Info), Source Maps, Test Migration, and Performance Optimization.
  • Files Modified:

    • reader/shared/src/main/scala/chester/readerv2/LexerV2.scala
    • docs/src/dev/parser-plan.md
    • docs/src/dev/parser-implementation.md
    • docs/src/dev/devlog.md

Effect System Implementation Plan

NOTE THAT THIS DOCUMENT IS OUTDATED AS RELEVANT CODE IS BEING REWRITTEN

Goal

Enable type checking for the simplest effects example in tests/tyck/effects-basic.chester.todo by implementing the necessary compiler components for the built-in effects system.

Current Status

  • We have a basic effects syntax design specified in docs/src/dev/effects-system.md
  • A simple test file tests/tyck/effects-basic.chester.todo exists but doesn’t type check yet
  • Some of the infrastructure for effects already exists in the codebase

Implementation Tasks

1. Parser Enhancements

  • Add parsing support for the / effect annotation syntax in function types
  • Parse built-in effect names (IO, State, etc.) as identifiers
  • Parse effect combinations using the & operator
  • Generate AST nodes that correctly represent effects in function signatures
// Example parser implementation:
def functionDef(): FunctionDef = {
  // Parse function signature...
  
  // Check for effect annotation
  val effects = if (consume("/")) {
    parseEffects()
  } else {
    Effects.Empty
  }
  
  FunctionDef(name, params, returnType, effects, body)
}

2. Built-in Effects Registry

  • Define a registry for built-in effect types
  • Implement recognition of built-in effects like IO and State
  • Add a mechanism to validate that effect identifiers refer to built-in effects
// Example registry:
object BuiltinEffects {
  val IO = "IO"
  val State = "State"
  val Exception = "Exception"
  val NonTermination = "NonTermination"
  
  val all = Set(IO, State, Exception, NonTermination)
  
  def isBuiltinEffect(name: String): Boolean = all.contains(name)
}

3. Type Representation

  • Enhance the Effects class to handle built-in effects
  • Implement function type representation that includes effects
  • Support effect combinations (union of effects)
// Example Effects class:
case class Effects(effectMap: Map[String, Term]) {
  def combine(other: Effects): Effects = {
    Effects(this.effectMap ++ other.effectMap)
  }
  
  def contains(effect: String): Boolean = effectMap.contains(effect)
}

4. Type Checking

  • Implement effect checking during function definition
  • Verify that function bodies only use effects that are declared
  • Track effect requirements in the function call chain
  • Ensure effects are properly propagated from callee to caller
// Example type checker for function calls:
def checkFunctionCall(call: FunctionCall, context: Context): Type = {
  val funcType = checkExpr(call.function, context)
  
  // Check argument types...
  
  // Extract function effects
  val functionEffects = extractEffects(funcType)
  
  // Ensure the current context allows these effects
  checkEffectsAllowed(functionEffects, context.allowedEffects)
  
  // Return function's return type with effects
  ReturnType(funcType, functionEffects)
}

5. Effect Propagation

  • Implement a mechanism to accumulate effects from function calls
  • Ensure effects are propagated up the call chain
  • Handle effect combinations correctly
// Example propagation:
def propagateEffects(callerEffects: Effects, calleeEffects: Effects): Effects = {
  // Combine effects from caller and callee
  callerEffects.combine(calleeEffects)
}

6. Error Reporting

  • Add clear error messages for effect-related type errors
  • Report when a function uses unauthorized effects
  • Report when effect types are invalid or unknown
// Example error generation:
def unauthorizedEffectError(effect: String, context: Context): Error = {
  Error(s"Function uses effect '$effect' but it is not declared in the function signature", 
        context.location)
}

Implementation Order

  1. First Phase: Basic Type Representation

    • Enhance the AST and type system to represent effects
    • Implement the built-in effects registry
    • Add effect annotations to function types
  2. Second Phase: Parser Integration

    • Update the parser to handle effect annotations
    • Parse effect combinations
    • Generate correct AST nodes with effects
  3. Third Phase: Type Checking

    • Implement basic effect checking for function bodies
    • Add effect validation in function calls
    • Ensure effects are correctly propagated
  4. Fourth Phase: Error Handling

    • Add descriptive error messages
    • Implement suggestions for fixing effect-related errors
  5. Fifth Phase: Testing

    • Convert the .todo test to a regular test file
    • Add additional tests for more complex effect scenarios
    • Verify compatibility with existing code

Timeline

  • Days 1-2: First Phase implementation
  • Days 3-4: Second Phase implementation
  • Days 5-7: Third Phase implementation
  • Days 8-9: Fourth Phase implementation
  • Day 10: Testing and refinement

Success Criteria

  • The tests/tyck/effects-basic.chester.todo file can be renamed to tests/tyck/effects-basic.chester and passes type checking
  • The type checker correctly enforces effect constraints
  • Effect propagation works as expected in nested function calls
  • Pure functions don’t require effect annotations
  • Clear error messages are provided for effect-related type errors

Effect System Design

NOTE THAT THIS DOCUMENT IS OUTDATED AS RELEVANT CODE IS BEING REWRITTEN

Overview

The Chester language includes a built-in effect system that enables tracking and controlling side effects. This document outlines the design and implementation of this system.

Core Concepts

  • Effect: A built-in type in the language that represents a capability to perform a specific kind of operation.
  • Effect Values: Built-in values of type Effect (e.g., IO, State, Exception).
  • Effect Requirements: Functions declare which effects they may use with the / Effect syntax.
  • Effect Propagation: Effect requirements automatically propagate up the call chain.
  • Pure Functions: Functions without effect annotations are pure (no side effects).

Syntax

Function Declaration with Effects

def functionName(args): ReturnType / EffectType = body

Examples:

// Function with IO effect
def print(message: String): Unit / IO = ()

// Function with multiple effects
def readAndWrite(): String / (IO & State) = { ... }

Effect Propagation

When a function calls another function with effects, those effects must be declared in the caller’s signature:

def hello(): Unit / IO = {
  print("Hello")  // print has IO effect, so hello needs IO too
}

Built-in Effects

The language provides several built-in effects:

  • IO: Input/output operations (file I/O, console I/O, etc.)
  • State: Mutable state operations
  • Exception: Operations that may throw exceptions
  • NonTermination: Operations that may not terminate

Implementation Notes

The effect system is implemented through:

  1. Type Checking: Functions are verified to declare all effects they use.
  2. Effect Propagation: Functions automatically require any effects used by functions they call.
  3. Effect Handling: The runtime system ensures effects are properly handled.

Future Extensions

Potential future extensions include:

  • User-defined effects
  • Effect polymorphism
  • Effect inference
  • Effect handlers for controlling effect execution

Example Usage

// Function with an IO effect requirement
def print(message: String): Unit / IO = ()

// This function automatically inherits the IO effect
def hello(): Unit / IO = {
  print("Hello")
}

// Pure function with no effects
def pure(): Integer = 123

Chester Elaboration System

Introduction

Chester features two elaboration systems for type checking:

  1. Original Type Checking Logic (chester.tyck): The constraint-based type checking system using propagator networks
  2. New Elaboration System (chester.elab): A modernized, more modular approach to elaboration with a dedicated solver

This document explains both systems, their differences, and provides guidance on how to work with the new elaboration system.

Architectural Comparison

Original System (chester.tyck)

The original type checking system is built around propagator networks with cells and constraints:

  • Uses TyckPropagator for constraint propagation
  • Has a monolithic Elaborater class with specialized components (ElaboraterBlock, ElaboraterFunction, etc.)
  • Relies on CellId references for tracking types
  • Uses a stateful approach for tracking and resolving constraints

New System (chester.elab)

The new elaboration system takes a fundamentally different approach to type checking:

  • Constraint-based Solver: Uses a dedicated solver architecture for tracking and resolving typing constraints
  • Modular Handler System: Each elaboration concern is handled by a dedicated, composable handler
  • Cell-based Representation: Uses cells to represent type variables and constraints in a more structured way
  • More Precise Types: Infers more specific types (e.g., IntTerm instead of IntegerTerm)
  • Handler-driven Architecture: Components like BlockElabHandler and ListOfHandler encapsulate specific elaboration logic

Key Components of the New System

1. Core Interfaces

Elab Trait (Elab.scala) serves as the primary interface for elaboration operations. It provides three key methods:

  • elab: Elaborates an expression against a given type, returning a cell containing the elaborated term
  • infer: Infers both the term and type for an expression, returning them as a pair
  • inferType: Specializes in type-checking expressions that should be types themselves

All these methods require appropriate context, effects tracking, and solver operations to function.

DefaultElab Implementation provides the core elaboration logic for different expression types. It uses pattern matching to handle different kinds of expressions, dispatching each to an appropriate constraint handler:

  • Integer literals are handled by the IntegerLit constraint
  • String literals are handled by the StringLit constraint
  • List expressions are handled by the ListOf constraint
  • Blocks are handled by the BlockElab constraint

For each expression type, a corresponding constraint is created and passed to the solver through SolverOps.
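
A hedged, self-contained sketch of this dispatch style; all types below are illustrative stand-ins (with a plain solve function standing in for SolverOps), not code copied from Elab.scala:

sealed trait Expr
final case class IntegerLiteral(value: BigInt) extends Expr
final case class StringLiteral(value: String) extends Expr
final case class ListExpr(items: Vector[Expr]) extends Expr
final case class BlockExpr(body: Vector[Expr]) extends Expr

final class CellId // stand-in for a solver cell reference

// Constraints name the work; handlers registered with the solver perform it
sealed trait Constraint
final case class IntegerLit(e: IntegerLiteral, ty: CellId) extends Constraint
final case class StringLit(e: StringLiteral, ty: CellId) extends Constraint
final case class ListOf(e: ListExpr, ty: CellId) extends Constraint
final case class BlockElab(e: BlockExpr, ty: CellId) extends Constraint

final class DefaultElabSketch(solve: Constraint => CellId) {
  // Pattern-match dispatch: each expression form becomes a constraint that
  // the solver resolves via its registered handler.
  def elab(expr: Expr, ty: CellId): CellId = expr match {
    case i: IntegerLiteral => solve(IntegerLit(i, ty))
    case s: StringLiteral  => solve(StringLit(s, ty))
    case l: ListExpr       => solve(ListOf(l, ty))
    case b: BlockExpr      => solve(BlockElab(b, ty))
  }
}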

2. Entry Point

DefaultElaborator (Elaborator.scala) is the main entry point for using the new elaboration system. It’s configured with:

  • A default Elab implementation (DefaultElabImpl)
  • A SolverFactory (ConcurrentSolverFactory)
  • A handler configuration (DefaultSolverConf)

This setup provides the inferPure() method that is used by the REPL and tests as the primary entry point for type checking expressions.

3. Constraint Handlers

The system uses a handler-based architecture where each type of constraint has a dedicated handler:

  • Literal Handlers: IntegerLitHandler, StringLitHandler, SymbolLitHandler
  • Block Handler: BlockElabHandler for elaborating code blocks
  • List Handler: ListOfHandler for list expressions
  • Unification Handlers: UnifyHandler, UnifyMultipleHandler for type compatibility
  • Type Handlers: IsTypeHandler for type checking, SimplifyUnionHandler for union types

Each handler implements the Handler trait with a run method that processes a specific type of constraint.

4. Operations Interface

ElabOps (ElabOps.scala) provides operations for error reporting and semantic collection:

case class ElabOps(reporter: Reporter[TyckProblem], collector: SemanticCollector) extends Reporter[TyckProblem] {
  // Delegated reporter methods
  override def report(problem: TyckProblem): Unit = reporter.report(problem)
}

Current Implementation Status

Features Supported

The new elaboration system currently supports:

  • Basic literals (integers, strings, symbols)
  • Lists (including heterogeneous and nested lists with correct union typing)
  • Code blocks with statements and expressions
  • Type unification and compatibility checking
  • Pure expressions (without effects)

REPL Integration

The REPL engine now uses the new elaboration system by default, as seen in REPLEngine.scala.

The REPL includes a toggle (useNewElab), enabled by default, for switching between the old and new elaboration systems. When the toggle is on, the typeCheck method creates a reporter and ElabOps, then calls DefaultElaborator.inferPure() to type check expressions. The results are wrapped in a TyckResult0 object to maintain compatibility with the old system’s result format, so the two systems can be swapped seamlessly.

Test Coverage

Test coverage for the new system is implemented in ElabLiteralAndListTest.scala, which verifies:

  • Integer literals: Correctly elaborated to IntTerm with IntType
  • Heterogeneous lists: Elaborated to ListTerm with a union type for elements
  • Empty lists: Properly typed as ListTerm[NothingType]
  • Nested lists: Correctly handle nested list structures and their type relationships

These tests demonstrate the system’s ability to:

  1. Infer precise types (using IntTerm instead of the more general IntegerTerm)
  2. Handle heterogeneity through proper union type creation
  3. Maintain correct type relationships in nested structures

Using the New Elaboration System

Basic Usage

To type check an expression with the new elaboration system, you’ll need to:

  1. Parse an expression using ChesterReaderV2 or another parser
  2. Create a reporter and ElabOps for error collection
  3. Call DefaultElaborator.inferPure() to obtain a Judge containing the elaborated term and type
  4. Check for errors and access the elaborated term and inferred type

This process will properly handle parsing and type checking of various expressions, including heterogeneous lists that will be typed with appropriate union types.
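
A hedged usage sketch following the four steps above. DefaultElaborator, ElabOps, and ChesterReaderV2 come from this document; the helper names and exact signatures (VectorReporter, NoopSemanticCollector, parseExpr, getReports, the Judge fields) are assumptions for illustration only:

// Step 1: parse (entry-point name assumed)
val expr = ChesterReaderV2.parseExpr("[1, \"two\", 3]")
// Step 2: error collection (reporter/collector implementations assumed)
val reporter = new VectorReporter[TyckProblem]()
val ops = ElabOps(reporter, NoopSemanticCollector)
// Step 3: elaborate
val judge = DefaultElaborator.inferPure(expr)(using ops)
// Step 4: check errors, then read the term and inferred type (fields assumed)
if (reporter.getReports.isEmpty)
  println(s"term = ${judge.wellTyped}, type = ${judge.ty}")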

Extending the System with New Expression Types

To add support for a new expression type, you need to follow these steps:

1. Define a Constraint Kind

Create a Kind object in the chester.elab package that defines the type of your constraint. This serves as a unique identifier for your constraint type in the system.

2. Create a Constraint Class

Define a constraint class for your expression type that takes:

  • Your expression type as a parameter
  • The target type cell
  • Required implicit parameters (effects, elab, ops, ctx)

The constraint class should extend the Constraint abstract class with your Kind.

3. Implement a Handler

Create a handler that processes your constraint by implementing the Handler trait. The handler needs to:

  • Override the run method to implement the elaboration logic
  • Create appropriate cells for your results
  • Connect your result to the target type using constraints like Unify
  • Optionally implement defaulting behavior for when type information is incomplete

4. Register the Handler

Add your handler to DefaultSolverConf.scala so the system knows how to process your constraint. This involves adding your handler to the list of handlers in the DefaultSolverConf value.

5. Update DefaultElab

Finally, extend the elab() method in DefaultElab to handle your expression type by adding a pattern matching case for your expression type that calls your constraint.

Example: Adding Support for Boolean Literals

A practical example would be adding support for boolean literals (a sketch follows this list), which would require:

  1. Defining a BooleanLit Kind to identify the boolean literal constraint
  2. Creating a BooleanLit constraint class that takes a BooleanLiteral expression and target type
  3. Implementing a BooleanLitHandler that:
    • Creates a BooleanTerm with the appropriate value
    • Adds a cell containing that term
    • Creates a BooleanType cell
    • Unifies the target type with the boolean type
    • Connects the boolean term to the output cell
  4. Registering the handler in DefaultSolverConf
  5. Adding a case for BooleanLiteral expressions in the DefaultElab.elab method
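
Putting these five steps together, a hedged sketch in Scala might look as follows. The constructor shapes of Kind, Constraint, and Handler, the helpers newCell and connect, and the using-parameter lists are assumptions based on the descriptions above, not the actual signatures.

// Step 1: a Kind object identifying the constraint (constructor form assumed)
object BooleanLitKind extends Kind("BooleanLit")

// Step 2: the constraint carries the expression and the target type cell
case class BooleanLit(expr: BooleanLiteral, ty: CellId[Term])(using
    Effects, Elab, ElabOps, Context
) extends Constraint(BooleanLitKind)

// Step 3: the handler elaborates the literal and connects it to the target type
object BooleanLitHandler extends Handler[BooleanLit](BooleanLitKind) {
  override def run(c: BooleanLit): Unit = {
    val term = BooleanTerm(c.expr.value) // the elaborated term
    val termCell = newCell(term)         // hypothetical helper: cell holding the term
    val boolTy = newCell(BooleanType)    // hypothetical helper: cell holding BooleanType
    Unify(c.ty, boolTy)                  // the target type must accept Boolean
    connect(c, termCell)                 // hypothetical: link the term to the output cell
  }
}

// Step 4: add BooleanLitHandler to the handler list in DefaultSolverConf
// Step 5: add a case to DefaultElab.elab, e.g.
//   case b: BooleanLiteral => BooleanLit(b, ty)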

Transition Guidelines

While both systems currently coexist, the development focus is transitioning to the new chester.elab system. Follow these guidelines when working with the codebase:

For New Development

  • Use DefaultElaborator.inferPure() as the primary entry point for new typechecking code
  • Implement new features and extensions in the chester.elab package
  • Add handlers for new expression types following the pattern shown above
  • Write tests specifically against the new elaboration system

For Maintenance of Existing Code

  • When fixing bugs in the existing chester.tyck system, consider if the fix should also be applied to chester.elab
  • Document cross-references between equivalent functionality in both systems
  • Gradually migrate test cases to support the new system

Testing Approach

  • Use ElabLiteralAndListTest.scala as a reference for test structure and pattern
  • Create test cases that work with both systems to ensure compatibility
  • The REPL’s toggle (useNewElab) allows easy switching between systems for comparison

Best Practices

1. Preserve Original Terms

Consistent with the existing guidelines for the original elaboration system:

  • The elaboration system should preserve the original structure of terms
  • Avoid reduction during elaboration
  • Keep source code structure intact in elaborated results
  • Only use reduction internally during type checking when absolutely necessary

2. Error Reporting

  • Use the ElabOps reporter for consistent error messages
  • Provide detailed type information in error messages
  • Match the error format of the original system for consistency
  • Include source position information when available
  • Use internationalized messages (with t"" string templates)

3. Testing

  • Test both success and failure cases
  • Verify the structure of elaborated terms
  • Check inferred types carefully, especially for complex cases like union types
  • Test with heterogeneous data to verify union type handling
  • Ensure tests cover edge cases like empty collections and nested structures

Current Limitations and Future Work

The new elaboration system is still under development and doesn’t yet support the full range of Chester language features. Current limitations include:

  • Limited support for complex expressions and statements
  • Incomplete handling of advanced type features like traits and interfaces
  • Partial support for effects system
  • Incomplete support for pattern matching

Future development should focus on:

  1. Extending the system to support all Chester language features
  2. Improving error messages and diagnostics
  3. Enhancing performance and error recovery
  4. Eventually replacing the original system entirely

IntelliJ IDEA Configuration

  • Use the nightly update channel of the Scala plugin
  • Use compiler-based error highlighting (“Compiler”) for Scala 2
  • In the sbt settings, enable “Create separate modules for main and test” and “Use separate compiler output paths”

JavaScript to Python Conversion Process

Overview

This document outlines the process for converting JavaScript code generated from Scala.js into Python-accessible modules. This approach allows Chester functionality written in Scala to be available within Python environments.

Process Flow

The conversion process follows these steps:

  1. Compile Scala.js to JavaScript - Run the jsForPython/fastLinkJS sbt task to compile Scala code to JavaScript
  2. Bundle with Rollup - Use Rollup to combine the generated JavaScript with any needed glue code into a single module
  3. Convert to Python - Use js2py to make the JavaScript functionality accessible from Python

Step-by-Step Implementation

1. Scala.js Compilation

The jsForPython project in build.sbt is configured to compile Scala code to ECMAScript modules with the .mjs extension:

lazy val jsForPython = crossProject(JSPlatform)
  .withoutSuffixFor(JSPlatform)
  .crossType(CrossType.Full)
  .in(file("js-for-python"))
  .settings(
    commonSettings,
    name := "js-for-python"
  )
  .jsConfigure(_.dependsOn(utils.js))
  .jsSettings(
    scalaJSLinkerConfig ~= {
      // Enable ECMAScript module output.
      _.withModuleKind(ModuleKind.ESModule)
        // Use .mjs extension.
        .withOutputPatterns(OutputPatterns.fromJSFile("%s.mjs"))
    }
  )

To compile the Scala.js code, run:

sbt jsForPython/fastLinkJS

This produces JavaScript files in the js-for-python/js/target/ directory.

2. Bundling with Rollup

The rollup.config.mjs file defines how to bundle the generated JavaScript:

import resolve from '@rollup/plugin-node-resolve';
import commonjs from '@rollup/plugin-commonjs';
import babel from '@rollup/plugin-babel';

export default {
  input: 'index.js',
  output: {
    file: 'dist/bundle.js',
    format: 'cjs',
    sourcemap: true,
  },
  plugins: [
    resolve({
      preferBuiltins: false,
    }),
    commonjs(),
    babel({
      babelHelpers: 'bundled',
      presets: [
        ['@babel/preset-env', { targets: { node: "18" } }]
      ]
    })
  ],
};

To bundle the JavaScript code, create an index.js file that imports the generated .mjs files, then run:

pnpm install  # Only need to run this once to install dependencies
pnpm run build

This produces a bundled JavaScript file at dist/bundle.js.

3. JavaScript to Python with js2py

Python Environment Setup with UV

We use the uv package manager for Python dependencies due to its improved performance and reliability. All Python-related work should be done in the js-for-python directory, which contains a .python-version file specifying Python 3.11:

# Navigate to the js-for-python directory
cd js-for-python

# Create a virtual environment with the specified Python version
uv venv -p 3.11

# Activate the virtual environment
source .venv/bin/activate  # On Unix/macOS
# or
# .venv\Scripts\activate  # On Windows

# Install dependencies using requirements.txt
uv pip install -r requirements.txt

Automated Translation Process

The js2py_build.py script in the python directory automates the translation process:

# Usage
python python/js2py_build.py  # Translates the bundle.js to chester.py
python python/js2py_build.py --force  # Forces retranslation even if chester.py exists

This script performs the following steps:

  1. Verifies the bundle.js file exists
  2. Preprocesses the JavaScript to handle js2py compatibility issues
  3. Translates the JavaScript to Python using js2py.translate_file()
  4. Outputs the result to python/chester.py

Usage Guidelines

  1. Expose Scala functions using @JSExportTopLevel:

    @JSExportTopLevel("functionName")
    def functionName(param: Type): ReturnType = {
      // Implementation
    }
    
  2. Bundle only what’s necessary to minimize final bundle size.

  3. Access the Chester functionality from Python:

    # Import the Chester module
    from chester import chester
    
    # Access functions via the Chester global object
    result = chester.Chester.test()
    

Testing

The project includes two test scripts:

1. test_js2py.py

Tests basic js2py functionality with a simple JavaScript example. It:

  • Translates example.js to Python
  • Imports and uses the translated module
  • Tests various js2py features
  • Tests the Chester JS -> Python bridge

To run:

python python/test_js2py.py

2. test_chester.py

Tests the generated Chester Python module. It:

  • Checks if the chester.py module exists and generates it if needed
  • Imports the module and tests available functions
  • Reports any errors

To run:

python python/test_chester.py

Complete Testing Sequence

To fully test the JavaScript to Python conversion:

# 1. Compile Scala.js to JavaScript
sbt jsForPython/fastLinkJS

# 2. Bundle with Rollup
cd js-for-python
pnpm install  # First time only
pnpm run build

# 3. Set up Python environment
uv venv -p 3.11
source .venv/bin/activate
uv pip install -r requirements.txt

# 4. Test js2py and simple JavaScript
python python/test_js2py.py

# 5. Test Chester module
python python/test_chester.py

Project Structure

Current project structure:

js-for-python/
├── js/                          # Scala.js source files
│   └── src/main/scala/chester/
├── python/                      # Python integration
│   ├── js2py_build.py           # Script to translate bundle.js to chester.py 
│   ├── test_js2py.py            # Script for testing js2py functionality
│   ├── test_chester.py          # Script for testing Chester module
│   └── chester.py               # Generated Python module (after build)
├── dist/                        # Bundled JavaScript output
│   └── bundle.js                # Generated JavaScript bundle (after build)
├── index.js                     # Entry point for rollup
├── package.json                 # Node.js package configuration
├── rollup.config.mjs            # Rollup configuration
├── .python-version              # Specifies Python 3.11
└── requirements.txt             # Python dependencies

Troubleshooting

  • CommonJS vs ESM: Ensure module formats are compatible between Scala.js output and Rollup configuration.
  • js2py limitations: js2py has limited ECMAScript compatibility; avoid advanced JS features.
  • Bundle size: Large bundles may impact Python startup time; optimize bundle size when possible.
  • Python version compatibility: js2py works best with Python 3.8-3.11. We’re currently using Python 3.11.
  • Special character handling: js2py doesn’t support functions with special characters in their names (like $) when accessing them directly. Use getattr() instead:
    # Instead of: module.$function()
    getattr(module, "$function")()
    
  • Object serialization issues: When encountering “Cannot convert object to primitive value” errors, explicitly use string conversion:
    // Instead of: "text" + object
    "text" + String(object)
    

Chester Reader Architecture

Design of Chester’s parsers (“readers”) that transform source code into abstract syntax trees.

Overview

Chester currently has two parser implementations:

  1. ReaderV1: The original parser using FastParse combinators
  2. ReaderV2: The newer implementation using a token-based state machine

Both parsers produce semantically identical ASTs using different internal approaches.

Core Design Principles

  1. Context-Free Parsing: Uniform rules for all expressions; identifiers treated consistently
  2. Separation of Concerns: Parse syntax without imposing semantics
  3. Uniform Symbol Treatment: No special keywords - just identifiers and operators
  4. Flat Operator Sequences: Operator precedence handled later in the semantic phase
  5. Newline Significance: }\n terminates expressions in blocks
  6. Block Return Values: Last expression in a block is its return value

ReaderV1 Implementation

ReaderV1 uses the FastParse library to implement a parser combinator approach.

Key Components

  • TokenParsers: Small parsers for basic lexemes (identifiers, literals, operators)
  • Combinators: Composable functions that build larger parsers from smaller ones
  • ParsingContext: Tracks parsing state (e.g., whether currently in an operator sequence)
  • ExprMeta: Metadata handling for source positions and comments

Characteristics

  • Declarative grammar definitions
  • FastParse-based error reporting
  • Recursive descent parsing model

Implementation Structure

ReaderV1 consists of:

  1. Expression Parsers: Methods like parseExpr, parseAtom, and parseOperator form the core of the parser. They use FastParse combinators to build complex parsers from simpler ones.

  2. Context Tracking: A ParsingContext object tracks the current parsing state, including whether we’re in an operator sequence, a block, or other specialized contexts.

  3. Source Position Tracking: Dedicated methods map character positions to line/column positions for error reporting, with special handling for UTF-16 surrogate pairs.

  4. Whitespace and Comment Handling: Dedicated parsers for whitespace, line endings, and comments ensure these elements are preserved in the AST.

  5. Parser Extensions: Custom extension methods for FastParse parsers add support for metadata attachment, relaxed parsing, and error recovery.

  6. Parser Composition: The implementation composes smaller parsers into larger ones, following FastParse’s combinator approach.

ReaderV2 Implementation

ReaderV2 uses a custom tokenizer and a state machine-based approach for parsing, with significant improvements to block termination detection and object expression parsing.

Key Components

  • Lexer: Converts source code into a stream of tokens for efficient parsing
  • ReaderState: Tracks current token position, history, and pending whitespace/comments
  • ReaderContext: Contains context flags like newLineAfterBlockMeansEnds for parsing decisions
  • Token: Represents tokens like identifiers, operators, literals, with source position information
  • Token Handlers: Specialized methods for parsing different token types and structures

Characteristics

  • Pre-tokenization for efficient token stream processing
  • Separate lexing and parsing phases for cleaner code organization
  • Context-aware parsing with explicit state tracking
  • Enhanced UTF-16 aware Unicode and emoji handling
  • Robust block termination detection with the }\n pattern
  • Comprehensive object expression support with multiple key types
  • Optimized comment handling and attachment

Implementation Structure

ReaderV2 consists of:

  1. Two-Phase Parsing: Separates tokenization from parsing, with a dedicated Tokenizer creating a stream of tokens before parsing begins.

  2. State Management: The parser maintains state through two complementary objects:

    • ReaderState: Tracks token position, history, and pending whitespace/comments
    • ReaderContext: Contains context flags like newLineAfterBlockMeansEnds for syntactic decisions
    • Together they enable precise tracking of parser state and contextual information
  3. Context-Aware Processing: Context flags enable important syntactic decisions like proper block termination with the }\n pattern, while maintaining uniform symbol treatment.

  4. Optimized Comment Handling: Non-recursive methods like skipComments() and pullComments() efficiently manage comment attachment, replacing the previous recursive approach.

  5. Robust Block Termination: The special }\n pattern detection is implemented in the checkForRBraceNewlinePattern() method, which uses the newLineAfterBlockMeansEnds flag from ReaderContext to determine when blocks should end.

  6. Enhanced Object Expressions: Support for multiple key types:

    • Identifier keys (e.g., { x = 1 })
    • String literal keys (e.g., { "x" = 1 })
    • Symbol literal keys (e.g., { 'x = 1 })
    • Both = and => operators in object clauses
  7. Error Handling: The parser produces structured ParseError objects with detailed source position information and recovery mechanisms.

  8. Bottom-Up Construction: Parsing builds expressions from atoms and then extends them through continuation-based parsing in parseRest().

Key Similarities Between Implementations

Both parsers:

  1. Track source positions for error reporting
  2. Preserve comments in the AST
  3. Handle the }\n block termination pattern
  4. Produce flat operator sequences without precedence handling
  5. Parse the same language syntax
  6. Use context tracking for parsing decisions
  7. Generate identical AST structures

Key Differences Between Implementations

| Feature | ReaderV1 | ReaderV2 |
| --- | --- | --- |
| Parsing Approach | Parser combinators (FastParse) | Token-based state machine |
| Error Recovery | Limited | Enhanced with token-based recovery |
| Token Creation | On-demand during parsing | Separate tokenization phase |
| State Handling | Implicit in parse context | Explicit in ReaderState |
| Code Structure | Grammar-centric | Process-centric |
| Performance | Good | Better (especially on large files) |
| Unicode Support | Basic | Enhanced with better UTF-16 handling |

Testing Infrastructure

Chester’s test framework validates parser correctness and compatibility between V1 and V2 implementations. This framework, defined in reader/shared/src/test/scala/chester/reader/parseAndCheck.scala, provides several key testing functions:

Core Testing Functions

  1. Parser-Specific Testing:

    • parseV1(input): Parses input with V1 parser only and returns the result
    • parseV2(input): Parses input with V2 parser only and returns the result
    • parseAndCheckV1(input, expected): Tests V1 parser against expected output
    • parseAndCheckV2(input, expected): Tests V2 parser against expected output
  2. Cross-Parser Verification:

    • parseAndCheckBoth(input, expected): Tests both parsers and ensures they produce identical results
    • Tests backward compatibility and feature parity
  3. Top-Level Parsing:

    • parseTopLevelV1/V2 and parseAndCheckTopLevelV1/V2/Both: Similar functions for testing top-level parsing
    • Handle file-level parsing with multiple expressions

Error Reporting

The testing framework provides error reporting with:

  • Detailed error messages showing exact failure position
  • Visual pointer to error location in source code
  • Context-aware error descriptions
  • Comparison between expected and actual AST structures

Serialization Verification

The framework also tests that parsed expressions can be correctly serialized and deserialized (see the sketch after this list):

  • Verifies JSON serialization with read[Expr](write[Expr](value))
  • Confirms binary serialization with readBinary[Expr](writeBinary[Expr](value))
  • Ensures AST structures maintain integrity through serialization cycles
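
A round-trip check in the spirit of these tests might look like the sketch below, using the parseV2 helper and the read/write functions named above (imports and exact setup are assumed):

// A hedged sketch of a serialization round-trip test
val expr: Expr = parseV2("1 + 2")                          // parse with the V2 parser
assert(read[Expr](write[Expr](expr)) == expr)              // JSON round-trip
assert(readBinary[Expr](writeBinary[Expr](expr)) == expr)  // binary round-trip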

Test Organization

Parser tests are organized into several categories:

  1. Expression Tests: Verify parsing of individual expression types
  2. Integration Tests: Test combined language features
  3. Regression Tests: Ensure previously fixed issues don’t reoccur
  4. Migration Tests: Track progress in supporting V1 features in V2

File-Based Testing

In addition to the core testing functions, Chester implements file-based integration tests:

  • FileParserTest.scala: Tests ReaderV2 against a suite of test files in tests/parser directory
  • FileParserTestV1.scala: Tests ReaderV1 against the same test suite for comparison

These file-based tests:

  • Ensure consistency when parsing complete Chester files
  • Verify parser behavior across a wide range of syntax combinations
  • Automatically generate expected output for regression testing
  • Maintain backward compatibility during parser evolution

Future Development

ReaderV2 is the focus of ongoing development, with priorities including:

  1. Completing error recovery implementation
  2. Adding source maps support
  3. Migrating any remaining V1-only tests
  4. Expanding test coverage
  5. Optimizing token handling for better performance

See devlog.md for chronological implementation details.

ScalablyTyped Bindings Guide

This document provides information about where to find and how to use the ScalablyTyped bindings in the Chester project.

Where to Find the Bindings

The ScalablyTyped bindings for external libraries like ts-morph are generated during the build process and stored in the following locations:

Generated Source Files

The actual Scala source files for the bindings are located at:

js-typings/.js/target/streams/_global/stImport/_global/streams/sources/t/ts-morph/src/main/scala/typings/tsMorph/

Within this directory:

  • mod/ directory contains all the main classes and traits
  • anon/ directory contains anonymous types
  • tsMorphBooleans.scala, tsMorphInts.scala, etc. contain constants and types

Important Files for ts-morph

Key files for working with ts-morph:

  1. mod/SourceFile.scala - Contains the SourceFile class definition
  2. mod/StatementedNode.scala - Contains methods for accessing interfaces, classes, and type aliases
  3. mod/Project.scala - Contains the Project class for creating and working with projects

Using the Bindings in Code

There are two approaches to using ts-morph from Scala.js:

Approach 1: Using ScalablyTyped Bindings Directly

To use the ScalablyTyped bindings in your code:

  1. Import the correct namespace:

    import typings.tsMorph.mod
    
  2. When working with SourceFile to access interfaces, classes, and type aliases, note that:

    • SourceFile does not directly extend StatementedNode
    • Use methods like getInterfaces(), getClasses(), and getTypeAliases() from the appropriate trait
    • Convert JavaScript arrays to Scala lists using js.Array.from(...).toList
  3. Example access pattern:

    val project = new mod.Project()
    val sourceFile = project.addSourceFileAtPath(filePath)
    val interfaces = js.Array.from(sourceFile.getInterfaces()).toList
    

Approach 2: Using Direct JavaScript Interop

For simpler integration, especially when facing type mismatches or method access issues, you can use direct JavaScript evaluation:

import scala.scalajs.js
import scala.scalajs.js.annotation.*

def analyzeTsFile(filePath: String): String = {
  // Use direct JavaScript interop with triple quotes for multiline JavaScript
  js.Dynamic.global.eval(s"""
    function analyze(filePath) {
      const { Project } = require("ts-morph");
      
      try {
        const project = new Project();
        const sourceFile = project.addSourceFileAtPath(filePath);
        
        // Use native JavaScript APIs directly
        const interfaces = sourceFile.getInterfaces().map(iface => ({ // 'interface' is reserved in strict mode
          name: iface.getName(),
          // ... other properties
        }));
        
        return JSON.stringify(interfaces);
      } catch (e) {
        return JSON.stringify({ error: e.message });
      }
    }
    
    analyze("${filePath}");
  """).asInstanceOf[String]
}

This approach:

  • Avoids type mismatches and casting issues
  • Uses native JavaScript directly
  • Returns results as JSON strings
  • Escapes Scala-JavaScript interpolation issues using triple quotes

Common Gotchas

  1. Array Conversion: JavaScript arrays need to be converted to Scala collections using js.Array.from(...).toList

  2. Method Names: Some method names in the bindings may differ from the original TypeScript API

  3. Inheritance Hierarchy: The inheritance hierarchy in the bindings may not match the TypeScript original exactly

  4. Type Conversion: Sometimes explicit type conversion is needed when working with the bindings

  5. String Interpolation: When using direct JavaScript eval, use Scala triple quotes (""") and properly escape JavaScript template literals (${...} to $${...})

Rebuilding Bindings

If you need to rebuild the ScalablyTyped bindings:

  1. Run sbt "jsTypings/stImport" to regenerate the bindings
  2. For troubleshooting binding generation, check jsTypings/stOutputs

Comprehensive Type System Improvement Proposal

NOTE THAT THIS DOCUMENT IS OUTDATED AS RELEVANT CODE IS BEING REWRITTEN

This document outlines specific proposed improvements to Chester’s type checking system, focusing on dependent types, union and intersection types, and the integration between the reducer and type checker.

1. Overview and Background

Chester’s type system is based on a constraint propagation network where:

  • Type constraints are represented by propagators
  • Cells hold type information and track their propagators
  • Two types of propagator connections:
    • Reading: Propagators that read from a cell
    • Zonking: Propagators that can write to or resolve a cell

Recent improvements have focused on enhancing support for dependent types, which require:

  1. Types that can depend on terms
  2. Variable bindings in types with alpha-equivalence checking
  3. Sophisticated reduction strategies for type equality

2. Current Status and Progress

Note: Many key improvements have been completed and documented in the devlog entries. Refer to docs/src/dev/devlog.md for details on completed improvements, particularly those related to type system enhancements, trait implementation, and the new AutoConnect propagator.

3. Remaining Issues and Implementation Plan

3.1 ✅ Union Type Subtyping Implementation

Status: COMPLETED

The union type implementation has been completed with full support for the pipe operator (|) syntax and proper subtyping relationships. All components have been implemented as documented in the devlog.

Completed implementations include:

  • Union-to-Union subtyping (A|B <: C|D)
  • Specific-to-Union subtyping (A <: B|C)
  • Union-to-Specific subtyping (A|B <: C)
  • Cell coverage mechanisms for union types
  • Proper type checking for union types in all contexts

3.2 ✅ Removal of the EnsureCellCoverage Hack

Status: COMPLETED

The EnsureCellCoverage hack has been replaced with a proper AutoConnect propagator that establishes meaningful type relationships. See the devlog for implementation details.

Key improvements include:

  • Analysis of term structure to create proper type connections
  • Smart handling of union and intersection types
  • Specialized support for function calls and their arguments
  • Default value support for truly unconstrained type variables
  • Complete removal of all EnsureCellCoverage instances

All success criteria have been met:

  • All cells are covered by meaningful propagators with actual logic
  • No explicit EnsureCellCoverage instances in the codebase
  • Union types and other complex types work correctly
  • Type errors are detected and reported accurately
  • Tests pass with proper constraint checking
  • Improved code clarity and maintainability

3.3 Enhanced Type-Level Function Application Reduction

Current Limitation

The current type checker supports basic type-level function applications, but has limited handling of nested or recursive function applications in type-level contexts. When complex type-level expressions involve multiple nested function calls, the reducer may not properly evaluate them during type checking, leading to:

  1. Type errors due to incomplete reduction of nested function applications
  2. Reduced flexibility when using type-level functions in complex ways
  3. Unclear error messages when type-level function applications fail

Implementation Plan

The implementation requires focused changes to:

  1. DefaultReducer Enhancement:

    • Improve handling of nested type-level function applications
    • Implement recursive handling of type-level function results
    • Ensure consistent reduction behavior for composed functions
  2. Type Checking Integration:

    • Further enhance the tryUnify method to handle complex function call terms
    • Ensure proper cell coverage for nested function applications
    • Add guards to prevent pattern matching conflicts

Testable Example

// Test enhanced type-level function application
record A(a: Integer);
record B(b: String);

// Basic identity function for types
def idType(x: Type): Type = x;

// Function composition at the type level
def composeTypes(f: Type -> Type, g: Type -> Type, x: Type): Type = f(g(x));

// Test basic composition
let aT = composeTypes(idType, idType, A);
def getA(x: aT): Integer = x.a;  // Should work via reduction

Success Criteria

  1. Nested function applications are properly reduced during type checking
  2. Field access on types produced by composed functions works correctly
  3. The original function applications are preserved in elaborated results
  4. No “cells not covered by any propagator” errors occur during zonking

4. Testing Strategy

4.1 Create Specialized Tests for Dependent Types

Implement tests that verify:

  • Function types with type dependencies
  • Equality of types with different variable names but same structure
  • Union and intersection types with alpha-equivalent components

4.2 Union Type Testing Cases

Completed test cases:

// Widening (Success)
def f(x: Integer): Integer | String = x;
f(42);

Additional test cases needed:

// Nested Union Types
def complex(x: (Integer | String) | Boolean): (Integer | String) | Boolean = x;

// Union with parametric types
def generic<T>(x: T | String): T | String = x;

// Intersection with union
def mixed(x: (A & B) | C): (A & B) | C = x;

5. Implementation Steps

5.1 Phase 1: Core Type System Improvements

Status: Most core improvements have been completed.

Completed improvements (see devlog for details):

  • ✅ Enhanced Type Structure Reduction in DefaultReducer
  • ✅ Alpha-Equivalence Checking in TyckPropagator
  • ✅ Enhanced Type Level Comparison
  • ✅ Cell Coverage Mechanisms with AutoConnect propagator
  • ✅ Union Type Subtyping Implementation
  • ✅ Basic Trait Implementation

Remaining items to complete:

  • Add more complex test cases for nested union types
  • Complete enhanced type-level function application for complex nested cases
  • Add test cases for complex type-level functions

5.2 Phase 2: Advanced Type Features

  • Test with complex type-level function examples
  • Verify all success criteria are met
  • Add more test cases for edge cases
  • Document implementation details and usage patterns

6. Design Principles to Follow

6.1 Term Preservation

  • Keep original terms in elaborated results
  • Never reduce during elaboration
  • Only use reduction for type-level comparisons
  • Preserve source structure for better error reporting

6.2 Reduction Strategy

  • Only reduce during type equality checking
  • Use ReduceMode.TypeLevel for these internal reductions
  • Use proper reduction context from current context
  • Never reflect internal reductions in output

6.3 Documentation

  • Keep this document updated with implementation progress
  • Document design decisions and trade-offs
  • Maintain clear test cases for each feature

7. Success Criteria

  1. All tests pass, including the specialized dependent type tests
  2. The type checking system correctly handles:
    • Complex dependent type scenarios
    • Intersection type comparisons
    • Union type subtyping
    • Type-level function applications
  3. Documentation clearly explains dependent type concepts and usage patterns
  4. Meta variables in complex types resolve correctly

8. Running Tests

To run the tests for the type checker specifically, use the following SBT command:

# Run the FilesTyckTest suite to test the type checking system
sbt "rootJVM/testOnly chester.tyck.FilesTyckTest"

⚠️ IMPORTANT WARNING: Do NOT use the -z test filter option (e.g., sbt "rootJVM/testOnly -- -z pattern") as it is broken and produces unreliable results. Instead, run the entire test suite and review the results.

8.5 Quick Test Guide

Basic Test Cases By Feature

Below are minimal test cases for each feature that can be used for quick verification. Create these in separate .chester files in the tests/tyck/ directory to validate implementation.

1. Union Type Tests

// FILE: union_basic.chester
// Tests basic union type assignment
let a: Integer | String = 42;
let b: Integer | String = "hello";

// FILE: union_function.chester
// Tests function with union type parameters and return
def identity(x: Integer | String): Integer | String = x;
let a = identity(42);
let b = identity("hello");

// FILE: union_specific.chester
// Tests union-to-specific subtyping
def expectNumber(x: Number): Number = x;
let a: Integer | Float = 42;
expectNumber(a);  // Should succeed if Integer|Float <: Number

2. Trait Tests

// FILE: trait_basic.chester
// Tests basic trait implementation
trait Showable {
  def show: String;
}

record Person(name: String, age: Integer) <: Showable {
  def show: String = name;
}

let p = Person("Alice", 30);
let s: Showable = p;
let name = s.show;

// FILE: trait_field.chester
// Tests trait field access
trait HasName {
  def name: String;
}

record User(name: String, email: String) <: HasName;

def getName(x: HasName): String = x.name;
let u = User("Bob", "bob@example.com");
getName(u);

3. Type-Level Function Tests

// FILE: type_function.chester
// Tests type-level function application
def idType(x: Type): Type = x;

record Point(x: Integer, y: Integer);
let pointType = idType(Point);

def makePoint(p: pointType): pointType = p;
let p = Point(1, 2);
makePoint(p);

Running Individual Test Cases

To run a specific test file:

# Compile and run a specific test file
sbt "rootJVM/testOnly chester.tyck.FilesTyckTest -- -only add.chester"

Test Verification Checklist

For each test case, verify:

  1. ✅ Compilation succeeds without errors
  2. ✅ Type checks pass correctly
  3. ✅ No “cells not covered by any propagator” errors
  4. ✅ Error messages are clear and helpful when intentional errors are introduced

9. ✅ Trait Implementation Plan

Status: BASIC IMPLEMENTATION COMPLETED

Basic trait implementation has been completed and documented in the development log. See the devlog for implementation details.

Future Enhancements for Traits

While basic trait functionality is working, the following enhancements are planned for future implementation:

  • Complete field requirement verification for traits
  • Multiple trait inheritance support
  • Trait method and default implementations
  • More comprehensive trait test cases
  • Advanced trait composition patterns

10. ✅ Detailed Plan for Complex Union Types

Status: BASIC IMPLEMENTATION COMPLETED

Many of the planned union type improvements have been completed and documented in the devlog. The implementation includes proper handling of union subtyping and cell coverage through the new AutoConnect propagator.

Future Work for Complex Union Types

To further extend support for complex union types (nested unions, unions with generic types), the following detailed technical steps are still needed:

  1. Enhance Parser and Desalter Support:

    • Improve handling of parenthesized union types
    • Add special handling for union types with generic parameters
    • Support deeper nested union types
  2. Strengthen Type Checking for Advanced Cases:

    • Enhance handling of nested unions by “flattening” when appropriate
    • Improve error reporting for complex union scenarios
    • Update unification to handle multi-level nested unions properly
  3. Expand the Test Suite:

    • Add specific unit tests for each subtyping scenario
    • Test nested unions with different depths
    • Test unions with different combinations of types
    • Test function types with union parameters and return types
    • Test generic type parameters with union bounds
    • Test more complex scenarios combining unions, intersections, and generic types

Success Metrics

The implementation will be considered successful when:

  1. All test cases pass without errors
  2. The type checker correctly handles complex nested unions
  3. No “cells not covered” exceptions occur
  4. Type error messages are clear and help identify the issue
  5. The implementation can scale to handle arbitrarily complex union types

11. Experimental Implementation Notes: Union Type Improvements

During an experimental implementation attempt to fix union type handling, the following specific changes were made to improve union type subtyping and prevent cell coverage errors:

11.1 Union-to-Specific Type Relationship Changes

The key issue identified was in how union types were converted to specific types (e.g., Integer | String to Integer):

Original Implementation Issues:

  • In TyckPropagator.scala, the unionSpecificCompatible method only checked if ANY component of the union was compatible with the specific type
  • This allowed unsound expressions like def f(x: Integer | String): Integer = x to type-check

Specific Changes Made:

  • Modified unionSpecificCompatible method to check if ALL union components are compatible with the specific type
  • Changed the implementation logic from unionTypes.exists(compatible) to !unionTypes.exists(!compatible) (equivalent to unionTypes.forall(compatible))
  • Added explicit error reporting in handleUnionSpecific method within the Unify case class
  • Enhanced debug output to include step-by-step component compatibility checks
  • Modified Elaborater.unify method’s union case to properly handle specific-to-union compatibility

11.2 DefaultValuePropagator Implementation

To solve the “cells not covered by any propagator” errors:

Specific Implementation Changes:

  • Rewrote the ensureDefaultValue[T] method (around line 1087 in TyckPropagator.scala):

    • Added explicit state.addPropagator call to add a dedicated propagator
    • Added explicit state.fill call to ensure cells have values
    • Added diagnostic logging to track cell state
  • Created a new DefaultValuePropagator[T] case class with:

    • Set score = 100 to give it high priority
    • Implemented run, defaulting, and naiveFallbackZonk methods
    • Added proper cell tracking with readingCells, writingCells, and defaultingCells
    • Added custom identify method returning Some(id) to prevent propagator removal

11.3 Infinite Recursion Prevention

Specific changes to break cyclic dependencies:

  • In UnionOf propagator:

    • Removed recursive calls to unify that could cause infinite loops
    • Used early returns and simplified value checking logic
    • Added guard conditions before recursive propagator creation
  • In the handleUnionSpecific method:

    • Used ensureDefaultValue for each union component instead of creating linked propagators
    • Added filtered component selection before creating propagator connections
    • Used explicit value discarding with statements like val _ = ensureDefaultValue(unionType)

11.4 Test File Updates

Specific file changes:

  • Moved tests/tyck-fails/union-subtype-fail.chester.todo to active status by removing .todo extension
  • Updated the expected behavior of def f(x: Integer | String): Integer = x to correctly fail
  • Modified tests/tyck/simple-union.chester.todo to ensure it properly tests union widening

11.5 Key Identifier Changes for Future Reference

The most significant method and identifier changes were:

  • unionSpecificCompatible: Changed compatibility check logic
  • handleUnionSpecific: Rewrote to handle ALL union components
  • ensureDefaultValue: Enhanced with propagator creation
  • Added new DefaultValuePropagator class
  • Modified Unify.defaulting case for union handling
  • Updated union component debug logging using Debug.debugPrint

These changes collectively represent an approach to fixing union type subtyping that ensures sound type checking while preventing “cells not covered” errors through improved propagator management.

Type Checking System: Propagator Network and Design

NOTE THAT THIS DOCUMENT IS OUTDATED AS RELEVANT CODE IS BEING REWRITTEN

Quick Start Guide

Chester’s type checking system is powered by a propagator network - a constraint-based approach that allows for complex type relationships.

Simple Visual Overview

                      ┌─────────────┐
                      │ Type System │
                      └─────────────┘
                             │
                             ▼
                     ┌───────────────┐
                     │ Type Checking │
                     └───────────────┘
                             │
                             ▼
┌────────────────────────────────────────────────────┐
│                 Propagator Network                 │
├────────────────┬─────────────────┬────────────────┤
│                │                 │                │
│    ┌───────┐   │    ┌───────┐    │   ┌───────┐   │
│    │ Cell  │◄──┼────┤Prop 1 │◄───┼───┤ Cell  │   │
│    └───────┘   │    └───────┘    │   └───────┘   │
│        ▲       │        ▲        │       ▲       │
│        │       │        │        │       │       │
│    ┌───────┐   │    ┌───────┐    │   ┌───────┐   │
│    │Prop 2 │◄──┼────┤ Cell  │◄───┼───┤Prop 3 │   │
│    └───────┘   │    └───────┘    │   └───────┘   │
│                │                 │                │
└────────────────┴─────────────────┴────────────────┘

Key Concepts in 30 Seconds

  1. Cells - Hold type information and track connections to propagators
  2. Propagators - Define constraints between cells and propagate type information
  3. Network - The collection of cells and propagators that work together

When type checking:

  • Cells store type information (like “this variable has type Integer”)
  • Propagators enforce constraints (like “parameter type must match argument type”)
  • When a cell value changes, connected propagators activate to propagate that change

This reactive network allows complex type relationships (like union types) to be modeled effectively.

Architecture

1. Core Components

  1. Cells (HoldCell)

    • Hold type information and state
    • Track propagator connections:
      class HoldCell[+T <: CellRW[?,?]] {
        var store: CellRW[?,?]              // Current value
        var didChange: Boolean          // Change tracking
        var readingPropagators: Vector[PIdOf[Propagator[?]]]
        var zonkingPropagators: Vector[PIdOf[Propagator[?]]]
      }
      
  2. Propagators

    • Base trait defining propagator behavior:
      trait Propagator[Ability] {
        def readingCells: Set[CellIdAny]
        def writingCells: Set[CellIdAny]
        def defaultingCells: Set[CellIdAny]
        def run(using state: StateAbility[Ability], more: Ability): Boolean
        def naiveZonk(needed: Vector[CellIdAny]): ZonkResult
      }
      

2. Key Propagator Types

  1. Unify Propagator

    • Handles type unification and subtyping
    • Special cases for:
      • Meta variables
      • Union types
      • Intersection types
      • List types
      • Record types
  2. UnionOf Propagator

    • Manages union type construction
    • Handles:
      • Component type collection
      • Meta variable resolution
      • Type compatibility checks
  3. LiteralType Propagator

    • Handles literal type inference
    • Manages type constraints for literals
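
As a concrete illustration of the Propagator trait above, a minimal propagator that copies a value from one cell to another might look like the following sketch. The state methods readStable and fill and the ZonkResult.Require constructor are assumptions for illustration.

// A hedged sketch; state.readStable, state.fill, and ZonkResult.Require
// are illustrative assumptions about the state API.
case class CopyCell[Ability](from: CellId[Term], to: CellId[Term]) extends Propagator[Ability] {
  override def readingCells: Set[CellIdAny] = Set(from)
  override def writingCells: Set[CellIdAny] = Set(to)
  override def defaultingCells: Set[CellIdAny] = Set(to)

  // Returns true when the propagator has done its work and can be removed
  override def run(using state: StateAbility[Ability], more: Ability): Boolean =
    state.readStable(from) match {
      case Some(value) => state.fill(to, value); true
      case None        => false // wait until `from` has a value
    }

  // When zonking `to`, ask the solver to resolve `from` first
  override def naiveZonk(needed: Vector[CellIdAny]): ZonkResult =
    ZonkResult.Require(Vector(from))
}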

3. Propagation Process

  1. Registration

    def addPropagatorGetPid[T <: Propagator[Ability]](propagator: T): Unit = {
      // 1. Create propagator holder
      val id = new HoldPropagator[T](uniqId, propagator)
      
      // 2. Register with reading cells
      for (cell <- propagator.readingCells) {
        cell.readingPropagators :+= id
      }
      
      // 3. Register with zonking cells
      for (cell <- propagator.defaultingCells) {
        cell.zonkingPropagators :+= id
      }
      
      // 4. Initial run
      if (propagator.run) {
        id.alive = false
      }
    }
    
  2. Execution

    def tick(using more: Ability): Unit = {
      while (didChanged.nonEmpty) {
        val id = didChanged.removeHead()
        if (id.didChange) {
          id.didChange = false
          // Run reading propagators
          for (p <- id.readingPropagators if p.alive) {
            if (p.store.run) {
              p.alive = false
            }
          }
        }
      }
    }
    

4. Union Type Subtyping

Chester supports union types (A|B) with a sophisticated subtyping relationship managed by the propagator network. The subtyping rules are implemented in the unify method in Elaborater.scala.

Union Subtyping Rules

  1. Union-to-Union Subtyping: (A|B) <: (C|D)

    • For each type in the right union, at least one type in the left union must accept it
    • Implemented by creating propagator connections between compatible component types
  2. Specific-to-Union Subtyping: A <: (B|C)

    • A specific type can be used where a union is expected if it’s compatible with any union member
    • Example: Passing an Integer to a function expecting Integer|String
  3. Union-to-Specific Subtyping: (A|B) <: C

    • A union can be assigned to a specific type if all union members are compatible with that type
    • Example: Returning an Integer|Float from a function that promises to return Number

Implementation Challenges

The union type subtyping implementation addresses several challenges:

  1. Cell Coverage: Ensuring all cells (including component types) are properly covered by propagators
  2. Meta Variables in Unions: Special handling for meta variables that appear in unions
  3. Early Return Prevention: Avoiding early returns that could leave cells uncovered
  4. Component Tracking: Ensuring each component of a union has proper propagator connections

Current Implementation Status

The implementation of the core type system features has been completed with recent significant enhancements. Major milestones include:

1. Cell Coverage Improvements (2025-04-21)

The previously “hacky” approach to cell coverage has been completely redesigned:

  • Removed the AutoConnect propagator and related indirection
  • Implemented direct type relationship handling during unification
  • Created explicit relationships between types directly at creation/unification points
  • Simplified codebase by removing several layers of indirection
  • Maintained the same type checking capabilities with a cleaner implementation

2. Union Type Subtyping (2025-03-25)

Union types are now fully implemented with support for all subtyping relationships:

  • Union-to-Union: (A|B) <: (C|D) with proper component compatibility
  • Specific-to-Union: A <: (B|C) for cases like passing Integer to Integer|String
  • Union-to-Specific: (A|B) <: C for returning unions from specific return type functions

3. Trait Implementation (2025-03-19)

Basic trait support is now available:

  • Empty traits and record extension using <: syntax
  • Trait-record subtyping relation in type system
  • TraitTypeTerm representation with proper error reporting
  • Context tracking for trait processing

Remaining Challenges

  1. Type-Level Computation

    • Further improvements to recursive type-level function applications
    • Advanced dependent types with multiple levels of abstraction
    • More comprehensive testing of type-level computation
  2. Advanced Trait Features

    • Complete field requirement verification
    • Multiple trait inheritance
    • Trait methods and default implementations
    • Trait-to-trait inheritance constraints

Testing Strategy

1. Coverage Tests

  • Test basic cell coverage
  • Test union type component coverage
  • Test meta variable coverage

2. State Tests

  • Test propagator lifecycle
  • Test union type state
  • Test meta variable resolution

3. Integration Tests

  • Test complex type scenarios
  • Test error handling
  • Test performance

Trait Implementation

Chester’s type system now supports traits and record-trait subtyping relationships through the <: syntax, with the following features implemented:

1. Trait Definition and Implementation

// Define a trait
trait WithName {
  def name: String;
}

// Record implementing a trait
record Person(name: String, age: Integer) <: WithName;

// Using the record with correct field
def getName(p: Person): String = p.name;

2. Trait Subtyping Rules

The type system implements several trait-related subtyping rules:

  1. Record-Trait Subtyping: Records that extend traits are considered subtypes of those traits
  2. Trait-Record Compatibility: Traits can be used where their implementing records are expected
  3. Trait-Trait Inheritance: Traits can extend other traits

3. Implementation Components

The trait implementation consists of several key components:

  1. TraitTypeTerm in Term.scala for trait type representation
  2. TraitStmtTerm for trait definitions with optional bodies
  3. processTraitStmt in ElaboraterBlock.scala to handle trait declarations
  4. checkTraitImplementation in TyckPropagator.scala to verify trait implementation
  5. Special context tracking with withProcessingType to handle trait bodies

4. Future Enhancements

Future work will focus on:

  • Complete field requirement verification
  • Multiple trait inheritance
  • Trait methods and default implementations

Best Practices for Cell Management

OnceCell Usage Guidelines

The OnceCell implementation is a specialized cell type that enforces single-assignment semantics. Proper usage requires careful attention to avoid errors:

1. Cell Filling Best Practices

  • Always check before filling: Before calling fill() on any cell, check if it already has a value using state.readUnstable(cell).
  • Handle already-filled cells gracefully: When a cell already has a value, check if new value equals existing value.
  • Use debug assertions: Include debug prints for cell operations to trace propagation flow.
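
A sketch of the check-before-fill pattern described above; readUnstable and fill are the methods named in these guidelines, while the rest is illustrative:

// A hedged sketch of defensive cell filling for OnceCell semantics.
def safeFill[T](cell: CellId[T], value: T)(using state: StateAbility[?]): Unit =
  state.readUnstable(cell) match {
    case Some(existing) =>
      // Already filled: an identical value is fine, a conflict is an error
      if (existing != value)
        throw new IllegalStateException(s"conflicting fill: $existing vs $value")
    case None =>
      state.fill(cell, value) // first (and only) assignment
  }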

2. Propagator Implementation Guidelines

  • Declare cell dependencies correctly: Ensure all propagators correctly declare their reading and zonking cell dependencies.
  • Implement naiveZonk correctly: The naiveZonk method should return appropriate dependencies for zonking.
  • Be idempotent: Propagator’s run method should be idempotent - multiple calls with the same input should produce the same output.

3. Type-Level Function Handling

When implementing propagators that handle type-level functions:

  • Watch for recursive effects: Type-level functions may cause recursive propagation attempts
  • Avoid modifications during reduction: When reducing for type equality, don’t modify the original terms
  • Use correct reduction mode: Use ReduceMode.TypeLevel for reductions needed for type equality checks

TypeScript Backend Implementation

Overview

This document outlines the implementation plan for a TypeScript backend in the Chester compiler. The backend will take type-checked Chester code and generate equivalent TypeScript code, leveraging the existing JavaScript AST infrastructure.

Goals

  • Create a TypeScript code generator that uses the existing JavaScript AST
  • Ensure proper handling of Chester’s type system in TypeScript output
  • Support TypeScript-specific features like interfaces, type annotations, and generics
  • Maintain type safety between Chester and TypeScript

Current Status

  • The JavaScript AST (compiler/shared/src/main/scala/chester/targets/js/AST.scala) already includes TypeScript-specific nodes
  • We need to implement the transformation from Chester’s type-checked AST to TypeScript AST
  • We need to implement a code generator for TypeScript

Implementation Tasks

1. TypeScript AST Enhancements

While the existing JavaScript AST includes TypeScript nodes, we may need to extend it with additional TypeScript-specific features:

  • Ensure all TypeScript type annotations are properly represented
  • Add support for TypeScript-specific syntax like readonly, namespace, etc.
  • Implement TypeScript module system support

2. Chester to TypeScript Type Mapping

Create a mapping between Chester’s type system and TypeScript types:

| Chester Type | TypeScript Type |
| --- | --- |
| Integer | number |
| String | string |
| Boolean | boolean |
| Unit | void |
| Type | any |
| Function | Function |
| Record | interface |
| Union | union type |
| Effect | (see below) |

3. Effect System Handling

For Chester’s effect system, we have several options for TypeScript representation:

  1. Type-based approach: Represent effects as part of function types

    type IOEffect<T> = T & { readonly __io: unique symbol };
    type StateEffect<T> = T & { readonly __state: unique symbol };
    
  2. Comment-based approach: Use TypeScript comments to document effects

    /** @effect IO */
    function print(message: string): void { ... }
    
  3. Runtime checking: Implement runtime effect checking in TypeScript

    function withEffects<T>(fn: () => T, effects: Effect[]): T { ... }
    

The recommended approach is #1, as it provides compile-time checking in TypeScript.

4. Code Generator Implementation

Implement a TypeScript code generator with the following components:

  • AST Transformer: Convert Chester AST to TypeScript AST
  • Type Transformer: Convert Chester types to TypeScript types
  • Effect Transformer: Handle effect annotations
  • Code Emitter: Generate TypeScript code from the AST

Code Emitter

Transforms the TypeScript AST into a valid TypeScript code string.

Implementation Plan

The backend involves several key components:

1. AST Definition (js.AST.scala)

Defines the structure of the target JavaScript/TypeScript Abstract Syntax Tree (AST).

2. AST Transformer

Converts the type-checked Chester core AST (chester.syntax.core.Term) into the target js.AST.

  • Node Mapping: Map each relevant core.Term node to its equivalent js.AST node(s).
  • Type Mapping: Translate Chester types (including unions, records) to TypeScript types.
  • Effect Transformer: Handle effect annotations (potentially via comments or specific code structures).

3. Code Emitter

Transforms the js.AST into a valid TypeScript code string.

Current Status (as of YYYY-MM-DD)

  • The basic AST node definitions (js.AST.scala) exist in compiler/shared/src/main/scala/chester/targets/js/.
  • A placeholder backend object (Backend.scala) has been created in the same directory (chester.targets.js package). It expects chester.syntax.core.Term as input and contains basic transformation logic for some nodes, but needs refinement and completion.
  • The AST Transformer logic within Backend.scala is incomplete and requires verification against the actual core.Term structure.
  • The detailed Code Emitter logic has not yet been implemented.
  • The integration of this backend into the main compilation pipeline or test infrastructure needs to be done.

Challenges

  • Mapping Chester’s type system (including union types, structural types) to TypeScript’s type system.
  • Handling effects and ensuring the generated code respects them (perhaps via comments or specific function signatures).

5. Integration with Compiler Pipeline

  • Add a TypeScript target option to the compiler
  • Integrate the TypeScript backend with the existing compilation pipeline
  • Ensure proper error handling and reporting

Implementation Plan

  1. Phase 1: Basic TypeScript Generation

    • Implement basic AST transformation
    • Handle primitive types and simple functions
    • Generate valid TypeScript code without effects
  2. Phase 2: Advanced Type Features

    • Implement generics
    • Handle record types and interfaces
    • Support union and intersection types
  3. Phase 3: Effect System Integration

    • Implement effect type representation
    • Handle effect propagation in TypeScript
    • Ensure effect safety in generated code
  4. Phase 4: Optimization and Refinement

    • Optimize generated TypeScript code
    • Improve readability of output
    • Add source mapping for debugging

Example Transformation

Chester Input:

// Function with an effect
def print(message: String) : Unit / IO = ()

// Function that uses the effect
def hello() : Unit / IO = {
  print("Hello")
}

// Pure function (no effects)
def pure() : Integer = 123

TypeScript Output:

// Function with an effect
function print(message: string): IOEffect<void> {
  // Implementation
  return undefined as IOEffect<void>;
}

// Function that uses the effect
function hello(): IOEffect<void> {
  print("Hello");
  return undefined as IOEffect<void>;
}

// Pure function (no effects)
function pure(): number {
  return 123;
}

// Effect type definitions
type Effect = { readonly __effect: unique symbol };
type IOEffect<T> = T & { readonly __io: unique symbol };
type StateEffect<T> = T & { readonly __state: unique symbol };

Success Criteria

  • The TypeScript backend can generate valid TypeScript code from Chester programs
  • The generated TypeScript code maintains the type safety of the original Chester code
  • Effects are properly represented and checked in the TypeScript output
  • The TypeScript code is readable and follows TypeScript best practices

Union Types Implementation

NOTE THAT THIS DOCUMENT IS OUTDATED AS RELEVANT CODE IS BEING REWRITTEN

This document outlines the implementation and current state of union types in Chester.

Current State

Union types are now fully implemented in Chester with both parsing and type checking support. The implementation can handle all three key subtyping relationships: union-to-union, specific-to-union, and union-to-specific.

Implementation Overview

The following components have been implemented to support union types:

1. Parser & Desalting (Desalt.scala)

  • Union Type Parsing: Implemented in the desugar method to handle union type expressions:
    • Detects | operators in type expressions
    • Creates a UnionTypeExpr node with component types
    • Handles nested union types properly
    • Preserves source position information for error reporting
    • Respects operator precedence and grouping

The parser now correctly recognizes syntax like:

// Simple union type
def accept(value: Integer | String) = { ... }

// Nested union types
def process(data: Integer | String | (Array | Object)) = { ... }

// Union in a type annotation
let value: Integer | String = "hello";
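
Internally, the desugaring step can flatten a chain of | operators into a single UnionTypeExpr node. The following self-contained Scala sketch illustrates the idea; apart from UnionTypeExpr, all names are assumptions, and the real code in Desalt.scala differs (it also records source positions, for instance):

// Simplified expression shapes; the real AST lives in Desalt.scala.
sealed trait Expr
final case class TypeName(name: String) extends Expr
final case class Infix(lhs: Expr, op: String, rhs: Expr) extends Expr
final case class UnionTypeExpr(components: Vector[Expr]) extends Expr

// Flatten `A | B | C` (parsed as nested Infix nodes) into
// UnionTypeExpr(Vector(A, B, C)); leave other expressions untouched.
def desugarUnion(expr: Expr): Expr = {
  def flatten(e: Expr): Vector[Expr] = e match {
    case Infix(l, "|", r) => flatten(l) ++ flatten(r)
    case other            => Vector(desugarUnion(other))
  }
  expr match {
    case Infix(_, "|", _) => UnionTypeExpr(flatten(expr))
    case other            => other
  }
}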

2. Type Elaboration (Elaborater.scala)

  • Union Type Elaboration: Implemented handling for union type expressions in the type checking system:

    • Elaborate each component type in the union
    • Create proper Union term with component types
    • Register appropriate propagators for type constraints
    • Maintain connections between union types and components
  • Union Type Unification: Implemented support for all three subtyping relationships:

    • Union-to-Union: (A|B) <: (C|D) with proper component compatibility checks
    • Specific-to-Union: A <: (B|C) for cases like passing Integer to a parameter of type Integer|String
    • Union-to-Specific: (A|B) <: C for returning a union from a function with specific return type
  • Type-Level Functions with Union Types: Added support for type-level functions that:

    • Return union types
    • Accept union types as arguments
    • Process union components correctly

The implementation enables code patterns like:

// Accepting a union type parameter
def process(value: Integer | String): String = {
  match value {
    case i: Integer => i.toString()
    case s: String => s
  }
}

// Returning a union type
def fetch(): Integer | String | Null = {
  if (hasData()) getData()
  else null
}

// Generic context: T can itself be instantiated with a union type,
// e.g. firstOrDefault[Integer | String](list, default)
def firstOrDefault[T](list: List[T], default: T): T = {
  if (isEmpty(list)) default else first(list)
}
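
Elaboration then maps each UnionTypeExpr component to a core term and wraps the results in a Union term. The sketch below is a minimal self-contained illustration; the real Elaborater.scala additionally registers propagators and tracks type-level metadata:

// Simplified surface and core shapes; the real ones live in Term.scala.
sealed trait Expr
final case class TypeName(name: String) extends Expr
final case class UnionTypeExpr(components: Vector[Expr]) extends Expr

sealed trait Term
final case class TypeRef(name: String) extends Term
final case class Union(components: Vector[Term]) extends Term

// Elaborate a type expression into a core Term, recursing into
// each union component.
def elabType(expr: Expr): Term = expr match {
  case TypeName(n)               => TypeRef(n)
  case UnionTypeExpr(components) => Union(components.map(elabType))
}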

3. Type Propagator (TyckPropagator.scala)

  • UnionOf Propagator: Implemented to handle union type constraints:

    • Manages relationships between union types and their components
    • Enforces proper subtyping relationships with union types
    • Handles cell coverage for all union components
    • Ensures proper zonking of union types
  • Enhanced Type Compatibility for Union Types: Implemented union type compatibility with three key cases:

    1. Union-to-Union Compatibility:

      case (Union(types1, _), Union(types2, _)) => {
        // Union-to-union: every component of the left union must be
        // compatible with at least one component of the right union.
        // (isCompatible is an illustrative helper, not the real API.)
        types1.forall(t1 => types2.exists(t2 => isCompatible(t1, t2)))
      }
      
    2. Term-to-Union Compatibility:

      case (term, Union(types, _)) => {
        // Term-to-union: the term must be compatible with at least
        // one component of the union. (isCompatible is illustrative.)
        types.exists(t => isCompatible(term, t))
      }
      
    3. Union-to-Term Compatibility:

      case (Union(types, _), term) => {
        // Union-to-term: every component of the union must be
        // compatible with the term. (isCompatible is illustrative.)
        types.forall(t => isCompatible(t, term))
      }
      
  • Cell Coverage Implementation: Added comprehensive cell coverage mechanisms:

    • Direct connections between union types and their components
    • Self-coverage for component types
    • Enhanced zonking capabilities for union types
    • Prevention of early returns that could leave cells uncovered

4. Test Framework and Validation

  • Comprehensive Test Suite: Added a complete set of tests to validate union type functionality:

    • Basic union type syntax tests
    • Union type subtyping tests (all three relationship types)
    • Union type pattern matching tests
    • Function with union type parameters and return values
    • Cell coverage tests for union types
    • Edge cases and error handling tests
  • Test Files:

    • tests/tyck/simplest_union.chester: Basic union type functionality
    • tests/tyck/union-subtype.chester: Union subtyping relationships
    • tests/tyck/union-pattern-matching.chester: Pattern matching with union types
    • Various integration tests using union types with other language features

Current Status and Future Work

Completed

  1. ✅ Parser support for union type syntax
  2. ✅ Type elaboration for union types
  3. ✅ Full union type subtyping relationships
  4. ✅ Cell coverage mechanisms for union types
  5. ✅ Error reporting for union type mismatches
  6. ✅ Integration with trait types and interfaces
  7. ✅ Pattern matching with union types

Future Enhancements

  1. More comprehensive error messages for specific union type errors
  2. Performance optimizations for complex union types
  3. Enhanced type inference with union types
  4. Integration with effect system
  5. Compiler backend optimizations for union types

References

  • Main implementation files:

    • Desalt.scala (parser implementation)
    • Elaborater.scala (type checking implementation)
    • TyckPropagator.scala (propagator implementation)
    • Term.scala (union type representation)
  • Test files:

    • tests/tyck/simplest_union.chester
    • tests/tyck/union-subtype.chester
    • Other integration test files
  • Related documentation:

    • Type checking system documentation
    • Trait implementation documentation

Chester Guide

Record Syntax in Chester

Chester provides a concise and powerful syntax for defining records, which are similar to structs or classes in other languages. Records in Chester are immutable by default and provide a convenient way to group related data.

Basic Record Syntax

The basic syntax for defining a record in Chester is as follows:

record RecordName(field1: Type1, field2: Type2, ...) { ... }

Here’s a simple example of a Person record:

record Person(name: String, age: Int);

Using Records

Once defined, you can create instances of records and access their fields:

let alice = Person("Alice", 30);
println(alice.name);  // Outputs: Alice
println(alice.age);   // Outputs: 30

Statements

Scope of let and def

In Chester, let and def are used to declare bindings, but they differ in how they handle scoping and forward references. Understanding these differences is crucial for writing correct and efficient Chester programs.

let Bindings

  • Local Scope: let bindings are only visible after their declaration within the current block.
  • No Forward References: You cannot reference a let binding before it’s declared.
  • Type Inference: If no type annotation is provided, the compiler infers the type from the binding’s body.

Example:

// Correct usage of 'let'
let x = 5;
let y = x; // 'x' is defined before use

// Incorrect usage of 'let'
let y = x + 2; // Error: 'x' is not defined yet
let x = 5;

def Bindings

  • Block-Wide Scope: def bindings are visible throughout the entire enclosing block, even before their declaration.
  • Allows Forward References: You can reference a def binding before it’s declared.
  • Type Annotation Required for Forward References: If you use a def binding before its declaration, you must provide a type annotation.

Example:

// Correct usage of 'def' with type annotation
def y = square(5); // 'square' is used before its declaration

def square(n: Int) = n * n; // Type annotation for 'n' is required

// Incorrect usage of 'def' without type annotation
def y = increment(5); // 'increment' is used before its declaration

def increment(n) = n + 1; // Error: Missing type annotation for 'n'

Summary of Scoping Rules

  • let Bindings:

    • Visible only after their declaration within the current block.
    • Do not allow forward references.
    • Type annotations are optional if the type can be inferred.
  • def Bindings:

    • Visible throughout the entire block.
    • Allow forward references.
    • Require type annotations when used before their declarations.

Compiler Behavior

When processing a block, the Chester compiler handles let and def bindings differently to manage scope and type checking.

Processing def Bindings

  1. Collection Phase:

    • The compiler collects all def bindings, noting their names, type annotations, and identifiers.
    • It tracks forward references to detect usages before declarations.
  2. Type Annotation Checks:

    • For forward-referenced def bindings without type annotations, the compiler reports a MissingTypeAnnotationError.
  3. Context Updates:

    • The compiler adds placeholders or inferred types to the context, allowing forward-referenced def bindings to be used.

Processing let Bindings

  • Sequential Processing:
    • let bindings are processed in order of their appearance.
    • Each let binding is added to the context after its declaration.
  • No Forward References:
    • Referencing a let binding before its declaration results in an error.
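
The two passes can be sketched as follows. All names here are illustrative rather than the actual compiler API, and for brevity this sketch rejects every un-annotated def, whereas the real compiler only reports MissingTypeAnnotationError for defs that are actually forward-referenced:

// Minimal model of block processing for let/def scoping;
// types are represented as plain strings for simplicity.
sealed trait Stmt
final case class Let(name: String, ty: Option[String], body: String) extends Stmt
final case class Def(name: String, ty: Option[String], body: String) extends Stmt

final case class MissingTypeAnnotationError(name: String)
    extends Exception(s"missing type annotation for def '$name'")

def processBlock(stmts: List[Stmt]): Map[String, String] = {
  // Pass 1: pre-bind every `def` so later (or earlier) statements can
  // reference it; forward references require an explicit annotation.
  val preBound: Map[String, String] = stmts.collect {
    case Def(name, Some(ty), _) => name -> ty
    case Def(name, None, _)     => throw MissingTypeAnnotationError(name)
  }.toMap
  // Pass 2: walk statements in order; each `let` becomes visible only
  // from this point on, with its type inferred when not annotated.
  stmts.foldLeft(preBound) {
    case (ctx, Let(name, ty, body)) =>
      ctx + (name -> ty.getOrElse(s"typeOf($body)"))
    case (ctx, _) => ctx
  }
}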

Best Practices

  • Use let when you don’t need to reference the binding before its declaration.
  • Use def when you need forward references or are defining recursive functions.
  • Always provide type annotations for def bindings that are forward-referenced to avoid compilation errors.

By understanding these scoping rules, you can write more predictable and maintainable Chester code.

Chester Syntax Grammar (BNF-like)

IMPORTANT NOTE: This document provides only a rough description of the Chester language syntax and might be highly incorrect or not fully aligned with the current implementation. It should be used as a general guide rather than a precise specification.

This document attempts to describe the Chester language syntax using a BNF-like notation.

Note: Whitespace and comments (// ...) are generally allowed between tokens and are not explicitly shown in the grammar rules unless significant (like newlines in blocks).

Top Level

Toplevel ::= Statement* (Expr | ';')?

Statement ::= Expr (';' | Newline)

Note: In many contexts (like blocks or top-level), newlines can act as statement separators similar to semicolons. The last expression in a block acts as its return value unless followed by a semicolon.

Expressions

Expr ::= Atom | Expr Operator Expr | Operator Expr | Expr Operator

Atom ::= ID
       | Literal
       | Tuple
       | List
       | Block
       | Object
       | Application
       | MethodCall
       | SymbolLookup
       | Keyword

Tuple ::= '(' ExprList? ')'
List  ::= '[' ExprList? ']'
Block ::= '{' Statement* Expr? '}'

ExprList ::= Expr (',' Expr)* ','?

Application ::= GenericApplication | FunctionCall | BlockApplication | CombinedApplication
GenericApplication ::= Atom '[' ExprList ']'
FunctionCall ::= Atom '(' ExprList? ')'
BlockApplication ::= Atom Block
CombinedApplication ::= GenericApplication FunctionCall
                      | GenericApplication BlockApplication
                      | FunctionCall BlockApplication
                      | GenericApplication FunctionCall BlockApplication

MethodCall   ::= Atom '.' ID ('(' ExprList? ')')?  // Parentheses are optional for field access
Attribute ::= '@' ID // NOT IMPLEMENTED YET
Keyword      ::= '#' ID (GenericApplication | FunctionCall)*

Note: All identifiers are treated uniformly at the syntax level, with no distinction between different kinds of identifiers. Operator precedence and associativity are handled during semantic analysis, not in the grammar.
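
For example, a hypothetical call map[Int](xs) { x => x + 1 } parses as a CombinedApplication: the GenericApplication map[Int], followed by the FunctionCall argument list (xs), followed by a BlockApplication supplying the trailing block.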

Object Literals

Object ::= '{' ObjectClauseList? '}'

ObjectClauseList ::= ObjectClause (',' ObjectClause)* ','?

ObjectClause ::= ObjectKey ('=' | '=>') Expr

ObjectKey ::= ID
            | QualifiedName
            | STRING_LITERAL
            | SYMBOL_LITERAL

QualifiedName ::= ID ('.' ID)*
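
For example, the grammar admits an object literal such as { name = "Alice", count = 42 }, where each clause pairs an ObjectKey with an expression; keys may also be qualified names, string literals, or symbol literals.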

Literals

Literal ::= INT_LITERAL
          | RAT_LITERAL
          | STRING_LITERAL
          | SYMBOL_LITERAL

SYMBOL_LITERAL ::= "'" ID

Terminals

ID             ::= /* Starts with letter, _, or emoji; contains letters, digits, _, emoji */
Operator       ::= /* Sequence of operator symbols like +, -, *, /, =, ->, =>, :, ., etc. */
INT_LITERAL    ::= /* Integer in decimal, hex (0x...), or binary (0b...) format */
RAT_LITERAL    ::= /* Rational/float literal (e.g., 1.2, 3.14e-5) */
STRING_LITERAL ::= /* Double-quoted string "..." with escapes */
Newline        ::= /* \n or \r\n */

Punctuation and Symbols (Terminals)

'(' | ')' | '{' | '}' | '[' | ']' | ',' | ';' | ':' | '.' | '=' | '=>' | '->' | '@' | '#'

Traits and Interfaces in Chester

Chester provides two distinct mechanisms for defining abstract types: traits and interfaces. While both serve to define contracts that other types can implement, they differ in their subtyping behavior and intended use cases.

Traits: Nominal Subtyping

Traits in Chester use nominal subtyping, which means that the name of the type is significant in determining subtype relationships.

Here’s an example of a trait definition in Chester:

trait Animal {
  def makeSound: String;
}

object Dog <: Animal {
  override def makeSound: String = "Woof!";
}

In this example, Dog is explicitly declared as a subtype of Animal using the <: operator. This relationship is based on the names of the types, not just their structure.

Interfaces: Structural Subtyping

Interfaces in Chester use structural subtyping, which means that subtype relationships are determined by the structure (methods and properties) of the types, regardless of their names.

Here’s an example of an interface definition:

interface Soundmaker {
  def makeSound: String;
}

object Cat {
  def makeSound: String = "Meow!";
}

// Cat is implicitly a Soundmaker because it has a matching structure
def soundmaker: Soundmaker = Cat;

In this case, Cat is considered a subtype of Soundmaker because it has a matching makeSound method, even though it wasn’t explicitly declared as such.

Design Inspiration

This dual approach to abstract types is inspired by the Pony programming language, as described in:

Steed, G., & Drossopoulou, S. (2016). A principled design of capabilities in Pony. URL: https://www.ponylang.io/media/papers/a_prinicipled_design_of_capabilities_in_pony.pdf

Pony makes a similar distinction: its traits provide nominal subtyping, while its interfaces provide structural subtyping.

Choosing Between Traits and Interfaces

  • Use traits when you want to create a named hierarchy of types and control which types can be subtypes.
  • Use interfaces when you want to define a contract that any type can implicitly satisfy by implementing the required structure.

This design gives Chester developers flexibility in how they structure their code, allowing for both strict hierarchies and more flexible, duck-typed-style programming where appropriate.