LettreAI Documentation
AI-Task in a Nutshell

1 AI-Task Package in a Nutshell

This concise reference bridges the User Guide and the detailed Manual. It summarizes the core concepts, terminology, and workflows of the AI-Task system; keep it at hand while working through the more detailed technical documentation.

1.1 Terminology

Core Terminology
  • Production {{production}}: The overall workflow, identified by name (e.g., “nepi”)
  • Profile profile_{{id}}: A specific source data instance, identified by profile_{{id}} (e.g., “profile_001”)
  • Performance {{production}}_{{id}}: A specific production output instance, identified by {{production}}_{{id}} (e.g., “nepi_001”)
  • Genre {{genre}}: A category of documents within a profile or performance (e.g., audio, transcription, slide, report)
  • Document: An individual file within a genre; naming can be customized to project requirements
  • Partitur {{partitur_name}}.ai or {{partitur_name}}.yml: The workflow description file that defines the sequence of operations
Workflow Variables
  • All workflows use standardized variables denoted by double curly braces (e.g., {{production}}, {{id}})
  • These variables are canonical in all partitur descriptions
  • The {{id}} variable is required: it is the index that uniquely identifies profiles and performances
  • Standard workflow variables:
    • {{production}}: The name of the production (e.g., “nepi”)
    • {{id}}: The numerical identifier of a profile/performance (e.g., “001”)
    • {{genre}}: The category of documents being processed (e.g., “audio”, “transcription”)
    • {{no}}: The sequential number of a document within its genre (e.g., “01”)
  • Variables can be referenced from any part of the system, ensuring consistency throughout the pipeline
# Example variable usage in workflow partitur
input_path: "/path/to/project/{{production}}/profile_{{id}}/audio/document_{{id}}.m4a"
output_path: "/path/to/project/{{production}}/{{production}}_{{id}}/transcription/{{production}}_{{id}}_transcription_01.txt"
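To illustrate how the terminology above fits together, here is a sketch for the production “nepi” with id “001”. The key names in this fragment are illustrative only, not a required schema; only the naming conventions come from the definitions above.

```yaml
# Hypothetical summary of the terminology for production "nepi", id "001"
# (key names are illustrative, not part of the partitur schema)
production: nepi                # {{production}}: the overall workflow
profile: profile_001            # source data instance, profile_{{id}}
performance: nepi_001           # output instance, {{production}}_{{id}}
genres:
  - audio                       # {{genre}} of the source document
  - transcription               # {{genre}} of the generated output
document: document_001.m4a      # individual file within the audio genre
partitur: partitur/nepi.yml     # workflow description file
```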
Directory Structure
  • The standard project structure follows this pattern:
    • project_root/: Base directory containing the project
      • partitur/: Contains all partitur definition files
      • instruction/: Contains templates and prompts
      • function/: Contains custom functions
      • profile/: Contains all profile data organized by ID
      • diagnostic/: Contains logs and error tracking

Example:

project_root/
├── partitur/                         # Contains all partitur files
│   ├── workflow_name.yml             # Standard partitur file
│   └── specialized_workflow.ai       # Specialized partitur file
├── instruction/                      # Contains all template files
│   └── transcribe.j2                 # Template for transcription
├── function/                         # Contains custom functions
│   └── process_profile_document.py   # Custom function for document processing
├── profile/                          # Contains all profile data
│   ├── profile_001/                  # Profile instance with ID 001
│   │   ├── audio/
│   │   │   └── document_001.m4a      # Source audio file
│   │   ├── objektiv/                 # Output from objektiv generation
│   │   │   └── INQUA2_001_objektiv_01.txt
│   │   └── profile_document/         # Generated profile documents
│   │       └── INQUA2_001_profile.docx
│   └── profile_002/                  # Another profile instance
└── diagnostic/                       # Contains logs and diagnostics
    └── learning/                     # Contains learning artifacts
        ├── issue_analysis.yaml       # Issue tracking
        └── issue_resolution_guide.qmd # Guide for resolving issues

1.2 Workflow/Pipelines of AI Tasks

AI-Task workflows define the sequence of operations performed on source data to generate output. These workflows are specified in YAML files with a .yml or .ai suffix, called “partiturs” (from the musical term for a complete score).

Partitur Components

A typical partitur file follows this structure:

name: workflow_name
description: "Description of the workflow"

# Pipeline definition
pipe:
  - name: step_name
    type: llm | function
    # Type-specific parameters
    
    # For LLM tasks
    model: model_name
    tmpl: "Template for LLM task"
    input: input_source
    output: output_destination
    
    # For function tasks
    function: function_name
    params:
      # Parameters for the function
      param1: value1
      param2: value2

# Settings
settings:
  function_dir: "function"  # Directory for custom functions
  continue_on_error: false  # Whether to continue on error
  parallel: false           # Whether to run steps in parallel
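
To make the skeleton concrete, here is a hedged sketch of a single-step transcription partitur. The workflow name, model name, and output path are hypothetical (the template file transcribe.j2 appears in the directory structure above); adapt them to your project.

```yaml
name: transcribe_audio
description: "Hypothetical partitur: transcribe a profile's source audio"

# Pipeline definition: one LLM step using the transcribe.j2 template
pipe:
  - name: transcribe
    type: llm
    model: claude                  # placeholder model name
    tmpl: "instruction/transcribe.j2"
    input: "profile/profile_{{id}}/audio/document_{{id}}.m4a"
    output: "profile/profile_{{id}}/transcription/{{production}}_{{id}}_transcription_01.txt"

# Settings
settings:
  function_dir: "function"
  continue_on_error: false
  parallel: false
```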

1.3 Executing AI Workflows

Command-Line Interface Usage (ai-partitur)

CRITICAL: Working Directory Requirements

All ai-partitur commands MUST be executed from the project root directory to ensure proper path resolution.

# CORRECT: Run from project root directory
cd /path/to/project_root
ai-partitur partitur_name profile_id

# INCORRECT: Will cause path resolution errors
cd /path/to/project_root/partitur
ai-partitur partitur_name profile_id

Recommended Usage (Without --file Option)

The preferred way to use ai-partitur is without the --file option:

# Standard command format
ai-partitur <partitur_name> <profile_id>

# Examples
ai-partitur inqua_objektiv 30727          # Generate objektiv data
ai-partitur profile_document 34611        # Generate profile document

This approach automatically locates partitur files in the standard location (project_root/partitur/).

File-Specific Usage (Only When Necessary)

Use the --file option only for specialized partitur files not in the standard location:

# Only use when necessary
ai-partitur --file partitur/specialized_workflow.ai workflow_name profile_id

Processing Multiple Profiles

For batch processing:

# Process multiple profiles with a bash loop
for profile_id in 30727 34611 32101 31662 31771; do
    ai-partitur profile_document $profile_id
done
Troubleshooting Common Issues

If you encounter “File not found” errors:

  1. Verify you’re running from the project root directory
  2. Check that the partitur file exists in the expected location
  3. Ensure all paths in the partitur file are relative to the project root
  4. For function-based partitur files, verify the function_dir setting is correctly specified

If function execution fails:

  1. Try running the function directly from Python to debug
  2. Check that the function exists in the specified directory
  3. Verify all required files exist

Fallback Method

If ai-partitur consistently has path resolution issues, use the direct Python approach:

from function.process_profile_document import process_profile_document
process_profile_document({
    'profile_id': '30727',
    'template_file': 'instruction/profile_template.docx',
    'output_file': 'profile/profile_30727/profile_document/INQUA2_30727_profile.docx',
    'font_name': 'Calibri',
    'font_size': 11,
    'line_spacing': 1.5
})

1.4 Built-in Functions

AI-Task includes several built-in functions that can be used in workflows:

1.4.1 File Operations

  • aisource: Loads content from a file into the pipeline
  • airesult: Saves pipeline content to a file
  • convert_to_docx: Converts a text file to DOCX format
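
A hedged sketch of how these file-operation functions might appear as steps in a partitur; the step names, paths, and the file parameter name are assumptions, so consult the Functions reference for the exact signatures:

```yaml
pipe:
  - name: load_source             # hypothetical step using aisource
    type: function
    function: aisource
    params:
      file: "profile/profile_{{id}}/transcription/{{production}}_{{id}}_transcription_01.txt"

  - name: save_result             # hypothetical step using airesult
    type: function
    function: airesult
    params:
      file: "profile/profile_{{id}}/report/{{production}}_{{id}}_report_01.txt"
```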

1.4.2 Content Processing

  • content: Extracts or processes text content
  • count_tags: Counts occurrences of specific tags
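
Similarly, a sketch of a count_tags step; the tags parameter name is an assumption, so check the Functions API for the real signature:

```yaml
pipe:
  - name: count_section_tags      # hypothetical step using count_tags
    type: function
    function: count_tags
    params:
      tags: ["<section>", "<quote>"]   # tags to count (assumed parameter name)
```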

1.4.3 Custom Functions

You can add custom functions in the function/ directory of your project:

# function/process_profile_document.py
def process_profile_document(params):
    """Process a profile document with styling parameters."""
    # Implementation
    return True

Then reference them in your partitur:

pipe:
  - name: generate_profile_document
    type: function
    function: process_profile_document
    params:
      profile_id: "${profile_id}"
      template_file: "instruction/profile_template.docx"
      output_file: "profile/profile_${profile_id}/profile_document/INQUA2_${profile_id}_profile.docx"

1.5 Best Practices

  1. Working Directory: Always execute ai-partitur commands from the project root directory.

  2. Prefer Standard Command: Use the syntax ai-partitur partitur_name profile_id without the --file option whenever possible.

  3. Path Configuration: Ensure all paths in partitur files are relative to the project root directory.

  4. Function Settings: For function-based partitur files, always include the function_dir: "function" setting.

  5. Issue Resolution: Document any issues in the diagnostic/learning/raw_issues/ directory and track them in diagnostic/learning/issue_analysis.yaml.

  6. Consistent Variables: Use the standardized variables ({production}, {id}, {genre}, {no}) consistently throughout your workflows.
