# AI-Task Package in a Nutshell
This concise reference serves as a bridge between the User Guide and the detailed Manual. It provides a comprehensive overview of the core concepts, terminology, and workflows within the AI-Task system. Use this section while working with the more detailed technical documentation.
## 1. Terminology
- **Production** (`{{production}}`): The overall workflow, identified by its name (e.g., "nepi")
- **Profile** (`profile_{{id}}`): A specific source data instance, identified by `profile_{{id}}` (e.g., "profile_001")
- **Performance** (`{{production}}_{{id}}`): A specific production output instance, identified by `{{production}}_{{id}}` (e.g., "nepi_001")
- **Genre** (`{{genre}}`): Category of documents within a profile or performance (e.g., audio, transcription, slide, report)
- **Document**: Individual file within a genre, with naming that can be customized based on project requirements
- **Partitur** (`{partitur_name}.ai` or `{partitur_name}.yml`): The workflow description file that defines the sequence of operations
- All workflows use standardized variables denoted by double curly braces (e.g., `{{production}}`, `{{id}}`)
- These variables are canonical in all partitur descriptions
- The `{{id}}` variable is obligatory: it is the essential index that uniquely identifies profiles and performances
- Standard workflow variables:
  - `{{production}}`: The name of the production (e.g., "nepi")
  - `{{id}}`: The numerical identifier of a profile/performance (e.g., "001")
  - `{{genre}}`: The category of documents being processed (e.g., "audio", "transcription")
  - `{{no}}`: The sequential number of a document within its genre (e.g., "01")
- Variables can be referenced from any part of the system, ensuring consistency throughout the pipeline
```yaml
# Example variable usage in a workflow partitur
input_path: "/path/to/project/{{production}}/profile_{{id}}/audio/document_{{id}}.m4a"
output_path: "/path/to/project/{{production}}/{{production}}_{{id}}/transcription/{{production}}_{{id}}_transcription_01.txt"
```

The standard project structure follows this pattern:

- `project_root/`: Base directory containing the project
- `partitur/`: Contains all partitur definition files
- `instruction/`: Contains templates and prompts
- `function/`: Contains custom functions
- `profile/`: Contains all profile data organized by ID
- `diagnostic/`: Contains logs and error tracking
Example:

```
project_root/
├── partitur/                        # Contains all partitur files
│   ├── workflow_name.yml            # Standard partitur file
│   └── specialized_workflow.ai      # Specialized partitur file
├── instruction/                     # Contains all template files
│   └── transcribe.j2                # Template for transcription
├── function/                        # Contains custom functions
│   └── process_profile_document.py  # Custom function for document processing
├── profile/                         # Contains all profile data
│   ├── profile_001/                 # Profile instance with ID 001
│   │   ├── audio/
│   │   │   └── document_001.m4a     # Source audio file
│   │   ├── objektiv/                # Output from objektiv generation
│   │   │   └── INQUA2_001_objektiv_01.txt
│   │   └── profile_document/        # Generated profile documents
│   │       └── INQUA2_001_profile.docx
│   └── profile_002/                 # Another profile instance
├── diagnostic/                      # Contains logs and diagnostics
└── learning/                        # Contains learning artifacts
    ├── issue_analysis.yaml          # Issue tracking
    └── issue_resolution_guide.qmd   # Guide for resolving issues
```
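To illustrate how these placeholders resolve to the paths shown above, here is a minimal sketch of `{{variable}}` substitution. The `expand` helper is purely illustrative and not part of the AI-Task API; the package's actual resolver may behave differently:

```python
# Minimal sketch of {{variable}} substitution (assumed semantics;
# the expand() helper is illustrative, not AI-Task code).
def expand(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with its value."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template

path = expand(
    "/path/to/project/{{production}}/profile_{{id}}/audio/document_{{id}}.m4a",
    {"production": "nepi", "id": "001"},
)
print(path)  # /path/to/project/nepi/profile_001/audio/document_001.m4a
```

The same substitution applies uniformly to every path in a partitur, which is why keeping `{{production}}` and `{{id}}` consistent across steps keeps inputs and outputs aligned.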
## 2. Workflows/Pipelines of AI Tasks

AI-Task workflows define the sequence of operations performed on source data to generate output. These workflows are specified in YAML files with a `.yml` or `.ai` suffix, called "partiturs" (from the musical term for a complete score).
A typical partitur file follows this structure:
```yaml
name: workflow_name
description: "Description of the workflow"

# Pipeline definition
pipe:
  - name: step_name
    type: llm | function
    # Type-specific parameters

    # For LLM tasks
    model: model_name
    tmpl: "Template for LLM task"
    input: input_source
    output: output_destination

    # For function tasks
    function: function_name
    params:
      # Parameters for the function
      param1: value1
      param2: value2

# Settings
settings:
  function_dir: "function"   # Directory for custom functions
  continue_on_error: false   # Whether to continue on error
  parallel: false            # Whether to run steps in parallel
```

## 3. Executing AI Workflows
**CRITICAL: Working Directory Requirements**

All `ai-partitur` commands MUST be executed from the project root directory to ensure proper path resolution.
```bash
# CORRECT: Run from project root directory
cd /path/to/project_root
ai-partitur partitur_name profile_id

# INCORRECT: Will cause path resolution errors
cd /path/to/project_root/partitur
ai-partitur partitur_name profile_id
```

**Recommended Usage (Without `--file` Option)**
The preferred way to use `ai-partitur` is without the `--file` option:
```bash
# Standard command format
ai-partitur <partitur_name> <profile_id>

# Examples
ai-partitur inqua_objektiv 30727     # Generate objektiv data
ai-partitur profile_document 34611   # Generate profile document
```

This approach automatically locates partitur files in the standard location (`project_root/partitur/`).
**File-Specific Usage (Only When Necessary)**

Use the `--file` option only for specialized partitur files not in the standard location:

```bash
# Only use when necessary
ai-partitur --file partitur/specialized_workflow.ai workflow_name profile_id
```

**Processing Multiple Profiles**
For batch processing:
```bash
# Process multiple profiles with a bash loop
for profile_id in 30727 34611 32101 31662 31771; do
    ai-partitur profile_document $profile_id
done
```

If you encounter "File not found" errors:

- Verify you're running from the project root directory
- Check that the partitur file exists in the expected location
- Ensure all paths in the partitur file are relative to the project root
- For function-based partitur files, verify the `function_dir` setting is correctly specified
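These checks can also be scripted before invoking `ai-partitur`. The sketch below is illustrative only: the `check_project_layout` helper and the specific paths it inspects are assumptions based on the standard layout described above, not part of the package:

```python
# Sanity-check the working directory before running ai-partitur.
# Illustrative sketch: function name and checked paths are assumptions.
from pathlib import Path

def check_project_layout(root: str, partitur_name: str) -> list[str]:
    """Return a list of layout problems (empty list means the layout looks OK)."""
    problems = []
    root_path = Path(root)
    partitur_dir = root_path / "partitur"
    if not partitur_dir.is_dir():
        problems.append("missing partitur/ directory (are you in the project root?)")
    elif not any(
        (partitur_dir / f"{partitur_name}{suffix}").is_file()
        for suffix in (".yml", ".ai")
    ):
        problems.append(f"no {partitur_name}.yml or {partitur_name}.ai in partitur/")
    if not (root_path / "function").is_dir():
        problems.append("missing function/ directory (needed for function-based partiturs)")
    return problems

for problem in check_project_layout(".", "profile_document"):
    print("WARNING:", problem)
```

Running this from the project root before a batch run catches the most common path-resolution mistakes early.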
If function execution fails:

- Try running the function directly from Python to debug
- Check that the function exists in the specified directory
- Verify all required files exist
**Fallback Method**

If `ai-partitur` consistently has path resolution issues, use the direct Python approach:
```python
from function.process_profile_document import process_profile_document

process_profile_document({
    'profile_id': '30727',
    'template_file': 'instruction/profile_template.docx',
    'output_file': 'profile/profile_30727/profile_document/INQUA2_30727_profile.docx',
    'font_name': 'Calibri',
    'font_size': 11,
    'line_spacing': 1.5,
})
```

## 4. Built-in Functions
AI-Task includes several built-in functions that can be used in workflows:
### File Operations

- `aisource`: Loads content from a file into the pipeline
- `airesult`: Saves pipeline content to a file
- `convert_to_docx`: Converts a text file to DOCX format
### Content Processing

- `content`: Extracts or processes text content
- `count_tags`: Counts occurrences of specific tags
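The Manual defines the exact behavior of these built-ins; as an intuition for tag counting, a sketch along the lines of `count_tags` might look like this (the implementation, signature, and tag format shown here are assumptions, not the package's actual code):

```python
# Illustrative sketch of tag counting (assumed semantics of count_tags;
# the built-in's real signature and behavior are defined by AI-Task).
import re

def count_tags(text: str, tags: list[str]) -> dict[str, int]:
    """Count occurrences of each <tag>...</tag> pair in the text."""
    return {
        tag: len(re.findall(rf"<{re.escape(tag)}>.*?</{re.escape(tag)}>", text, re.DOTALL))
        for tag in tags
    }

sample = "<note>a</note> <note>b</note> <todo>c</todo>"
print(count_tags(sample, ["note", "todo"]))  # {'note': 2, 'todo': 1}
```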
### Custom Functions

You can add custom functions in the `function/` directory of your project:
```python
# function/process_profile_document.py

def process_profile_document(params):
    """Process a profile document with styling parameters."""
    # Implementation
    return True
```

Then reference them in your partitur:
```yaml
pipe:
  - name: generate_profile_document
    type: function
    function: process_profile_document
    params:
      profile_id: "${profile_id}"
      template_file: "instruction/profile_template.docx"
      output_file: "profile/profile_${profile_id}/profile_document/INQUA2_${profile_id}_profile.docx"
```

## 5. Best Practices
1. **Working Directory**: Always execute `ai-partitur` commands from the project root directory.
2. **Prefer the Standard Command**: Use `ai-partitur partitur_name profile_id` without the `--file` option whenever possible.
3. **Path Configuration**: Ensure all paths in partitur files are relative to the project root directory.
4. **Function Settings**: For function-based partitur files, always include the `function_dir: "function"` setting.
5. **Issue Resolution**: Document any issues in the `diagnostic/learning/raw_issues/` directory and track them in `diagnostic/learning/issue_analysis.yaml`.
6. **Consistent Variables**: Use the standardized variables (`{{production}}`, `{{id}}`, `{{genre}}`, `{{no}}`) consistently throughout your workflows.