User Guide
A comprehensive introduction to AI Task for non-technical users
Welcome to AI Task
AI Task is a powerful yet user-friendly tool designed to help you harness the capabilities of AI models without requiring programming knowledge. This guide is specifically written for users without technical background or programming experience, providing a clear path to using AI Task effectively.
This User Guide is completely independent from the technical Manual. If you’re looking to use AI Task without needing to understand its technical implementation, you’re in the right place!
What is AI Task?
AI Task allows you to create and run AI-powered workflows through simple text files. You can process text, analyze content, generate summaries, and much more—all without writing code. The system uses instructions (previously called templates) to tell AI models exactly what you want them to do.
AI Task Organization Hierarchy
AI Task uses a clear organizational structure to manage your AI workflows:
- Productions: The highest-level containers that organize related workflows (like a theater production or music album)
- Pipelines: Sequences of processing steps that accomplish a specific workflow (like scenes in a play)
- Tasks: Individual operations performed by AI models or functions (like an actor’s performance)
Everything in AI Task is ultimately processed through a pipeline, even when you’re running a single task. A single task is simply a pipeline with one step.
Productions: Organizing Your AI Workflows
What is a Production?
Productions are the highest level of organization in AI Task. Think of a production like a theater play or a music album. While each performance or song is unique, they all belong to the same overall production.
In AI Task, a “production” is a collection of related pipelines that follow the same pattern but might process different inputs. Each execution is called a “performance” or “instance” of the production.
For example, if you’re analyzing customer profiles, your overall process would be a “Profiles” production, with each individual profile analysis being a separate instance within that production.
Production Directory Structure
Productions use a standardized directory structure to organize all their data:
```
project-home/
├── productions/
│   ├── profiles/                      # Production name
│   │   ├── profiles_00001/            # Production instance
│   │   │   ├── input/                 # Processing step
│   │   │   │   └── document_00001_raw.txt
│   │   │   ├── analysis/              # Processing step
│   │   │   │   └── document_00001_analyzed.json
│   │   │   └── summary/               # Processing step
│   │   │       └── document_00001_summary.md
│   │   ├── profiles_00002/            # Another instance
│   │   │   ├── input/
│   │   │   ├── analysis/
│   │   │   └── summary/
│   │   └── settings/                  # Production settings
│   │       ├── production_config.yml
│   │       └── production_db.csv
│   └── reports/                       # Another production
│       ├── reports_00001/
│       ├── reports_00002/
│       └── settings/
```
This structure includes:
- Project Home: The main directory for all your AI Task projects
- Productions: A directory containing all productions
- Production Directory: Named after the production (e.g., “profiles”)
- Instance Directories: Each individual instance has its own directory with a numeric ID
- Step Directories: Within each instance, steps organize different stages of processing
- Document Files: Follow a consistent pattern like “document_[ID]_[TYPE].[SUFFIX]”
- Settings Directory: Contains configuration and tracking information
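The naming conventions above can be sketched in a few lines of Python. This is purely an illustration of the patterns; AI Task generates these names itself, and the helper functions here are hypothetical:

```python
# Hypothetical sketch of the naming patterns described above;
# AI Task builds these names internally, this only illustrates the format.

def instance_dir(production: str, instance_id: int) -> str:
    """Build an instance directory name like 'profiles_00001'."""
    return f"{production}_{instance_id:05d}"

def document_file(doc_id: int, doc_type: str, suffix: str) -> str:
    """Build a document file name like 'document_00001_raw.txt'."""
    return f"document_{doc_id:05d}_{doc_type}.{suffix}"

print(instance_dir("profiles", 1))     # profiles_00001
print(document_file(1, "raw", "txt"))  # document_00001_raw.txt
```

The zero-padded five-digit ID keeps instances sorting correctly in file listings, which is why the directory structure above shows names like profiles_00001 and profiles_00002.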
Why Use Productions?
Productions are incredibly helpful when you need to:
- Organize related work: Keep all pipelines that serve a similar purpose together
- Process many similar items: Like analyzing hundreds of customer reviews
- Keep everything organized: Each item gets its own folder with a unique ID
- Track progress: The settings folder keeps records of everything
- Compare results: Easily find and compare outputs from different runs
- Find specific documents: Search for all results of a certain type across instances
Creating Your First Production
To set up a production, you’ll need to:

1. Create a configuration file that defines:
   - The production name
   - Directory structure
   - Processing steps
   - Document naming patterns

2. Initialize the production structure:

   ```
   aitask --production-config profiles.yml --setup
   ```

3. Run pipelines within the production:

   ```
   aitask analyze_profile.ai "Customer data..." --production-config profiles.yml --production-new --document-type analyzed --step analysis
   ```
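Based on the production block shown later in this guide, a minimal profiles.yml might look roughly like this. Only name, home, and id_format appear elsewhere in this guide; treat the exact set of supported keys as something to check against your AI Task version:

```yaml
# profiles.yml -- a sketch of a production configuration.
# Only name, home, and id_format are confirmed elsewhere in this guide;
# other settings depend on your AI Task version.
name: "profiles"
home: "./productions"
id_format: "{name}_{id:05d}"
```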
Working with Production Instances
Each instance (like “profiles_00001”) contains a complete set of inputs and outputs for a single run. You can:
- Create new instances: `--production-new`
- Use existing instances: `--production-instance 00001`
- List all instances: `--production-list`
- Search for specific documents by type: `--find-document-type summary`
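For example, these flags combine with the `--production-config` option shown earlier. The invocations below are hypothetical sketches; adjust the pipeline file and production name to your own setup:

```
aitask analyze_profile.ai "..." --production-config profiles.yml --production-new
aitask analyze_profile.ai "..." --production-config profiles.yml --production-instance 00001
aitask --production-config profiles.yml --production-list
aitask --production-config profiles.yml --find-document-type summary
```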
Document Organization
Documents in a production follow a consistent naming pattern:
```
document_00001_raw.txt
         │     │   │
         │     │   └── File extension
         │     └────── Document type
         └──────────── Instance ID
```
This makes it easy to:
- Find all documents from a specific instance
- Find all documents of a certain type
- Organize related documents in meaningful steps
Pipelines: Processing Your Data
What is a Pipeline?
A pipeline is a sequence of steps that process your data in order. Each step takes input, performs an operation, and passes its output to the next step.
Remember: In AI Task, even a single task is handled as a pipeline with just one step. This unified approach makes it easy to extend your workflows later.
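Conceptually, a pipeline is just function composition: each step transforms the data and hands the result to the next step. This toy Python sketch illustrates the idea only; it is not AI Task’s actual implementation:

```python
# Toy illustration of the pipeline concept: each step takes the data,
# transforms it, and passes its output to the next step.

def run_pipeline(steps, data):
    for step in steps:
        data = step(data)
    return data

# A "pipeline" of two trivial text-processing steps:
steps = [str.strip, str.upper]
print(run_pipeline(steps, "  hello  "))  # HELLO

# A single task is simply a pipeline with one step:
print(run_pipeline([str.upper], "hi"))   # HI
```

The second call shows why treating a single task as a one-step pipeline costs nothing: the same loop handles both cases, which is exactly what makes AI Task workflows easy to extend later.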
Pipeline Structure
Pipelines are defined in .ai files using a simple structure:
```yaml
name: "My Pipeline"
description: "Analyzes and summarizes text"
pipe:
  - type: llm          # First step (task)
    name: "analyze"
    tmpl: |
      Analyze this text: {{pipe.pipein-text}}
    model: gemini-2.0-flash
  - type: llm          # Second step (task)
    name: "summarize"
    tmpl: |
      Summarize this analysis: {{pipe.pipeout-text}}
    model: gemini-2.0-flash
```

This pipeline has two tasks: first it analyzes text, then it summarizes the analysis.
Tasks: Individual Processing Steps
What is a Task?
A task is a single operation within a pipeline. It can be:
- An AI model processing (type: llm)
- A function that manipulates data (type: function)
Tasks always take input and produce output. They can use instructions to tell AI models what to do.
Simple Task Example
Here’s a simple task that summarizes text:
```yaml
pipe:
  - type: llm
    tmpl: |
      Summarize the following text in a concise manner:
      {{pipe.pipein-text}}
    model: gemini-2.0-flash
```

Even though this is just one task, AI Task treats it as a pipeline with a single step.
Instructions: Guiding the AI
Instructions (formerly templates) are the heart of AI Task. They provide the AI model with clear directions on what you want it to do.
What Are Instructions?
Instructions are text patterns that:
- Tell the AI what task to perform
- Include placeholders for your input data
- Can adapt based on variables you provide
Simple Instructions
The simplest instructions are written directly in your task:
```yaml
pipe:
  - type: llm
    tmpl: |
      Analyze the sentiment of this text:
      {{pipe.pipein-text}}
      Rate it as positive, negative, or neutral.
    model: gemini-2.0-flash
```

Reusable Instructions
For tasks you perform frequently, you can create reusable instruction files (with .j2 extension):
```jinja
{# translate.j2 #}
Translate the following text from {{ source_language|default('English') }} to {{ target_language }}:
{{ pipe.pipein-text }}
```
Then reference them in your task:
```yaml
pipe:
  - type: llm
    tmpl_file: translate.j2
    vars:
      target_language: "French"
    model: gemini-2.0-flash
```

Instruction Variables
Instructions become powerful when you add variables:
```jinja
Summarize the following text in {{ style|default('concise') }} style.
{% if focus %}Focus on aspects related to {{ focus }}.{% endif %}
{% if max_words %}Keep the summary under {{ max_words }} words.{% endif %}
{{ pipe.pipein-text }}
```
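To see what the defaults and conditionals do, here is the same logic mirrored in plain Python. This is an illustration only; AI Task renders the Jinja template itself, and build_instruction is a made-up name:

```python
# Plain-Python mirror of the Jinja logic above, for illustration only:
# style falls back to 'concise', and the focus/max_words lines appear
# only when those variables are provided.

def build_instruction(text, style=None, focus=None, max_words=None):
    lines = [f"Summarize the following text in {style or 'concise'} style."]
    if focus:
        lines.append(f"Focus on aspects related to {focus}.")
    if max_words:
        lines.append(f"Keep the summary under {max_words} words.")
    lines.append(text)
    return "\n".join(lines)

print(build_instruction("Some text."))
# Summarize the following text in concise style.
# Some text.
```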
You can then customize the behavior:
```yaml
pipe:
  - type: llm
    tmpl_file: summarize.j2
    vars:
      style: "detailed"
      focus: "environmental impact"
      max_words: 200
    model: gemini-2.0-flash
```

File Organization
AI Task uses a simple file structure:
- Task files (`.ai`): Define your pipelines (even for single tasks)
- Instruction files (`.j2`): Reusable instructions for common tasks
- Input files: Text or data you want to process
- Output files: Where results are saved
- Production directories: Where all related instances are stored
Pipeline Files (.ai)
Pipeline files use YAML format (don’t worry, it’s very simple) and have the .ai extension. They define:
- The pipeline structure (sequence of tasks)
- What instructions to use
- Which AI model to use for each step
- Any special settings or variables
- Optional production configuration
Example:
```yaml
name: "Translation Pipeline"
description: "Translates text from English to German"
production:
  name: "translations"
  home: "./productions"
  id_format: "{name}_{id:05d}"
pipe:
  - type: llm
    tmpl_file: translate.j2
    vars:
      target_language: "German"
    model: gemini-2.0-flash
```

Creating Comprehensive Instructions
For more complex tasks, you can create detailed instructions that guide the AI precisely:
Example: Content Analysis Instruction
```jinja
{# analysis.j2 #}
Analyze the following {{ content_type|default('text') }} and provide insights on:
1. Main themes and topics
2. Tone and sentiment
3. Key arguments or points
{% if identify_audience %}4. Target audience{% endif %}
{% if extract_quotes %}5. Notable quotes or statements{% endif %}
{{ pipe.pipein-text }}
Format your analysis as {{ format|default('paragraphs') }}.
```
Example: Data Extraction Instruction
```jinja
{# extract.j2 #}
From the following text, extract all {{ entity_type|default('dates') }}.
{{ pipe.pipein-text }}
Format the extracted information as a {{ format|default('list') }}.
```
Building Complex Workflows
AI Task allows you to chain multiple steps together in a pipeline:
```yaml
# analyze_and_summarize.ai
pipe:
  - type: llm
    name: "analysis"
    tmpl_file: analysis.j2
    vars:
      content_type: "article"
      identify_audience: true
    model: gemini-2.0-flash
  - type: llm
    name: "summary"
    tmpl: |
      Based on this analysis:
      {{pipe.pipeout-text}}
      Create a concise executive summary in bullet points.
    model: gemini-2.0-flash
```

This pipeline first analyzes the content, then takes that analysis and creates a summary from it.
Practical Examples
Let’s look at some real-world examples of how you might use AI Task:
Example 1: Meeting Notes Processor
```yaml
# meeting_notes.ai
pipe:
  - type: llm
    tmpl: |
      From these meeting notes:
      {{pipe.pipein-text}}
      Extract:
      1. Key decisions made
      2. Action items with assigned persons
      3. Important deadlines
      4. Topics for next meeting
      Format as a structured list with clear headings.
    model: gemini-2.0-flash
```

Example 2: Research Assistant
```yaml
# research.ai
pipe:
  - type: llm
    tmpl: |
      Analyze this research abstract:
      {{pipe.pipein-text}}
      Provide:
      1. Main research question
      2. Methodology used
      3. Key findings
      4. Limitations mentioned
      5. Potential applications
    model: gemini-2.0-flash
```

Example 3: Content Repurposing
```yaml
# repurpose.ai
pipe:
  - type: llm
    tmpl: |
      Transform this blog post:
      {{pipe.pipein-text}}
      Into a script for a 2-minute social media video.
      Include:
      - Attention-grabbing opening
      - 3-4 key points
      - Call to action
    model: gemini-2.0-flash
```

Tips for Creating Effective Instructions
- Be specific: Clearly state what you want the AI to do
- Provide context: Help the AI understand the purpose of the task
- Structure the output: Specify how you want the results formatted
- Use variables: Make your instructions flexible and reusable
- Start simple: Begin with basic instructions and add complexity as needed
Troubleshooting
If you’re not getting the results you expect:
- Check your instruction: Is it clear and specific?
- Try a different model: Some models perform better for certain tasks
- Break complex tasks: Split complicated tasks into smaller steps
- Add examples: Include examples of what you expect in your instruction
Advanced Features
As you become more comfortable with AI Task, you can explore:
- Custom functions: Process data before or after AI processing
- File handling: Process multiple files in batch
- Variable files: Store settings in separate JSON files
- Output formatting: Save results in different formats
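As one illustration of variable files, the vars from the earlier summarize.j2 example could live in a separate JSON file. The file name and the exact mechanism for loading it are assumptions; check the options your AI Task version supports:

```json
{
  "style": "detailed",
  "focus": "environmental impact",
  "max_words": 200
}
```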
Conclusion
AI Task empowers you to leverage AI capabilities without programming knowledge. By understanding the organization hierarchy (Productions → Pipelines → Tasks) and creating clear instructions, you can automate content analysis, generation, and transformation for a wide range of applications.
Remember that the quality of your results depends largely on how well you craft your instructions and organize your workflows. Take time to set up proper productions for related work, and don’t hesitate to experiment with different approaches to find what works best for your specific needs.