OmniLink Documentation

Full documentation

Single-page reference that mirrors every standalone section.

1. Introduction

Video placeholder for 1. Introduction.
Embed a YouTube iframe here when you're ready by replacing this block or attaching a video element.

Think of OmniLink as a universal translator between humans and machines. It turns your words—whether spoken, typed, or sent through an API—into structured, typed commands that machines can reliably understand. The result is a single, consistent control surface for robotics, IoT, industrial systems, smart homes, or any machine in general.

Core Features

  • Deep LLM understanding: Commands are generated based not only on what you say, but on what you mean and the context of your words.
  • Local & Remote communication: Work offline on a local broker or connect via HTTP for remote control. Keep things private when needed, or reach your systems from anywhere in the world.
  • Context & feedback: The agent always has a sense of environment state, and confirms whether commands executed successfully.
  • Cross-device support: Configure once and connect with the same agent from any device.

OmniLink powers everything from robot fleets and smart buildings to educational kits, dashboards, and service applications—any system that benefits from natural, human-friendly commands.

2. Quick Overview

Video placeholder for 2. Quick Overview.
Swap in your YouTube embed to walk through the product experience visually.

After registration, OmniLink opens in edit mode so you can set up content, integrations, and appearance. Use the side menu to navigate between configuration, billing, and support tools while you tailor the assistant.

The dashboard keeps the essentials in easy reach: the left rail holds navigation for settings, appearance, and deployment tools, while the main canvas previews the active assistant experience. A compact activity drawer along the right edge shows conversation transcripts, emitted commands, and connection health so you can validate behaviour without leaving the page.

Under the hood, the same workspace connects directly to the omnilink-lib Python package. The library hosts the OmniLink engine, MQTT bridge, and TCP adapters so your agent can speak to industrial controllers, smart home hubs, or any custom system. Install it with pip install omnilink-lib, point it at your deployment, and the dashboard immediately reflects the intents, feedback, and telemetry that your Python processes publish.

2.1 Configuration

The Configuration window centralizes every control for voice, prompting, and context so you can shape the assistant before going live. Each setting saves directly to the active agent profile.

  • Spoken Language: Choose the language your agent listens and speaks in. English is available today, with additional languages arriving soon.
  • Voice: Pick from the available text-to-speech voices. The selector mirrors what you hear in the deployment view, making it easy to preview tone and pacing.
  • AI Engine: Select the model powering the assistant. Use this to swap between OmniLink’s bundled models or your own connected engines without leaving the page.
  • Main Task: Provide a concise description of what the agent should accomplish. This prompt forms the foundation for every response.
  • Available Commands: List the structured command templates the agent is allowed to emit. Each template appears on its own line so operators can audit them at a glance.
  • User & Agent Identity: Configure User Name, Agent Name, and Agent Personality fields to tune the relationship between the operator and the assistant.
  • Custom Instructions: Add supplemental guidance or operating procedures. These instructions sit alongside the main task to ensure nuanced behaviour.
  • Allow code responses: Toggle whether the assistant can output responses that begin with Code:. Enable this when you want executable snippets in the transcript; disable it to keep replies conversational.
  • Agent Knowledge: Upload reference documents—PDFs, Word files, spreadsheets, code, and more—to enrich the agent’s prompt context. After uploading you’ll see the active filename, import timestamp, and a text preview with the option to remove it.
  • Short-term Memory Limit: Set how many prior turns the model remembers when crafting new responses. Lower values keep prompts lean; higher values preserve context for complex tasks.
  • Time to wait after silence: Control how long the agent waits before ending a listening session once speech stops. Adjust this to match your environment’s pacing.
  • Status & actions: Inline success or error banners confirm when data saves, and the Save/Close buttons let you commit changes or exit without modifying the active configuration.

2.2 Appearance

Tune the on-screen experience with a live picker for the 3D sphere color and a toggle that shows or hides the agent name.

2.3 Modes

  • Pick your agent's input and output: voice or text.
  • Turn on the interrupt feature to cut off the agent's speech just by talking.
  • Switch between click-to-talk, continuous listening, or wake-word activation.
  • Show or hide the debug window when you need to troubleshoot audio or connectivity.

2.4 Plans

Plan cards for Free, Silver, Gold, and Diamond outline pricing, billing cadence, and monthly credit limits, and the plan you’re on is marked with a “Current plan” pill.

2.5 Usage

See exactly how you’re using OmniLink:

  • Credit breakdown for listening, speaking, and AI processing.
  • Trend chart of total credit usage with 7/30/90-day toggles.
  • Trends and insights to spot patterns instantly.

2.6 Connection

One panel to manage all connectivity:

  • Local MQTT broker status and topic credentials.
  • Context activity and last-published payloads.
  • Remote sync history with keys, last commands, and stored responses.

2.7 Sign Out

Signs you out of the agent on the current device only. Sessions on other devices stay active.

2.8 Staying on Top of Limits

If you run out of monthly credits, OmniLink shows a clear lockout screen with two options: Upgrade or Sign Out. Renewal resets everything.

3. Use Cases

Video placeholder for 3. Use Cases.
Showcase real workflows here with your YouTube walkthrough.

OmniLink adapts to environments ranging from individual homes to complex industrial floors. Below are two flagship scenarios that show how natural language control becomes reliable, safe action.

Smart homes that anticipate your needs

Connect lighting, climate, security, and media devices through OmniLink so residents can speak naturally—“dim the hallway lights to 40% until 10pm” or “secure the downstairs doors and arm the night mode.” The agent parses intent, maps it to device-specific commands, and keeps the household updated with confirmations or follow-up questions.

  • Blend multiple vendors by translating one request into the exact APIs each device expects.
  • Trigger routines based on conversation context, occupancy sensors, or external events.
  • Maintain privacy by running the command parsing locally while synchronizing remote dashboards when desired.

Robot fleets that understand plain language

Deploy OmniLink between operators and mobile or stationary robots to convert instructions like “send rover three to aisle seven and scan for spills” or “have the manipulator pack the red crates” into structured mission plans. Feedback channels relay status, sensor context, and safety interlocks back through the same agent.

  • Coordinate task assignments across multiple robots while respecting capability and battery constraints.
  • Route telemetry into the context system so the agent can adjust follow-up commands without human intervention.
  • Integrate with MQTT, ROS, or custom control buses through OmniLink’s transport bridges.

4. Quickstart

Video placeholder for 4. Quickstart.
Guide new builders with a concise setup video in this slot.

Let’s build your first agent

Go to Settings → Main Task and paste the following:

You are an agent who helps customers choose the best country to visit and arranges their travel based on their preferred mode of transportation.

Then add your first Available Commands:

book_flight_ticket_to_[country],
book_train_ticket_to_[country]

Other settings to configure:

  • User Name: none
  • Agent Name: Sara
  • Agent Personality: friendly, professional
  • Short-term Memory Limit: 10
  • Silence Timeout: 3

Save your changes. That’s it! You’ve created your first agent.

Now click on the sphere to talk, or go to the Modes menu to choose how you want to interact (click-to-talk or continuous listening). Ask the agent anything you like and watch how it responds.


Plan limits to check early

Before you build a complex workflow, confirm the limits for your plan so you don’t hit them mid-iteration.

  • Agent profiles: how many agents you can create and keep active.
  • Knowledge size: file size and chunking limits for deep knowledge uploads.
  • Tool usage: any caps on tool calls or concurrent sessions.
  • Retention: how long conversation history and logs are stored.

Review the Plans and Usage pages for the latest limits, and keep the Usage Management docs handy while you scale.


Debugging Window

In edit mode, you’ll see a movable debug window. It shows:

  • What you said (transcript)
  • The command sent to the API
  • The agent’s spoken response

Use this stage to test commands before connecting the agent to real systems. If the agent isn’t producing the commands you expect, just adjust the Settings until it does. More detail is provided in the Agent Configuration section.


Linking to Real Systems

An agent becomes powerful once it can act on real APIs. Here’s how to link:

System Requirements:

  • Python 3.9+ (3.10+ recommended)
  • Git
  • MQTT broker (e.g. Mosquitto) with WebSockets enabled on port 9001
  • OmniLink Python library
  • VS Code (or your preferred editor)

Installation:

# Clone the example repo
git clone https://github.com/omni-link-tech/omni-link-travel-agent.git

cd omni-link-travel-agent

Inside the repo you’ll find two key files:

  • travel_api.py → the API with travel functions.
  • link.py → connects your API with the OmniLink agent via the OmniLink library.

Open link.py and run it in VS Code. As you speak to the agent, you’ll see the parsed commands printed in the terminal.

If you uncomment the book_flight_ticket and book_train_ticket functions and run travel_api.py, the system will actually search Google for flights or trains to the country you mention.

And just like that, your first OmniLink project is live!

5. Architecture

Explore the OmniLink stack through the interactive diagram below. Click each component to see how agents, the OmniLink engine, transport bridges, and targets exchange commands, context, and feedback.

Tool & code execution surfaces

OmniLink separates conversational responses from tool execution. Use the matrix below to decide where code should run and how to pass results back to the agent.

Surface | What runs here | Networking | File access | Persistence
Chat response | LLM text + UI rendering (no code execution) | None | None | Session text only
Tools | Connector runtime (Python/JS) running your handlers | Allowed by your host | Allowed by your host | Depends on runtime process
Connector bridge | MQTT/REST/WebSocket adapters | Outbound to brokers/APIs | Local logs/config | Process lifetime
Browser UI | Settings, debug panels, playback | Limited to browser policies | Local storage only | Clears on refresh

Always return tool output to the agent via acknowledgements or feedback messages so the conversation can incorporate real execution results.

6. Agent Configuration

This section covers how to customize your agent. Start with a Main Task that describes the agent's role, for example:

You are a hotel customer service agent who manages reservations and answers customer questions.

Then define one Available Command per action the agent can take:

book_room_number_1

But if you have 100 rooms, creating 100 commands is impractical. Instead, use variables inside square brackets:

book_room_number_[number]

Anything inside [] is treated by the agent as a variable; the name you choose describes the kind of value it holds.
Example:

turn_on_light_in_[room]

Memory model

OmniLink stores conversation history in short-term memory (the recent turns you configure) and blends it with the Main Task, Custom Instructions, and retrieved Knowledge. Short-term memory resets when you clear it manually or when the session ends.
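The blending described above can be sketched as a simple prompt-assembly function (a minimal illustration, not OmniLink's actual implementation; the helper name and section layout are assumptions):

```python
def build_prompt(main_task, instructions, knowledge, history, memory_limit):
    """Assemble a prompt from the Main Task, Custom Instructions,
    retrieved Knowledge, and the most recent short-term-memory turns."""
    recent = history[-memory_limit:] if memory_limit > 0 else []
    sections = [
        f"Task: {main_task}",
        f"Instructions: {instructions}",
        f"Knowledge: {knowledge}",
        "Conversation:",
        *[f"{turn['role']}: {turn['text']}" for turn in recent],
    ]
    return "\n".join(sections)
```

Raising the Short-term Memory Limit simply widens the `history` slice, which is why higher values preserve context at the cost of a longer prompt.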

Knowledge authoring guide

7. Command Parsing

Command parsing is the step where OmniLink converts a free-form utterance into a normalized payload that downstream systems can understand. You describe the shape of a command with a template, and the engine extracts the variables that appear between square brackets.

Here is a simple template for moving chess pieces:

move_[color]_[piece]_from_[location1]_to_[location2]

If you say: “Move the white knight from a2 to a3”, the engine extracts:

{
  "template": "move_[color]_[piece]_from_[location1]_to_[location2]",
  "vars": {
    "color": "white",
    "piece": "knight",
    "location1": "a2",
    "location2": "a3"
  }
}

This structured payload is then published to your chosen transport.
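As a sketch of the extraction step (not the actual engine, which also normalizes free-form phrasing first), a template can be compiled into a regex with one named group per bracketed variable:

```python
import re

def compile_template(template):
    """Turn 'move_[color]_[piece]' into a regex: each [variable] becomes
    a named capture group, everything else must match literally."""
    parts = re.split(r"\[(\w+)\]", template)
    pattern = "".join(
        re.escape(part) if i % 2 == 0 else f"(?P<{part}>[^_]+)"
        for i, part in enumerate(parts)
    )
    return re.compile(f"^{pattern}$")

def parse_command(template, text):
    """Return the structured payload, or None when the text doesn't match."""
    match = compile_template(template).match(text)
    if match is None:
        return None
    return {"template": template, "vars": match.groupdict()}
```

Feeding the normalized chess command through this sketch yields the same `vars` mapping shown in the JSON payload above.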

Command payload contract (recommended)

OmniLink emits commands as structured JSON. We recommend standardizing the envelope so every connector and runtime can validate it consistently.

Command schema:

{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "OmniLink Command Envelope",
  "type": "object",
  "required": ["type", "name", "id", "timestamp"],
  "properties": {
    "type": { "const": "command" },
    "name": { "type": "string", "description": "Normalized command name" },
    "id": { "type": "string", "description": "Unique command identifier" },
    "timestamp": { "type": "string", "format": "date-time" },
    "args": { "type": "object", "additionalProperties": true },
    "correlationId": { "type": "string" },
    "source": { "type": "string", "description": "Agent or connector label" },
    "retries": { "type": "integer", "minimum": 0 },
    "timeoutMs": { "type": "integer", "minimum": 0 },
    "metadata": { "type": "object", "additionalProperties": true }
  }
}

ACK/response schema:

{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "OmniLink Command Ack",
  "type": "object",
  "required": ["type", "commandId", "timestamp", "ok"],
  "properties": {
    "type": { "const": "ack" },
    "commandId": { "type": "string" },
    "timestamp": { "type": "string", "format": "date-time" },
    "ok": { "type": "boolean" },
    "result": { "type": ["object", "array", "string", "number", "boolean", "null"] },
    "error": { "type": "string" },
    "retryable": { "type": "boolean" },
    "attempt": { "type": "integer", "minimum": 1 }
  }
}
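A connector can enforce these two schemas without a full JSON Schema library by checking the required fields directly. The helper names below are illustrative, but the field lists come straight from the schemas above:

```python
import uuid
from datetime import datetime, timezone

# Required fields, per the command and ack schemas.
REQUIRED = {
    "command": ("type", "name", "id", "timestamp"),
    "ack": ("type", "commandId", "timestamp", "ok"),
}

def make_command(name, args=None):
    """Build a command envelope carrying the four required fields."""
    envelope = {
        "type": "command",
        "name": name,
        "id": f"cmd-{uuid.uuid4().hex[:8]}",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    if args:
        envelope["args"] = args
    return envelope

def missing_fields(message):
    """List required fields absent from a command or ack envelope."""
    return [f for f in REQUIRED.get(message.get("type"), ()) if f not in message]
```

Rejecting envelopes with a non-empty `missing_fields` result keeps malformed messages from reaching downstream devices.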

Required vs optional fields

Limits & validation

Examples

Simple command:

{
  "type": "command",
  "name": "pause_simulation",
  "id": "cmd-2204",
  "timestamp": "2024-04-20T16:33:20.120Z"
}

Command with arguments:

{
  "type": "command",
  "name": "move_player",
  "id": "cmd-2205",
  "timestamp": "2024-04-20T16:33:25.120Z",
  "args": { "direction": "east", "steps": 3 }
}

Structured JSON result:

{
  "type": "ack",
  "commandId": "cmd-2205",
  "timestamp": "2024-04-20T16:33:25.440Z",
  "ok": true,
  "result": { "newPosition": { "x": 15, "y": 4 }, "energy": 87 }
}

Retries + error handling:

{
  "type": "ack",
  "commandId": "cmd-2205",
  "timestamp": "2024-04-20T16:33:25.980Z",
  "ok": false,
  "error": "Movement blocked by obstacle",
  "retryable": true,
  "attempt": 2
}

Common errors

Templates can be as expressive as you need. The parser recognizes separator characters (underscores, hyphens, whitespace) and uses them to match the relevant segments. You can also add aliases for natural language variations through the <synonyms> attribute in the configuration UI.

Multiple Commands, One Template

A single template can capture a family of commands. For example, the following template handles several thermostat actions:

thermostat_[action]_to_[temperature]_degrees

Sample utterances and the extracted payloads:

// "Thermostat set to 72 degrees"
{
  "template": "thermostat_[action]_to_[temperature]_degrees",
  "vars": {
    "action": "set",
    "temperature": "72"
  }
}

// "Thermostat boost to 25 degrees"
{
  "template": "thermostat_[action]_to_[temperature]_degrees",
  "vars": {
    "action": "boost",
    "temperature": "25"
  }
}

Because both commands share the same structure, you can implement downstream logic that switches on the action variable without redefining multiple templates.
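That switch-on-the-action pattern looks like this in a connector (a hypothetical handler; the `setpoint` and `boost` result fields are assumptions, not a documented device API):

```python
def handle_thermostat(payload):
    """Dispatch on the `action` variable extracted from one template."""
    action = payload["vars"]["action"]
    temperature = float(payload["vars"]["temperature"])
    if action == "set":
        return {"setpoint": temperature}
    if action == "boost":
        return {"setpoint": temperature, "boost": True}
    raise ValueError(f"unsupported thermostat action: {action}")
```

Adding a new verb means adding one branch here, with no change to the template itself.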

Optional Segments and Defaults

Add optional suffixes by chaining multiple templates. For example, you might support both open_[door] and open_[door]_for_[duration]_seconds. When the shorter version is triggered, your connector can provide a default duration before the door automatically closes.

Defaults can also be injected by the transport bridge. If a user omits a variable, use your connector to supply a fallback value so that the receiving device always receives a complete payload.

Using Synonyms

Natural language is messy. Pair your templates with synonym tables to map phrases such as “turn on”, “enable”, or “activate” to the same [action] variable. Here is a short example:

light_[action]_in_[room]
// With synonyms: enable → on, activate → on, kill → off
// "Activate the lights in the kitchen"
{
  "template": "light_[action]_in_[room]",
  "vars": {
    "action": "on",
    "room": "kitchen"
  }
}

The parser resolves the synonym before delivering the payload, which keeps your downstream logic consistent regardless of how the user phrased the request.
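A synonym table amounts to a normalization pass over the extracted variables. A minimal sketch, using the mappings from the example above:

```python
# Synonym table from the example: enable → on, activate → on, kill → off.
SYNONYMS = {"enable": "on", "activate": "on", "kill": "off"}

def resolve_synonyms(vars):
    """Normalize each extracted value through the synonym table."""
    return {name: SYNONYMS.get(value.lower(), value) for name, value in vars.items()}
```

Values without a synonym entry (like the room name) pass through untouched.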

Validation Tips

By combining expressive templates, synonym mappings, and validation, you can cover the majority of natural language variations while keeping the command contract deterministic for any automation stack.

8. Agent Connection

Video placeholder for 8. Agent Connection.
Demonstrate connection workflows and networking setups with an embedded video.

OmniLink communicates with your automation stack through a unified connection layer. Regardless of the transport you choose, the agent always emits the same contract: command → feedback → context. This section explains how to take advantage of the connection system in both local experiments and remote deployments.

Local vs. Remote Connection

Local Connection

Remote Connection

MQTT Communication

MQTT is the recommended transport for mission-critical automations because it supports pub/sub semantics and lightweight payloads. By default, OmniLink publishes commands on olink/commands and reads context updates from olink/context.

Configure Quality of Service (QoS) according to your needs. QoS 1 ensures that every command is delivered at least once, while QoS 2 guarantees exactly-once delivery for sensitive actions. Retained messages on olink/context help late subscribers catch up with the latest status.

Setting Up Mosquitto

  1. Install Mosquitto: use your package manager (sudo apt install mosquitto mosquitto-clients) or Docker (docker run -p 1883:1883 -p 9001:9001 eclipse-mosquitto).
  2. Enable WebSockets: update /etc/mosquitto/conf.d/olink.conf (or the Docker-mounted configuration) with:
    listener 1883 0.0.0.0
    protocol mqtt
    
    listener 9001 0.0.0.0
    protocol websockets
    
    allow_anonymous false
    password_file /etc/mosquitto/passwords
                
  3. Create credentials: sudo mosquitto_passwd -c /etc/mosquitto/passwords omni-link, then restart the broker (sudo systemctl restart mosquitto or restart the container).
  4. Verify connectivity: publish and subscribe using mosquitto_pub and mosquitto_sub or connect through the OmniLink UI.

For remote deployments, secure Mosquitto further with TLS certificates, unique user accounts per service, and network firewalls.

Connector recipes (WebSocket, MQTT, REST)

Pick one transport and keep the envelope consistent: state → command → ack/error. The agent only needs a clean loop and reliable acknowledgements.

Minimal WebSocket relay

// pseudo-code
const ws = new WebSocket("wss://game.example.com");

ws.onmessage = ({ data }) => {
  const message = JSON.parse(data);
  if (message.type === "state") {
    publishContext(message.payload);
  }
};

onCommand((command) => {
  ws.send(JSON.stringify({
    type: "action",
    correlationId: command.id,
    payload: command
  }));
});

Minimal MQTT relay

# pseudo-code
subscribe("sim/state", handle_state)
subscribe("olink/commands", handle_command)

def handle_state(message):
    publish("olink/context", {"context": summarize(message.payload)})

def handle_command(command):
    publish("sim/action", {"type": "action", "correlationId": command["id"], "payload": command})

Minimal REST relay

# pseudo-code
GET /state  -> summarize into context
POST /action -> send command payload, return ack

Message schema conventions

Use a shared envelope to make debugging predictable.

{
  "type": "state", // "action" | "ack" | "error"
  "timestamp": "2024-04-20T16:33:12.120Z",
  "sequence": 1842,
  "correlationId": "cmd-2205",
  "payload": {}
}

Rate limiting & backpressure

Reconnection handling

Plugging commands into the agent

Ensure every command name matches the Available Commands templates. Map template variables into your action payload, and always emit an ACK so the agent can reason about success vs. failure.
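One way to guarantee an ACK on every path is to wrap execution in a single handler. This is a sketch; `execute` stands in for whatever function actually drives your system:

```python
from datetime import datetime, timezone

def handle_command(command, execute):
    """Run a command through an `execute` callable and always return
    an ack envelope, whether it succeeds or fails."""
    base = {
        "type": "ack",
        "commandId": command["id"],
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    try:
        result = execute(command["name"], command.get("args", {}))
        return {**base, "ok": True, "result": result}
    except Exception as exc:
        return {**base, "ok": False, "error": str(exc), "retryable": False}
```

Because failures are converted into `ok: false` acks instead of silent exceptions, the agent can always reason about what happened.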

Other Connection Options

9. Context System

Give the agent a running description of the system it controls by publishing readable context messages. Provide the state as plain text so the LLM can interpret it the same way a human operator would.

Include the most relevant details—status, recent actions, or sensor readings—in sentences or short bullet points. This keeps the agent grounded without forcing it to decipher raw data dumps.

Example context payload:

{
  "context": "Robot arm is homed. Current task: assemble panel A. Last command succeeded at 14:32."
}

Publish these updates on olink/context whenever the system changes so the user stays informed. Longer payloads are also supported as long as they remain human-readable. Below are two expanded examples that keep the model grounded without overwhelming it with noise.
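A connector can assemble that payload from individual facts before publishing (a minimal sketch; the helper name is an assumption):

```python
def build_context(facts):
    """Join short human-readable facts into one olink/context payload."""
    return {"context": " ".join(f.rstrip(".") + "." for f in facts)}
```

Keeping each fact a standalone sentence lets you add or drop details without reworking the whole message.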

Manufacturing Line Example

{
  "context": "Assembly Line 3: Conveyor speed 1.2 m/s. Robot arm completed weld sequence at 15:02 and is waiting for the next chassis. Safety gate closed. QA camera flagged Panel B for manual inspection (scratched surface)."
}

This message blends operational metrics, recent actions, and follow-up requirements. The agent can reference it to justify pausing the line, dispatching a technician, or notifying quality control without asking the operator to repeat known details.

Smart Building Example

{
  "context": "HQ Lobby: Temperature 22.3°C, humidity 46%. Front desk kiosk rebooted at 09:14 after firmware update. Visitor traffic light; no VIP guests expected this hour. Security notified about a propped door on Level 2 Stairwell B."
}

Context streams like this give conversational agents enough situational awareness to answer questions such as "Why is the stairwell alarm active?" or "Is the lobby comfortable for visitors?" while still fitting in a single MQTT payload.

11. Tutorials

Video placeholder for 11. Tutorials.
Curate your tutorial playlist right here with embedded walkthroughs.

Hello World: state → decision → action

This is the first production-shaped tutorial after the basic chat agent. You will stream state, let the agent decide, emit an action, receive an acknowledgement, and log metrics end-to-end.

  1. Spin up a tiny state source. Choose a WebSocket server, MQTT broker, or REST endpoint that publishes a state payload on a timer (every 250–1000 ms).
  2. Run a tiny environment. Use a browser game or simulator that reacts to simple actions (move, jump, rotate, fire).
  3. Subscribe to state. Your connector receives state updates and forwards the most relevant fields to OmniLink as human-readable context.
  4. Decide + send action. The agent chooses a command, your connector maps it to an action payload, and you post it to the environment.
  5. Receive ACK + metrics. The environment responds with success/failure and latency. Log it and publish feedback to the agent.

Recommended state/action/ack envelope:

{
  "type": "state",
  "timestamp": "2024-04-20T16:33:12.120Z",
  "sequence": 1842,
  "correlationId": "round-204",
  "payload": {
    "playerPosition": { "x": 12, "y": 4 },
    "targetVisible": true,
    "health": 92
  }
}

Keep a running log of state → command → action → ack so you can replay or debug. The simplest MVP is a console logger; the next step is a CSV or JSONL stream you can diff between runs.
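The JSONL step can be a one-function logger (a sketch; the record fields beyond the envelope are assumptions):

```python
import json
import time

def log_event(path, kind, payload, correlation_id=None):
    """Append one state/command/action/ack event as a JSON line."""
    record = {"ts": time.time(), "kind": kind, "payload": payload}
    if correlation_id:
        record["correlationId"] = correlation_id
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Because each line is a complete JSON object, two runs can be diffed or replayed with standard text tools.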

Testing & evaluation methodology

Agent refactor checklist

  1. Update Main Task to match the new purpose.
  2. Revise Commands so names + arguments match your connector.
  3. Upload or prune Knowledge documents.
  4. Reset memory after changing Main Task or Knowledge.
  5. Run smoke tests from the regression suite.
  6. Deploy and monitor the first live session.

Browse all official tutorials and sample projects on GitHub.

12. Security

OmniLink ships with the default edit-mode password omnilink. Before you deploy an agent for others to use, open Settings → Security, set a new password, and share it only with trusted collaborators. This password controls access to edit mode, so rotating it regularly keeps your deployment safe.

If you are the developer of the agent, don't share the app password with your users: anyone who has it can change the agent's configuration and alter its behaviour. It is always best practice to add one more safety layer in your system to verify the commands coming from the agent.

Security model for real systems

Operational best practices

13. Usage Management

Use this section to understand how to make the most of your credits and keep on track with the plan you've chosen.

Open the Usage page from the side menu whenever you want a snapshot of your current activity. The charts and totals show the time you talk to the agent, the time the agent talks to you, and requests made with the G1 Engine and G2 Engine so you always know where your credits are going.

If you notice one category climbing faster than expected, try a few quick adjustments:

Plans cap total monthly credits. The built-in plans are Free ($0 for 5 credits), Silver ($20 for 20 credits), Gold ($50 for 50 credits), and Diamond ($100 for 100 credits). Upgrade when your monthly usage approaches your plan's credit limit so the agent stays available to your users.

Plan limits & platform constraints

Limits vary by plan and are enforced across the UI and APIs. Check these before you scale a deployment:

We recommend linking these limits in onboarding and tooltips so teams don’t discover them mid-build.

Usage examples

Here are a few ways teams use the Usage page to balance different engines and input modes:

14. FAQ

Q: Can I run OmniLink fully offline?
Not yet. Local AI engines are planned for a future release.

Q: Does it support multiple languages?
Soon. For now, only English is supported.

Q: What should I do if I forget my edit-mode password?
You will need to contact us via OmniLink chat.

Q: Can multiple teammates collaborate on the same agent?
Absolutely. Share the edit-mode password with trusted collaborators so they can sign in and contribute in real time.

Q: Where can I monitor system status and incidents?
Visit the Status page from the side navigation to review uptime history and subscribe to notifications.

Debugging playbook

Logging tips: capture raw state, command, and ack payloads with timestamps. Add correlation IDs so you can trace a single command through every hop.
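Given a JSONL log with correlation IDs, tracing one command through every hop is a simple filter (a sketch; the record shape is an assumption):

```python
import json

def trace(jsonl_lines, correlation_id):
    """Pull every logged hop for one command out of a JSONL stream."""
    events = (json.loads(line) for line in jsonl_lines if line.strip())
    return [e for e in events if e.get("correlationId") == correlation_id]
```

Running this against a session log reconstructs the state → command → action → ack path for a single command in order.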