Embedding UCL into your AI Agent
This guide takes you through each step, from setting up your environment to seamlessly integrating your connected tools into your AI agent.
Embedding refers to integrating UCL directly into your AI agent’s environment so it can take real actions, like sending messages, updating records, or triggering workflows, using your connected tools.
When you’re integrating Connectors into your SaaS product, Unified Context Layer (UCL) makes it easy to connect 1000+ tools (Actions), manage customer environments, and orchestrate real-world actions from a single command layer.
In this guide, we will walk you through, step by step, from setting up your space to fully embedding your tools into your product, including:
Setting up your account on UCL
Enabling your connectors and actions
Connecting your UCL MCP Server
Implementing it onto your application/product
Real-World Example: Document Management Workflow
When a marketing team creates a new campaign brief, a single UCL command automatically:
Creates a new project document in Notion
Shares the brief via Gmail to stakeholders
Sets up a dedicated Slack channel for discussions
Creates a collaborative Google Doc for content drafts
How Embedding Works
Embedding via UCL refers to the process of integrating the UCL functionality directly into your product or AI agent's codebase. This allows your application to seamlessly interact with external tools and services through UCL's unified interface.
This is how your AI agent looks with and without UCL embedding:

The embedding process enables you to:
Execute commands across multiple tools without managing individual integrations
Handle authentication and permissions across different customer workspaces
Maintain secure tenant separation while accessing various third-party services
Manage API connections and token storage through a single interface
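The idea behind that list can be sketched in code. The snippet below is a minimal, hypothetical TypeScript sketch (the tool names, action names, and the `planCampaignBrief` function are all illustrative placeholders, not UCL's actual API): the agent issues one high-level command, and a single command layer fans it out to several tools, instead of the agent maintaining a separate integration for each service.

```typescript
// Hypothetical sketch: one high-level command fans out to several tools
// through a single unified layer, rather than per-tool integrations
// living inside the agent itself.
type ToolCall = { tool: string; action: string; args: Record<string, string> };

// Stand-in for UCL's command layer: maps one command to the tool calls
// from the campaign-brief example (Notion, Gmail, Slack, Google Docs).
function planCampaignBrief(briefTitle: string): ToolCall[] {
  return [
    { tool: "notion", action: "create_document", args: { title: briefTitle } },
    { tool: "gmail", action: "send_email", args: { subject: briefTitle } },
    {
      tool: "slack",
      action: "create_channel",
      args: { name: briefTitle.toLowerCase().replace(/\s+/g, "-") },
    },
    { tool: "google_docs", action: "create_doc", args: { title: `${briefTitle} drafts` } },
  ];
}

const calls = planCampaignBrief("Q3 Launch Brief");
console.log(calls.map((c) => `${c.tool}.${c.action}`).join(", "));
// notion.create_document, gmail.send_email, slack.create_channel, google_docs.create_doc
```

In the real setup, each of these calls is carried over UCL's MCP server with authentication and tenant isolation handled for you; the point here is only the shape of the dispatch.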
Let's further understand embedding via a use case example below:
Step 1: Creating Your UCL Account
Go to ucl.dev and sign up for an account.
After you log in, select a workspace; you'll then be directed to the next page to set up your integrations.

Step 2: Choose Apps & Actions
After selecting your workspace, you will be directed to the Connect Page in your Dashboard, where you can choose the apps you need and view your selected apps and enabled actions.

Actions represent the functions you can perform within each app.
Using the Connecting Existing Apps button, you can browse and connect to any app available in UCL.

Select any app by checking the box in its top-right corner, click Select Tools to modify your action selection, and then finalize the setup by clicking the Confirm Action button at the bottom right.

Step 3: Setting Up Your Environment
The Embed page on your UCL dashboard has step-by-step guides for a number of clients.
In this case, you can use the Fastn UCL GitHub Template Starter for embedding UCL into your custom AI agent.

In the first embedding step, we'll focus on setting up the codebase environment where you can embed UCL. To keep the process easy to follow, clone the example environment repository below:
git clone https://github.com/fastnai/embedded-multitenant-ai-assistant.git
cd embedded-multitenant-ai-assistant
npm install
If you wish to run the code locally on your code editor, simply follow the README documentation within the GitHub repository.
Environment via Code Editor
Please ensure that you've read the README file in the GitHub repository before starting.
Click the Code button to download the ZIP file.

After downloading, extract the ZIP file and open the folder in your code editor.
Setting up your Keys and IDs
Once you've set up your code environment, you'll see a .env file that contains two environment variables:
OPENAI_API_KEY
NEXT_PUBLIC_FASTN_MCP_SERVER_URL
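For reference, a filled-in .env file looks like the following. Both values here are hypothetical placeholders; use your own key and the URL from your dashboard, as described in the steps below:

```
OPENAI_API_KEY=your-openai-api-key
NEXT_PUBLIC_FASTN_MCP_SERVER_URL=your-mcp-server-url-from-the-dashboard
```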

Then head over to https://platform.openai.com/api-keys and generate your API key. This key is what integrates the LLM with your MCP setup.
Head back to the .env file and insert your generated API key within the OPENAI_API_KEY environment variable.
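A missing or misspelled environment variable is a common source of silent failures at this stage. As an optional sketch (this helper is not part of the starter template), you can fail fast at startup with a small guard like the following:

```typescript
// Optional sketch (not part of the starter template): read a required
// environment variable and throw a clear error if it is missing.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value || value.trim() === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example usage at app startup, before constructing any clients:
// const apiKey = requireEnv("OPENAI_API_KEY");
// const mcpUrl = requireEnv("NEXT_PUBLIC_FASTN_MCP_SERVER_URL");
```

This way a typo in the .env file surfaces as one explicit error instead of a confusing downstream failure.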
The NEXT_PUBLIC_FASTN_MCP_SERVER_URL contains your Space ID and Tenant ID.
Keep in mind that each workspace has a unique Space ID.
In the next step for setting up the environment variables, copy the provided URL and paste it into the NEXT_PUBLIC_FASTN_MCP_SERVER_URL variable in the .env file:

Step 4: Implementation
Now that everything is set up, you can see embedded UCL in action. Simply follow the steps below:
Head over to your code editor, open your terminal, and run:
npm run dev
Once you have executed the command, you will see a localhost site showcasing UCL embedded within an AI agent, where you can put your connectors and tools to the test, as shown below:

Additionally, in the Tools section of the demo app, you can see the actions that you have enabled for each connector, as shown below:

Environment via Codespaces - Alternative Method
Codespaces provides a complete, pre-configured development environment in the cloud, allowing you to start coding instantly without worrying about local setup or dependencies.
Simply access the repository → Click on Code → Then click on “Create codespace on main”

After creating a codespace environment, you'll see a live code editor waiting for you.

Conclusion
UCL simplifies the process of embedding AI capabilities into your applications by providing a unified layer for connecting tools and managing multi-tenant workflows. Through its MCP server and Agent Connect component, developers can quickly implement secure, scalable integrations across different customer workspaces while maintaining proper isolation and authentication.
Next Steps:
Explore Fastn Docs
Or request a demo for advanced onboarding