This article summarizes a recent CustomGPT office hours session focused on product updates, UI improvements, and practical integrations. The session covered new interface features, security controls, embedded chat widgets, automation workflows using external tools, and a white-label Chrome extension.
TL;DR
- Problem: Deploying custom AI agents has typically meant bespoke UI work, coarse all-or-nothing permissions, and fragmented integrations.
- Solution: CustomGPT introduced UI-level action buttons, granular permissions, simplified embedding, and ready-made automation templates.
- Outcome: Teams can deploy secure, branded AI agents across websites, workflows, and browser extensions with less setup effort.
Context of the session
The session was hosted by the CustomGPT team and provided a walkthrough of product improvements shipped over the previous two weeks. The focus was on usability, deployment flexibility, and real-world integration scenarios rather than roadmap announcements.
The presenters highlighted that many of these updates were driven directly by user feedback from community Slack discussions and prior office hours.
UI action buttons inside agent responses
One of the most visible updates was the introduction of customizable action buttons directly inside the chat interface.
Agents can now return responses that include a single contextual button, such as “Schedule a call with an expert” or “Book a demo.” These buttons are configured at the agent level and rendered natively in the UI.
- Buttons inherit the visual styling of the host website.
- They can be enabled or disabled per agent persona.
- The feature currently supports one button per agent response.
This feature is UI-specific and not yet designed as a first-class API object. When accessed via the API, the button data is included as tagged metadata that must be rendered manually by the consuming application.
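Since API consumers must parse the tagged metadata themselves, a small extraction step is needed before rendering. The sketch below illustrates the idea with a **hypothetical** tag format (`[button label="…" url="…"]`); the actual markup CustomGPT emits may differ, so check a real API response before relying on this pattern.

```python
import re

# Hypothetical tag format for illustration only -- the real tagged
# metadata returned by the CustomGPT API may use different markup.
BUTTON_TAG = re.compile(r'\[button label="(?P<label>[^"]+)" url="(?P<url>[^"]+)"\]')

def extract_button(response_text: str):
    """Split an agent response into plain text and an optional button spec."""
    match = BUTTON_TAG.search(response_text)
    if match is None:
        return response_text, None
    button = {"label": match.group("label"), "url": match.group("url")}
    # Remove the tag so only the displayable answer text remains.
    cleaned = BUTTON_TAG.sub("", response_text).strip()
    return cleaned, button
```

A consuming application would render `cleaned` as the chat message and turn `button`, when present, into a styled call-to-action element.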
Granular permissions for MCP access
Another major update addressed security concerns around MCP (Model Context Protocol) access links.
Previously, MCP access links allowed full read, write, and delete permissions. The new permission system enables fine-grained control over what connected agents or external tools are allowed to do.
- Read-only access can be enforced.
- Write access can be limited without delete permissions.
- Delete operations can be fully disabled.
This reduces the risk of accidental or malicious data loss when MCP links are shared with third-party agents or collaborators.
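The permission tiers described above amount to a simple scope check before any operation is executed. The snippet below is a minimal model of that idea, not the actual CustomGPT implementation; the scope names are assumptions chosen to mirror the three tiers in the list.

```python
# Illustrative model of the described permission tiers. Scope names
# ("read-only", "read-write", "full") are assumptions for this sketch,
# not documented CustomGPT settings.
ALLOWED_OPERATIONS = {
    "read-only": {"read"},
    "read-write": {"read", "write"},   # write allowed, delete withheld
    "full": {"read", "write", "delete"},
}

def is_permitted(link_scope: str, operation: str) -> bool:
    """Return True if an MCP link with the given scope allows the operation."""
    return operation in ALLOWED_OPERATIONS.get(link_scope, set())
```

The key design point is that an unknown or missing scope denies everything by default, which is the safe failure mode when links are shared with third parties.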
Simplified embedded chat widget
The session included a live demonstration of a newly redesigned embedded chat widget.
Previously, embedding a chatbot required deploying the full starter kit. The new approach decouples the UI from the starter kit entirely.
- The widget is deployed as a standalone UI.
- Deployment requires hosting the UI and adding a single script tag.
- No tight coupling with the starter kit is required.
The widget supports citations, expandable sources, copy actions, and optional voice playback of responses.
Voice mode and 3D avatar support
The embedded UI now includes an optional voice mode.
Users can interact with agents using speech-to-text, while responses can be delivered using text-to-speech. A 3D avatar can lip-sync responses for a more interactive experience.
This mode is optional and can be embedded directly on a page without using a floating chat icon.
Multiple deployment options
The platform now offers two main deployment models for the UI.
- A Python backend with a Vue frontend, deployable via Docker.
- A Next.js version with one-click deployment on Vercel.
This allows teams to choose between a more customizable stack or a fast deployment path with minimal configuration.
Automation workflows using the CustomGPT API
The session also demonstrated how the CustomGPT API can be used as a backend knowledge layer in automation workflows.
An example workflow used Google Sheets as an input source, where questions were passed to the API and answers were automatically written back into the sheet.
- The workflow can be triggered via tools like n8n or Make.com.
- Responses are generated using the agent’s knowledge base.
- The setup serves as a reusable template for email auto-replies or ticket handling.
A ready-made JSON blueprint was shared to allow users to import the workflow directly into their automation tool.
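The core of that workflow is a loop over spreadsheet rows: for each unanswered question, call the agent, then write the reply back. The sketch below captures that logic with the API call abstracted behind an `ask` callable, since the real endpoint, authentication, and payload shape should be taken from the CustomGPT API documentation or the shared blueprint rather than guessed here.

```python
from typing import Callable

def answer_rows(rows: list, ask: Callable[[str], str]) -> list:
    """Fill in the 'answer' field for every row that has a question but
    no answer yet -- mirroring what the n8n / Make.com blueprint does.

    `rows` stands in for spreadsheet rows; `ask` stands in for a call to
    the agent's API (details intentionally left abstract)."""
    for row in rows:
        if row.get("question") and not row.get("answer"):
            row["answer"] = ask(row["question"])
    return rows
```

In a real deployment, `ask` would POST the question to the agent's conversation endpoint and return the generated reply, and the automation tool would handle the Sheets read/write on either side of it.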
White-label Chrome extension
The final demo covered a white-label Chrome extension powered by the same agent UI.
The extension consists of a lightweight frontend and a proxy server connected to the CustomGPT backend.
- The extension mirrors the same chat UI and behavior as the embedded widget.
- It can be branded and published under a custom name.
- Users can interact with agents directly from the browser toolbar.
This enables teams to distribute AI assistants without requiring users to visit a website or application.
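The proxy server's main job in this architecture is to forward chat requests while keeping the API key out of the extension bundle. The sketch below shows that shape only; the upstream URL is a deliberate placeholder and the header and payload names are assumptions, not the documented CustomGPT API contract.

```python
import os

def build_upstream_request(incoming: dict) -> dict:
    """Turn a chat request from the extension into the upstream API call.

    The URL below is a placeholder and the payload field names are
    assumptions -- substitute the real CustomGPT endpoint and schema."""
    return {
        "url": "https://proxy-upstream.example.invalid/chat",  # placeholder
        "headers": {
            # The key lives only on the proxy server, never in the
            # published extension, which is what makes white-labeling safe.
            "Authorization": f"Bearer {os.environ.get('CUSTOMGPT_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        "json": {"prompt": incoming.get("message", "")},
    }
```

Keeping credential injection on the server side means the extension can be rebranded and redistributed freely without leaking secrets to end users.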
API considerations
Questions during the session clarified how UI features map to API usage.
UI elements such as action buttons are included in API responses as tagged content rather than structured UI objects. Developers integrating via the API are expected to parse and render these elements themselves.
The platform team indicated that more formal API support may be explored in the future.
Why this matters
These updates collectively reduce the friction between building AI agents and deploying them in real products.
By offering UI components, automation templates, and distribution options out of the box, CustomGPT positions itself as an infrastructure layer rather than just an agent builder.
FAQ
Can action buttons be used via the API?
Yes, but they are delivered as tagged metadata and must be rendered manually by the client application.
Is the embedded widget open source?
Yes. The UI is open sourced and can be customized or extended.
Can the Chrome extension be white-labeled?
Yes. The extension can be branded, renamed, and published for custom use cases.
Some links may be affiliate links. This helps support the site at no additional cost and does not influence the content or reviews.
