Hey all, I hope you’re having a great week so far.
Today, I’m excited to share that we’ve released the AI Assistant Sidebar in beta. You can now chat with an AI assistant directly inside Directus and use it to manage your schema, trigger automations, create content, or explore your data through natural conversation.
The docs for it are here. In short, it requires an OpenAI or Anthropic API key, and administrators can configure keys in Settings → AI.
The AI Assistant is currently in beta, which means the features and tools may change as we refine the experience based on feedback. If you’ve checked it out and have any feedback, please let us know here. If I can’t get to the replies, someone else on the team will.
We’re long-time (paying) users here on a self-hosted instance. We just updated and noticed this AI feature.
Could you tell us how to deactivate it so it doesn’t appear, please? At the moment, I can’t find such an option in the settings. I guess it must be somewhere; you wouldn’t be the kind of company that forces AI on its users, right?
Well, why isn’t this feature treated the same way as the dashboards and other modules, then?
Why show a feature to our users if it isn’t activated? Honestly, that smells like a dark pattern to me.
I’ll go with the CSS band-aid for now, but it would really be more elegant to simply not display it when it isn’t activated. I don’t see why disabling an unconfigured feature should be buried in an ENV config.
Best,
PS: Sorry if my tone sounds a bit direct (I really like Directus, it’s a great project), but AI is being shoved into everything without asking users, and going the self-hosted route is our way of avoiding that kind of behavior and staying in control of our tools. I thought that sort of intrusive behavior would be avoided coming from you folks. Come on, you can do better <3.
Hate to hear that we’ve let you down here. And no worries about the tone. Direct, open communication is preferred, and we welcome all feedback: good, bad, or ugly.
I do understand where you’re coming from and respect your opinions on AI.
I also want to be clear that we’re not forcing anyone to use AI. And I promise you no one dislikes the prevalent over-promise, under-deliver AI hype cycle more than our team.
We’ve taken (and will continue to take) a careful, measured approach to AI at Directus. We want to ensure we’re providing actual value to our users with those features. If you’d rather not use AI, we respect that choice as well.
As to why it’s shown by default, it’s simply visibility. No grand conspiracy or anything.
There are a lot of “Lego blocks” in Directus. In the past, a lot of users who could have benefited from those features were unaware they existed. That’s our own fault, of course, and we plan to do a better job with education in 2026.
As mentioned, there will soon be a way for users who prefer to hide and disable any AI features to opt out entirely.
Thanks again for sharing your feedback. And I’ll make sure it gets reviewed by our team as well.
The new AI features are exciting, but limiting them to just Anthropic and OpenAI is disappointing for users like me who use neither. Adding support for local models and/or OpenRouter would make this feature accessible to a much wider audience.
Agreed, though I get why they’d limit the providers for quality control and to reduce the blast radius of a “bad” model. I’m currently investigating how to override the base URL for OpenAI-compatible and Anthropic-compatible models.
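To give a concrete idea of what I mean: the official OpenAI Node SDK already accepts a custom `baseURL`, so any OpenAI-compatible gateway (OpenRouter, a local server, etc.) can be targeted with the same client code. A minimal sketch, with the gateway URL, key, and model name as placeholders:

```ts
import OpenAI from "openai";

// Point the official OpenAI SDK at any OpenAI-compatible endpoint.
// The gateway URL, API key, and model name below are placeholders.
const client = new OpenAI({
  baseURL: "https://my-gateway.example.com/v1", // e.g. OpenRouter or a local llama.cpp server
  apiKey: process.env.OPENAI_API_KEY ?? "sk-placeholder",
});

const response = await client.chat.completions.create({
  model: "my-local-model",
  messages: [{ role: "user", content: "Hello from a self-hosted gateway" }],
});

console.log(response.choices[0]?.message.content);
```

If Directus exposed that base URL (and the Anthropic equivalent) as a setting or env var, the defaults could still point at the official endpoints for quality control.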
Will it be possible to just drop in an Excel/CSV file and tell the AI to fill Directus tables with its data? I know this works via MCP, but it would be even cooler if it were possible directly in the Directus chatbot. Anyway, thanks for building these features!
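For anyone who needs something like this today, the existing import endpoint already takes a CSV upload directly, no AI involved. A rough sketch of calling it from a script, going off the documented `POST /utils/import/:collection` route (the instance URL, token, and collection name are placeholders):

```ts
import { readFile } from "node:fs/promises";

// Directus's import endpoint accepts a CSV or JSON file and creates
// one item per row in the target collection.
const csv = await readFile("./contacts.csv");

const form = new FormData();
form.append("file", new Blob([csv], { type: "text/csv" }), "contacts.csv");

const res = await fetch("https://your-directus.example.com/utils/import/contacts", {
  method: "POST",
  headers: { Authorization: "Bearer YOUR_STATIC_TOKEN" },
  body: form,
});

console.log(res.status); // any 2xx status means the rows were created
```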
I would like to see a setting in access policies that controls whether this feature can be used by users governed by that policy (similar to how there is “App Access” and “Admin Access”).
The scenario for our company: we have a couple of different teams working in Directus, and the manager of one team would want to disable the AI chat for their team, or at least make it read-only, because they don’t have the bandwidth to fully train the whole team on how to use it reasonably and validate the changes it will make to the data. But other teams are fully trained on AI best practices and would greatly benefit from the AI chat, hence the request for a way to enable/disable it via policy.
Until we’re able to control who can and cannot use the AI chat, it is hard for us to roll it out.
Hey! I had two problems using Directus AI and MCP that made them unusable for me.
I cannot access the OpenAI API because we are located in Iran and are unfortunately under sanctions, but there are other service providers with generic OpenAI-compatible endpoints that some tools (like Cursor) accept. Could you make it possible to use a custom endpoint with a generic OpenAI-style API?
I cannot use custom ai_prompts over MCP. Is there any tutorial so I can debug this? I’ve added a single AI prompt, but Cursor cannot access it.
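In case it helps with debugging, one thing I might try is checking what the MCP server actually advertises before assuming Cursor is at fault. A rough sketch using the MCP TypeScript SDK (the endpoint path and token are placeholders for my instance and may differ on yours, and I’m not sure this is the officially recommended approach):

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Connect to the Directus MCP endpoint and list the prompts the server advertises.
// URL and token are placeholders; the path may differ depending on your setup.
const transport = new StreamableHTTPClientTransport(
  new URL("https://your-directus.example.com/mcp"),
  { requestInit: { headers: { Authorization: "Bearer YOUR_STATIC_TOKEN" } } },
);

const client = new Client({ name: "prompt-debugger", version: "0.0.1" });
await client.connect(transport);

// If the custom ai_prompt doesn't show up in this list, the issue is on the
// server/permissions side rather than in Cursor's configuration.
const { prompts } = await client.listPrompts();
console.log(prompts.map((p) => p.name));

await client.close();
```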
Since we use Directus in a multi-tenant way, being able to limit AI access per user via permissions and to monitor the cost of their usage (or have them use their own LLM token) is mandatory for us.