Is there documentation to the backend server API?

And/or is there documentation for the DSL it uses?

I had an idea to go from speech → text commands → either UI manipulation or backend API calls.

You know, the new ChatGPT and AI stuff is coming…

It would be a super awesome feature for business users :slight_smile:

Hey! At this point, we’re not looking to integrate AI features that automatically build the app for you.

Although, could you elaborate on your use case? What features are you looking for specifically?

Thanks for your response. I put this in a discord channel:

Not sure where to post this, but I am thinking: hey ChatGPT, “I want to create admin panels with Appsmith using only my speech. So I need a way to go from speech to commands that then get run in the browser, but it would have to have an understanding of Appsmith’s code and DOM to move things around.”
I was thinking Selenium commands, but there must be a good JavaScript-level API to interact with, maybe via user scripts.
Does anyone have ideas for an entry point from the browser to manipulate the UI?

That’s just for a little more color.

I don’t mind working on a layer, plugin, or extension; I was just trying to figure out the best way to go about it from your lead developers’ point of view.

As far as I can tell, it would be easier NOT to mess with the API or DSL directly via voice, but rather to stick with actions in the DOM and let the underlying event handlers make the API calls. That seems like the simpler route.
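To make the DOM-first idea concrete, here is a minimal sketch of the speech → command → DOM-action pipeline. Everything Appsmith-specific here is an assumption: the command grammar, the `aria-label` selector, and the idea that clicking elements will trigger the right underlying events are all placeholders, since a real integration would need stable hooks into Appsmith’s DOM. The `SpeechRecognition` part uses the real (Chrome-prefixed) Web Speech API and is guarded so the parser stays usable outside the browser.

```javascript
// Pure parser: turn a speech transcript into a structured command.
// The grammar below is hypothetical, just enough to show the shape.
function parseCommand(transcript) {
  const text = transcript.trim().toLowerCase();
  let m;
  if ((m = text.match(/^add (?:a |an )?(\w+) widget$/))) {
    return { action: 'addWidget', widget: m[1] };
  }
  if ((m = text.match(/^click (?:the )?(.+) button$/))) {
    return { action: 'click', target: m[1] };
  }
  return { action: 'unknown', raw: text };
}

// Browser-only side: replay the command as DOM events and let the app's
// own event handlers make the backend API calls, as suggested above.
if (typeof document !== 'undefined') {
  const runCommand = (cmd) => {
    if (cmd.action === 'click') {
      // Placeholder selector — a real integration would need stable
      // attributes on Appsmith's widgets to target reliably.
      const el = document.querySelector(`[aria-label="${cmd.target}"]`);
      if (el) el.click();
    }
  };

  // Web Speech API feeds transcripts into the parser (Chromium-based
  // browsers expose it as webkitSpeechRecognition).
  const SR = window.SpeechRecognition || window.webkitSpeechRecognition;
  if (SR) {
    const rec = new SR();
    rec.onresult = (e) => runCommand(parseCommand(e.results[0][0].transcript));
    rec.start();
  }
}
```

Keeping the parser pure means the fragile parts (speech recognition, DOM selectors) can be swapped out without touching the command logic.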

Thanks for sharing it. It’ll help us when we actually get to integrating AI features.