DSL-Xpert

This tool provides a web interface to define domain-specific languages and uses large language models (LLM) to interact with those grammars via chat.

[Image: the models created]

It can validate the generated grammar instances while chatting with the LLM using Langium.

[Image: the tool]

It has integrations with:

  • OpenAI models
  • HuggingFace Inference API
  • HuggingFace Transformers (running models locally)
  • WebLLM
  • Any custom server, via REST API requests

[Image: the chat]

The user can then chat with the LLM using the defined grammar.

Usage

The project can be run in different ways. The following sections describe how to run the project using Docker or as a standalone server.

Docker

The project can be run using Docker.

Prerequisites:

  • Docker
  • Docker Compose

docker-compose up

This will start the application and serve:

  • Web App: http://localhost:5555
  • API: http://localhost:5555/api

Standalone server

The application can also be run without Docker, as a standalone server. To do so, execute the following commands:

Prerequisites:

  • NVM
  • Node.js
  • MongoDB
  • Python (optional; only needed for the custom HuggingFace models)

# (Optional) check the .nvmrc file
# to see the required Node version
nvm use

npm install
npm run dev

This will start:

  • Client at http://localhost:5173
  • Server at http://localhost:5173/api

A MongoDB database must be running, and its connection string set in the .env file:

MONGODB_URI=mongodb://localhost:27017/llm-dsl-builder

HuggingFace custom (only for standalone server)

The HuggingFace custom server is already included in the docker-compose.yml file, so it runs by default in the Docker setup.

In the standalone version, the user can start it with:

npm run dev:pyserver

This will start the Python server at ws://localhost:8000/ws; the client is already configured to interact with this server.
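For illustration, a client could also talk to this WebSocket endpoint directly. This is a hypothetical sketch: the `buildPrompt` helper and the message shape (`grammar`/`message` fields) are assumptions, not the tool's documented protocol.

```javascript
// Hypothetical sketch of a client for the local HuggingFace Python server.
// The JSON message shape below is an assumption, not a documented protocol.
const WS_URL = "ws://localhost:8000/ws";

// Serialize one chat turn, pairing the grammar with the user's message.
function buildPrompt(grammar, userMessage) {
  return JSON.stringify({ grammar, message: userMessage });
}

// With the server running (npm run dev:pyserver), a connection might look like:
// const ws = new WebSocket(WS_URL);
// ws.onopen = () => ws.send(buildPrompt("grammar Hello ...", "Create an entity Person"));
// ws.onmessage = (event) => console.log(event.data);
```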

API Routes

The API provides the following routes:

  • GET /models: Get all models
  • POST /models: Create a new model
  • GET /models/:id: Get a model by id
  • PUT /models/:id: Update a model by id
  • DELETE /models/:id: Delete a model by id
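The routes above can be exercised with a small client. This is a minimal sketch assuming the Docker deployment at http://localhost:5555/api and JSON request bodies; the `modelRequest` helper and the example payload are illustrative, not part of the tool.

```javascript
// Hypothetical helper: build the URL and fetch options for each /models route.
const BASE = "http://localhost:5555/api";

function modelRequest(method, id, body) {
  // Routes with an :id segment target a single model; the rest target the collection.
  const url = id ? `${BASE}/models/${id}` : `${BASE}/models`;
  const opts = { method, headers: { "Content-Type": "application/json" } };
  if (body) opts.body = JSON.stringify(body);
  return { url, opts };
}

// With the server running, create a model and then fetch it by id:
// const { url, opts } = modelRequest("POST", null, { name: "my-dsl" });
// const created = await fetch(url, opts).then((r) => r.json());
// const fetched = await fetch(modelRequest("GET", created._id).url).then((r) => r.json());
```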

License

This project is licensed under the MIT License; see the LICENSE.md file for details.
