[GSoC 2026] OpenHands + OVMS Integration — first contact #34270
Replies: 10 comments 1 reply
-
@adrianboguszewski and @mlukasze, can you help me connect with the mentors? I am unable to tag them here. Thanks.
-
Hi @devesh-047,
In fact, the only gap is the "id" field, which OVMS indeed omits in responses. But I wonder whether that's a blocker for the integration? We use the OpenAI Python client and it has never been a problem.
-
Thank you for the clarification; that's very helpful.
Regarding the id field, my earlier note was based on testing with the openai Python client, where the response model defines id as a required field in the schema. I observed a validation issue in one of my local tests, but I haven't yet verified this against the exact client version and configuration used by OpenHands.
I'll re-test using the same dependency stack as OpenHands to confirm whether the missing id is actually a practical integration blocker or just a schema-level difference. If it turns out not to affect real usage, I'll update the gap classification accordingly.
Thanks again for pointing this out.
Devesh
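One way to frame the re-test: the question is whether OpenHands validates responses strictly against the schema or just reads the fields it needs. A minimal stdlib-only sketch of that distinction (the response dict below is an assumed example shaped like an OpenAI chat-completions reply, not OVMS's exact output):

```python
# Sketch: is a missing "id" a schema-level gap or a practical blocker?
# The required-field set mirrors the top-level OpenAI chat.completion
# object; the sample response deliberately omits "id", as observed
# with OVMS.

REQUIRED_FIELDS = {"id", "object", "created", "model", "choices"}

def strict_validate(resp: dict) -> list:
    """Return the required top-level fields that are missing."""
    return sorted(REQUIRED_FIELDS - resp.keys())

def extract_content(resp: dict) -> str:
    """What typical client code actually does: read the message text."""
    return resp["choices"][0]["message"]["content"]

response = {  # assumed sample; no "id" field
    "object": "chat.completion",
    "created": 1700000000,
    "model": "TinyLlama",
    "choices": [{"index": 0,
                 "message": {"role": "assistant", "content": "Hello!"},
                 "finish_reason": "stop"}],
}

missing = strict_validate(response)   # ["id"]: a schema-level gap
content = extract_content(response)   # "Hello!": works regardless
```

If OpenHands' stack behaves like `extract_content`, the missing id is cosmetic; if anything in its pipeline behaves like `strict_validate` (e.g. pydantic models with required fields), it is a real blocker.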
-
I've completed the end-to-end OpenHands + OVMS integration and documented the results and my observations. The GitHub URL is: https://github.com/devesh-047/openhands-openvino-integration
What works:
What is not working (WSL2, 8 GB limit):
So far, the integration is architecturally sound. The limiting factor appears to be model capacity and memory constraints rather than API compatibility. Detailed observations and memory breakdowns are in:
I'd appreciate guidance on whether the 90-hour scope should focus on architectural validation plus hardware sizing guidance, or target a demonstration with a 7B-class model on ≥16 GB RAM.
The image shows the integration working, but the answer is a visible hallucination. Response generation is also slow: the above response arrived after around 10 minutes.
Do let me know your thoughts and suggestions on the demo.
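On the sizing question, a rough back-of-envelope sketch of why an 8 GB WSL2 cap fits TinyLlama but not a 7B-class model (all layer/head counts and byte widths below are illustrative assumptions, not measurements):

```python
# Rough CPU-serving RAM estimate: weights + KV cache, nothing else.
# All figures are illustrative assumptions for a back-of-envelope check;
# real usage adds runtime, activation, and OS overhead on top.

def weights_gb(params_b: float, bytes_per_weight: float) -> float:
    """Approximate weight memory in GB (1 GB = 1e9 bytes here)."""
    return params_b * 1e9 * bytes_per_weight / 1e9

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                context: int, bytes_per_elem: int = 2) -> float:
    """Approximate KV-cache memory for one sequence (K and V tensors)."""
    return 2 * layers * kv_heads * head_dim * context * bytes_per_elem / 1e9

# TinyLlama-1.1B, INT8 weights (assumed: 22 layers, 4 KV heads, head_dim 64)
tiny = weights_gb(1.1, 1) + kv_cache_gb(22, 4, 64, 2048)
# 7B-class model, INT8 weights (assumed: 32 layers, 8 KV heads, head_dim 128)
seven_b = weights_gb(7.0, 1) + kv_cache_gb(32, 8, 128, 4096)

print(f"~{tiny:.1f} GB vs ~{seven_b:.1f} GB")  # prints "~1.1 GB vs ~7.5 GB"
```

Under these assumptions a 7B INT8 model already sits at roughly 7.5 GB before any runtime overhead, which is consistent with the observation that the constraint is memory capacity rather than API compatibility, and supports targeting ≥16 GB RAM for a 7B demo.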
-
Just a quick follow-up on the integration update I shared earlier. Any feedback on the direction of work would be appreciated.
-
@devesh-047
-
@michalkulakowski, thanks for your guidance.
-
@michalkulakowski, @mzegla Thanks for your guidance.
-
Since the application period for GSoC 2026 is nearing its deadline, I would appreciate some feedback on the proposal I have submitted on the GSoC portal. Thank you for your guidance throughout this period.

-
Hello Michal and Milosz,
I’ve been working on the “OpenHands + OpenVINO Model Server” project idea and built an initial prototype to understand the scope before drafting a formal proposal.
OVMS is running (Docker, CPU-only), serving a quantized TinyLlama model via the MediaPipe LLM pipeline. I've validated /v3/chat/completions and streaming behavior, and documented the observed OpenAI-compatibility gaps.
Full details, gap notes, and my tentative 90-hour execution plan are here: https://github.com/devesh-047/openhands-openvino-integration/blob/master/docs/gsoc/gsoc_first_contact.md
Before proceeding to full OpenHands integration, I’d appreciate your feedback on whether this direction aligns with expectations — especially regarding model size and how to handle strict OpenAI schema gaps.
Thank you for your time.
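One small, testable piece of the streaming validation mentioned above is the SSE framing that OpenAI-compatible chat endpoints use: each chunk arrives as a `data: {json}` line, with a literal `data: [DONE]` sentinel at the end. A sketch of a parser (the sample stream below is an assumed illustration of that format, not captured OVMS output):

```python
import json

def parse_sse_chunks(lines):
    """Yield decoded JSON chunks from OpenAI-style SSE lines.

    Ignores blank lines and non-data lines (keep-alives/comments),
    and stops at the 'data: [DONE]' sentinel.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return
        yield json.loads(payload)

# Assumed sample stream (shape follows the OpenAI streaming format):
sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    '',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    'data: [DONE]',
]
text = "".join(chunk["choices"][0]["delta"].get("content", "")
               for chunk in parse_sse_chunks(sample))
# text == "Hello"
```

Wiring this to a live server is then just iterating over the response lines of a `stream: true` POST to /v3/chat/completions.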