vrscene-parser


vrscene-parser is a lightweight Python package that transforms natural‑language descriptions of virtual‑reality (VR) scenes into a structured, validated format.
It uses an LLM (ChatLLM7 by default) to interpret the input text and validates the output against a predefined regular‑expression pattern, so the results are consistent and ready for downstream processing (scene generation, tagging, metadata extraction, etc.).


Installation

pip install vrscene_parser

Quick Start

from vrscene_parser import vrscene_parser

user_input = """
A futuristic city with neon lights, flying cars, and a large holographic billboard
displaying a rotating 3D logo in the center of the main square.
"""

# Use the default LLM (ChatLLM7). The API key is read from the environment variable LLM7_API_KEY.
response = vrscene_parser(user_input)

print(response)   # -> List of extracted data that matches the required pattern

Function Signature

def vrscene_parser(
    user_input: str,
    api_key: Optional[str] = None,
    llm: Optional[BaseChatModel] = None,
) -> List[str]:

| Parameter | Type | Description |
|---|---|---|
| user_input | str | The free‑form description of a VR scene that you want to parse. |
| api_key | Optional[str] | API key for ChatLLM7. If omitted, the function reads LLM7_API_KEY from the environment, or falls back to a default placeholder. |
| llm | Optional[BaseChatModel] | Any LangChain‑compatible LLM instance. If omitted, the package creates a ChatLLM7 instance automatically. |

Using a Custom LLM

You can replace the default LLM with any LangChain chat model (e.g., OpenAI, Anthropic, Google Gemini). Just pass the model instance to vrscene_parser.

OpenAI

from langchain_openai import ChatOpenAI
from vrscene_parser import vrscene_parser

my_llm = ChatOpenAI(model="gpt-4o-mini")
response = vrscene_parser(user_input, llm=my_llm)

Anthropic

from langchain_anthropic import ChatAnthropic
from vrscene_parser import vrscene_parser

my_llm = ChatAnthropic(model="claude-3-haiku-20240307")
response = vrscene_parser(user_input, llm=my_llm)

Google Gemini

from langchain_google_genai import ChatGoogleGenerativeAI
from vrscene_parser import vrscene_parser

my_llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")
response = vrscene_parser(user_input, llm=my_llm)

API Key for ChatLLM7

The default LLM is ChatLLM7 from the langchain_llm7 package (see https://pypi.org/project/langchain-llm7/).
Free tier rate limits are sufficient for most development and testing scenarios.

  • Provide the key via environment variable: export LLM7_API_KEY="your_key_here"
  • Or pass it directly:
response = vrscene_parser(user_input, api_key="your_key_here")

You can obtain a free API key by registering at https://token.llm7.io/.


Error Handling

If the LLM response does not match the expected regular‑expression pattern, the function raises a RuntimeError with the underlying error message.
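
A minimal sketch of guarding against that failure. `fake_parser` is a hypothetical stand-in that always raises, so the snippet runs without an API key; in real code you would call `vrscene_parser(user_input)` instead:

```python
def fake_parser(user_input: str) -> list[str]:
    # Hypothetical stand-in for vrscene_parser: simulates the case where
    # the LLM response fails to match the expected pattern.
    raise RuntimeError("LLM response did not match the expected pattern")


def parse_scene(user_input: str) -> list[str]:
    try:
        return fake_parser(user_input)
    except RuntimeError as exc:
        # Log the failure and fall back to an empty result instead of crashing.
        print(f"Parsing failed: {exc}")
        return []


result = parse_scene("A medieval castle courtyard at dusk")
print(result)  # -> []
```

Wrapping the call this way lets a batch pipeline skip unparseable descriptions rather than abort on the first mismatch.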


Contributing & Support


Author

Eugene Evstafev
✉️ Email: hi@eugene.plus
🐙 GitHub: chigwell


License

This project is licensed under the MIT License. See the LICENSE file for details.
