
Add support for loading and exporting large OBJ files #16271

Draft · wants to merge 1 commit into master
Conversation

JonathanIcon (Contributor)

This pull request introduces support for loading and exporting large OBJ files without requiring the entire file to be loaded into memory as a single string.

Loading Enhancements
The default behavior remains unchanged when using BABYLON.SceneLoader, ensuring full backward compatibility.

However, SceneLoader still fetches the entire file, as I could not find a way to prevent this.

To leverage an incremental reading and building mechanism, the OBJFileLoader class should be used directly, importing meshes without the data parameter:

const objLoader = new BABYLON.OBJFileLoader();
const { meshes: newMeshes } = await objLoader.importMeshAsync(null, scene, null, "https://file.obj");

This new loading mechanism supports both .obj and .obj.gz files, allowing seamless handling of compressed OBJ files.
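
For illustration, the incremental path looks roughly like this (a sketch, not the PR's exact code; the URL and the parseObjChunk hook are placeholders):

const response = await fetch("https://example.com/model.obj.gz");
// Pipe gzip-compressed bodies through the browser's native DecompressionStream
const stream = response.url.endsWith(".gz")
    ? response.body.pipeThrough(new DecompressionStream("gzip"))
    : response.body;
const reader = stream.getReader();
const decoder = new TextDecoder();
for (let result = await reader.read(); !result.done; result = await reader.read()) {
    // Feed each decoded text chunk to the parser as it arrives (placeholder hook)
    parseObjChunk(decoder.decode(result.value, { stream: true }));
}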

Parsing Fixes
Fixed an issue where some OBJ files with multiple consecutive spaces between values (e.g., f 1  4   5) were incorrectly parsed.
The new behavior normalizes them to f 1 4 5, addressing a case that was not handled by the existing solidParser string manipulation methods.
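
For illustration, the fix amounts to collapsing runs of whitespace before tokenizing (a minimal sketch, not the parser's actual code):

const line = "f 1  4   5";
// Runs of spaces collapse so the face tokenizes as ["f", "1", "4", "5"]
const tokens = line.trim().split(/\s+/);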

Exporting Changes (Breaking Change)
The export method has been modified to return an iterator instead of a full OBJ string.

This change aligns with the new incremental approach, allowing the data to be processed in chunks instead of requiring full memory allocation.

The new export method is not backward-compatible but enables efficient writing via a stream writer to output the data progressively to a file or another destination.

For those who still need the full OBJ as a string, it can easily be converted using:

const obj = [...BABYLON.OBJExport.OBJ(meshes)].join("\n");
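
Conversely, the iterator can be streamed to disk without ever materializing the full string; a sketch using the File System Access API (browser-only; writing one chunk per line is an assumption about the iterator's output):

const handle = await window.showSaveFilePicker({ suggestedName: "scene.obj" });
const writable = await handle.createWritable();
for (const chunk of BABYLON.OBJExport.OBJ(meshes)) {
    await writable.write(chunk + "\n"); // write each chunk as it is produced
}
await writable.close();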

@bjsplat (Collaborator) commented Mar 11, 2025

Please make sure to label your PR with the "bug", "new feature" or "breaking change" label(s).
To prevent this PR from going into the changelog, mark it with the "skip changelog" label.

@RaananW (Member) commented Mar 11, 2025

Hi!

First of all, thanks a lot for the PR!

We are currently in a code freeze, aiming to release babylon 8.0 in a few weeks. We let bug fixes and feature additions in, but some PRs will not be merged, even if approved, until 8.0 is released. Just a heads up.

Regarding this PR - I am not sure this is something we want to support directly in our obj file loader. I can see it implemented at a higher level, maybe, to support compressed files in general, but we will have to be very careful with what is being used, making sure all of our supported environments (including Babylon Native) support these features.

This can always be an external OBJ file loader that is added to the loaders list from a different package. I personally love the idea of making babylon as extendable as possible, and I believe this kind of loader is a perfect example of it.

Anyhow, specifically to your PR - we don't fetch directly using fetch, for many different reasons. You should use the Tools.LoadFile function to load the URL.

@JonathanIcon (Contributor, Author)

Hi!

It's not just about supporting compressed files, but rather about fixing the bottleneck caused by the fact that the function loading OBJ files in Babylon first loads the entire file into a string before parsing it and creating the meshes. This causes the browser to crash when trying to load a file larger than 50MB.

I'm not an expert in the Babylon repository, so I'm not sure where this should be integrated at a higher level. However, without this functionality, Babylon.js is limited to loading only small files and consumes more memory than necessary.

Is there a mechanism using the File Loader tool that allows reading the file in chunks and processing it progressively as it is being read?

@RaananW (Member) commented Mar 11, 2025

Hi!

> It's not just about supporting compressed files, but rather about fixing the bottleneck caused by the fact that the function loading OBJ files in Babylon first loads the entire file into a string before parsing it and creating the meshes. This causes the browser to crash when trying to load a file larger than 50MB.
>
> I'm not an expert in the Babylon repository, so I'm not sure where this should be integrated at a higher level. However, without this functionality, Babylon.js is limited to loading only small files and consumes more memory than necessary.
>
> Is there a mechanism using the File Loader tool that allows reading the file in chunks and processing it progressively as it is being read?

Nope, there isn't. This is why I suggested this might need to go somewhere else.
It will require a change in the way we consume the loaded files in all of our loaders.

It could be added as an optional parameter to the LoadFile function to return the data in chunks (maybe using the onProgress callback) instead of the entire data in the onSuccess callback.
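
For illustration, such an opt-in might look something like this (purely hypothetical; the streamChunks option, the chunk field, and the two callbacks are not part of the current API):

BABYLON.Tools.LoadFile(
    url,
    () => finalizeParsing(),               // onSuccess: would no longer carry the full payload
    (event) => parseObjChunk(event.chunk), // onProgress: hypothetical chunk delivery
    undefined,                             // offlineProvider
    true,                                  // useArrayBuffer
    undefined,                             // onError
    // plus a hypothetical { streamChunks: true } opt-in threaded through the chain
);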

@RaananW (Member) commented Mar 11, 2025

Moving this to draft to continue the discussion.

RaananW marked this pull request as draft on March 11, 2025 at 15:56.

Review thread on an excerpt from the new loader code:

reader = response.body.getReader();
total = parseInt(response.headers.get("Content-Length") ?? "");
} else {
    const ds = new DecompressionStream("gzip");

Contributor:
A note here: the DecompressionStream API is relatively new; fallbacks might be needed.


JonathanIcon (Contributor, Author):
I haven't noticed any issue in the official documentation (https://developer.mozilla.org/en-US/docs/Web/API/DecompressionStream); every browser has a green check.
What fallback do you suggest? Is it acceptable to include an external lib such as pako?
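
For what it's worth, a fallback could feature-detect the native API and wrap pako's streaming Inflate in a TransformStream otherwise (a sketch that assumes TransformStream itself is available; integration details would differ):

import pako from "pako";

// Returns a stream that gunzips: the native DecompressionStream when present,
// otherwise pako's streaming inflater adapted to the Streams API.
function createGunzipStream() {
    if (typeof DecompressionStream !== "undefined") {
        return new DecompressionStream("gzip");
    }
    const inflator = new pako.Inflate();
    return new TransformStream({
        start(controller) {
            inflator.onData = (chunk) => controller.enqueue(chunk);
        },
        transform(chunk) {
            inflator.push(chunk, false); // feed compressed bytes incrementally
        },
        flush() {
            inflator.push(new Uint8Array(0), true); // finalize the stream
        },
    });
}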

@JonathanIcon (Contributor, Author)

> It could be added as an optional parameter to the LoadFile function to return the data in chunks (maybe using the onProgress callback) instead of the entire data in the onSuccess callback.

How can we go ahead with this?

@RaananW (Member) commented Mar 12, 2025

> It could be added as an optional parameter to the LoadFile function to return the data in chunks (maybe using the onProgress callback) instead of the entire data in the onSuccess callback.
>
> How can we go ahead with this?

Either way, even if approved, this will not be merged until 8.0 is released.

IMO - the best way to move forward is to release an external OBJ loader that uses this mechanism and provide it as a standalone package. It can replace the babylon OBJ loader for people who require this behavior.

There are a few reasons for that -

First, the 8.0 release.
Second, modern APIs and fallbacks. I saw your comment, and in general you are totally right. However, you expect everyone to always have the most updated browser, which is not the case. There is also Babylon Native to consider, which does not implement the entire ESM standard. I assume this function is not available there, which would break Babylon Native - a big issue for us.
Third, the standard implementation must use the Tools loading functions. There are a few reasons for that, which I will not go into here, but you can't use fetch directly. So it must be implemented at a deeper level within the framework. I am not sure if you took the time to look into the architecture of LoadFile and its siblings, but it is not a simple API.
Fourth, backwards compatibility - any change to our public API must be backwards compatible. We can't suddenly move to callback-based consumption of the LoadFile mechanism, so if you want to implement it using LoadFile, you must add a flag to LoadFile (which must be propagated further down the chain).

I don't mean to be a downer here - I actually really like your idea. It is just not as simple as changing the OBJ loader; there is a lot to it. We will be happy to look into it, based on this PR, if you want to submit a feature request on the forum! Otherwise, really, the simplest approach would be an external package that any user can consume.
