I have been working on bringing smaller applications of ML to the browser: taking smaller models and doing the ML processing entirely in the browser.
One point of contention is getting access to the user's images, privately, on their own computer so the ML can be applied to them. My current implementation has the user run a local image hosting service that I access through fetch: they provide a JSON manifest of their images, host the images themselves, and I fetch those images locally. There are obvious issues with this approach, since I can't host this service anywhere other than their computer.
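To make the current manifest-over-fetch approach concrete, here is a minimal sketch. The endpoint `http://localhost:8080/images.json` and the manifest shape are illustrative assumptions, not a real API:

```javascript
// Pure helper: turn manifest entries into absolute URLs against the local service.
function resolveImageUrls(manifest, baseUrl) {
  return manifest.images.map((name) => new URL(name, baseUrl).href);
}

// Browser-side usage sketch (not invoked here), assuming the user runs a
// hypothetical local service that serves images.json and the image files:
async function loadImages() {
  const res = await fetch("http://localhost:8080/images.json");
  const manifest = await res.json(); // e.g. { "images": ["a.jpg", "b.png"] }
  const urls = resolveImageUrls(manifest, "http://localhost:8080/");
  // Each image can then be fetched as a Blob and decoded for the model.
  return Promise.all(urls.map((u) => fetch(u).then((r) => r.blob())));
}
```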
To give further examples of the kinds of problems:
Image Captioning
Image Aesthetic Prediction
For image captioning, we'd have a directory of images we want to caption. In the browser we can run a BLIP model (or a smaller, browser-friendly model), but we'd need it to load up a directory of images.
For image aesthetic prediction, the idea is the same: we want to produce aesthetic scores for a directory of images.
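For the "load a directory of images" part of both tasks, one browser-native route is the File System Access API, assuming a Chromium-based browser that supports `window.showDirectoryPicker`. The helper and extension list below are illustrative:

```javascript
// Pure helper: decide whether a filename looks like a decodable image.
function isImageFile(name) {
  return /\.(jpe?g|png|webp|gif|bmp)$/i.test(name);
}

// Browser-side usage sketch (not invoked here): prompts the user once for a
// directory, then yields File objects without the images leaving the machine.
async function* pickImageFiles() {
  const dir = await window.showDirectoryPicker();
  for await (const [name, handle] of dir.entries()) {
    if (handle.kind === "file" && isImageFile(name)) {
      yield await handle.getFile(); // a File/Blob ready for createImageBitmap()
    }
  }
}
```

This sidesteps the local hosting service entirely: the user grants access once per session and no server needs to run on their machine.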
This may be a little outside the scope of interfacing with an AI running locally, but I'm working in the other direction: I have a model running in the browser and want to interface better, and privately, with local images.
I wanted to propose this because I work with people who can't run Python locally and don't want to set up an AI on their system, but can easily load up a webpage. We can then run the AI locally on their machine through the browser. As a specific example, I'm using huggingface/candle to run the models in the browser, but other options are available, and accessing the GPU through WebGPU could be an option in the future.
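If WebGPU does become an option, the backend choice can be feature-detected up front. A minimal sketch, with `navigator` passed in as a parameter purely so the check is testable outside a browser:

```javascript
// Returns true when the given navigator-like object exposes a WebGPU entry point.
function hasWebGPU(nav) {
  return typeof nav === "object" && nav !== null && "gpu" in nav;
}

// Browser-side usage sketch (not invoked here):
async function pickAdapter() {
  if (!hasWebGPU(navigator)) return null; // fall back to a WASM/CPU backend
  return navigator.gpu.requestAdapter(); // may still resolve to null
}
```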
If there are alternative solutions to these problems, I would love to hear them too. Thank you.