ASL gesture recognition in the browser.

To run: start the two servers (Flask and http-server, instructions below) and open `index.html` in a browser.
- Getting keys for the Dialogflow agent
- Serving the CNN model
- Hosting the REST API locally
- Making changes to `index.js`
## Getting keys for the Dialogflow agent

Google describes Agents as NLU (Natural Language Understanding) modules: they transform natural user requests into structured, actionable data.

- In the Dialogflow Console, create an agent.
- Choose or create a Google Cloud Platform (GCP) project.
- Dialogflow will automatically set up a Default Welcome Intent, which you can try from the test console.
In order for your bot to access your Dialogflow agent, you will need to create a service account. A service account is an identity that allows your bot to access the Dialogflow services on your behalf. Once configured, you can download the private key for your service account as a JSON file.

- Open the GCP Cloud Console and select the project that contains your agent.
- From the nav menu, choose `IAM & admin` > `Service accounts`.
- Select `Dialogflow Integrations` (created by default by Dialogflow), or create your own.
- Under `Actions`, select `Create key`, choose `JSON`, and download the file.
You'll have to create a `keys/` folder in the workspace if it doesn't exist, and put the downloaded JSON key there.

Change the `projectID` variable in `backend.py` to your Dialogflow project ID (it can be found in your agent settings).
## Serving the CNN model

- `npm install http-server`
- To serve the folder: `node_modules/http-server/bin/http-server model_types/tfjs_version -c1 --cors`
- It's now available at `http://localhost:8080` (:8080 is the default port).
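Once the folder is being served, the frontend can load the converted model straight from that URL. A minimal sketch, assuming the conversion produced a `model.json` (plus weight shards) at the root of `tfjs_version`:

```js
// Load the converted Keras model from the locally served folder.
// MODEL_URL is an assumption; point it at whatever model.json your conversion produced.
const tf = require('@tensorflow/tfjs');

const MODEL_URL = 'http://localhost:8080/model.json';

let model;
async function loadModel() {
  // Older tf.js releases used tf.loadModel() instead of tf.loadLayersModel().
  model = await tf.loadLayersModel(MODEL_URL);
  console.log('CNN model loaded');
}
loadModel();
```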
## Hosting the REST API locally

The REST API (in Flask) talks to Dialogflow. It is called by the frontend.

- To host it: `python backend.py`
- It's now available at `http://localhost:5000` (:5000 is the default port).
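From the browser side, querying that endpoint is just a GET request. A rough sketch (the `/request/<text>` route matches the example call listed further down; the response shape depends on what `backend.py` actually returns, assumed here to be JSON):

```js
// Ask the Flask backend (and, through it, the Dialogflow agent) about a sentence.
async function askAgent(sentence) {
  const url = 'http://127.0.0.1:5000/request/' + encodeURIComponent(sentence);
  const res = await fetch(url);
  return res.json(); // assumes backend.py returns JSON
}

askAgent('camp deadline').then(reply => console.log(reply));
```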
## Making changes to `index.js`

Changes to `index.js` will not be reflected in the website until it is re-bundled. This is because browserify is used to bundle the requirements.

- `npm install -g browserify` to install browserify globally.
- `browserify index.js > bundle.js` to bundle up the requirements into a single file.
- `bundle.js` is the file we'll be linking to from our `index.html`, so rerun the bundling step after every change.
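For context, this is roughly the kind of `require()` usage inside `index.js` that makes the bundling step necessary (the `#chat-input` selector and the sample text are placeholders):

```js
// index.js uses CommonJS require(), which browsers don't support natively;
// browserify resolves these calls and emits a single bundle.js.
const $ = require('jquery');
require('jquery-sendkeys'); // jQuery plugin; adds $(...).sendkeys()

// e.g. type the recognised sentence into the chatbot's input field
$('#chat-input').sendkeys('camp deadline'); // placeholder selector
```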
This is how everything ties up:

This should be your directory structure (well, this is mine):
## Credits

- This awesome tutorial on how to access the webcam, load a model, and train the model in the browser itself with tf.js.
- Host the `tfjs_version` directory locally: navigating inside it and typing `python3 -m http.server` works but doesn't allow CORS. Instead, `npm init`, `npm install http-server`, and then `node_modules/http-server/bin/http-server model_types/tfjs_version -c1 --cors` serves the directory with CORS (cross-origin requests) enabled. The same http-server can be used globally, anywhere, if installed with `-g`.
- This video and the video's article, by sentdex, were also helpful for making the model in Python and loading it with tf.js.
- Importing Keras models to tf.js.
- This article on how to do some image preprocessing with JavaScript using tf.js (see the preprocessing sketch after this list).
- Fixing a memory leak by referring to the `tf.dispose()` method, from this website.
- Capturing keypresses on a website, from Stack Overflow of course.
- Testing which keypress maps to which keycode (tester here). The user would use the spacebar to confirm an action in the Dialogflow integration.
- Using `require()` in the browser: see the Browserify docs. `npm install -g browserify` and `browserify index.js > bundle.js`. We use `require()` to pull in `jquery-sendkeys`, from here.
- BotUI starter from their website.
- `pip install dialogflow` and `pip install flask-restful` for Python. `python backend.py` to obtain an API endpoint which the frontend can (hopefully) query; start the backend Flask server with this command before loading `index.html`. Example API call: `http://127.0.0.1:5000/request/camp deadline`
- BotUI examples!!!
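The preprocessing and memory-leak items above boil down to something like this sketch (the 64x64 input size is an assumption; use whatever resolution the CNN was trained on):

```js
// Turn a webcam frame into a tensor the CNN can consume.
// tf.tidy() disposes the intermediate tensors, which is what the
// tf.dispose() reference above is about (fixing the memory leak).
function preprocess(videoEl) {
  return tf.tidy(() => {
    const frame = tf.browser.fromPixels(videoEl);              // HTMLVideoElement -> int32 tensor
    const resized = tf.image.resizeBilinear(frame, [64, 64]);  // assumed model input size
    return resized.toFloat().div(255).expandDims(0);           // scale to [0, 1], add batch dim
  });
}
```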
## Key bindings

- `shift`: captures the current prediction
- `spacebar`: inserts a space into the sentence being formed
- `Ctrl+E`: sends the sentence formed to the chatbot input field, clears the sentence buffer
- `enter`: sends what was in the chatbot input field to the chatbot
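A sketch of how these bindings might be wired up in `index.js` (`finalStr` is the sentence buffer mentioned below; `currentPrediction` and the `#chat-input` selector are illustrative names):

```js
const $ = require('jquery');
require('jquery-sendkeys');

let finalStr = '';          // the sentence buffer
let currentPrediction = ''; // updated elsewhere by the prediction loop (illustrative)

// keyCode 16 = shift, 32 = spacebar; enter is handled by the chatbot input itself.
document.addEventListener('keydown', (e) => {
  if (e.keyCode === 16) {                   // shift: capture the current prediction
    finalStr += currentPrediction;
  } else if (e.keyCode === 32) {            // spacebar: insert a space
    finalStr += ' ';
  } else if (e.ctrlKey && e.key === 'e') {  // Ctrl+E: push the sentence to the chatbot input
    e.preventDefault();
    $('#chat-input').sendkeys(finalStr);
    finalStr = '';                          // clear the sentence buffer
  }
});
```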
## TODO

- Send it directly to the chatbot input and manipulate stuff over there?
- Show the "sentence buffer" every time `finalStr` is cleared.