I want my output screen to be split, with half of the page showing my webcam
and the other half displaying the gesture detected in the webcam feed in the form of text
What script or library are you using for detecting the hand gesture?
So we are using MediaPipe to convert the gestures into landmark values
And then we are storing the landmark values in a CSV file
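For anyone following along, here is a minimal sketch of that collection step, assuming the classic MediaPipe Hands solution API and one hand per frame; the file name and the label value are placeholders, not part of the original project.

```python
# Sketch: capture webcam frames, extract hand landmarks with MediaPipe Hands,
# and append each frame's landmarks as a row in a CSV file.
# "landmarks.csv" and the label value are assumptions/placeholders.
import csv
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)
label = "hello"  # the gesture currently being recorded (assumed)

with open("landmarks.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV delivers BGR
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            row = [coord for p in lm for coord in (p.x, p.y, p.z)]  # 21 points x 3 = 63 values
            writer.writerow(row + [label])
        cv2.imshow("capture", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

cap.release()
cv2.destroyAllWindows()
```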
We use an SVM to train and test on the CSV data
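And a sketch of that training step, assuming each CSV row is 63 landmark coordinates followed by a gesture label (file names and the 80/20 split are assumptions):

```python
# Sketch: train an SVM on the landmark CSV and save the model for later use.
# Assumes each row is 63 landmark values followed by a gesture label.
import joblib
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

data = pd.read_csv("landmarks.csv", header=None)
X = data.iloc[:, :-1].values   # landmark coordinates
y = data.iloc[:, -1].values    # gesture labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = SVC(kernel="rbf", probability=True)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

joblib.dump(model, "gesture_svm.pkl")  # reused later by the web backend
```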
So our UI needs to be like this:
Half of the screen should show the webcam feed that captures the gestures
And the other half should show the output for those gestures in the form of text and voice
I've watched your video on showing the webcam in a web page, but how can we do the other half of that?
Python script
It sounds like you have training data and are looking to put it to work on a live feed, displaying the output of any detected gestures.
Do you have the output of your gesture detection appearing in an HTML web page? If you don't have it appearing in some sort of browser environment, this is where it gets tricky.
No sir, we don't have anything yet, and we need to build it ourselves.
Unfortunately, I don't have too much guidance on how you can do this. At a high level, what you need is a server backend that receives your video feed as input, runs your Python script to analyze the feed, and returns the results in a form (like JSON) that JavaScript can easily parse and display in your web page.
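One possible shape for that backend, purely as a sketch: a small Flask app that accepts one webcam frame per request, runs MediaPipe plus the saved SVM on it, and returns the predicted gesture as JSON. The route name, model file, and request format here are assumptions, not a prescribed API.

```python
# Sketch of a backend: receive one JPEG frame per request, extract landmarks
# with MediaPipe, classify with the saved SVM, and return the result as JSON.
# Route name, file names, and request format are assumptions.
import cv2
import joblib
import mediapipe as mp
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("gesture_svm.pkl")
hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

@app.route("/predict", methods=["POST"])
def predict():
    # Expect the raw JPEG bytes of a single frame in the request body
    frame = cv2.imdecode(np.frombuffer(request.data, np.uint8), cv2.IMREAD_COLOR)
    if frame is None:
        return jsonify({"gesture": None, "error": "could not decode frame"}), 400
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return jsonify({"gesture": None})
    lm = results.multi_hand_landmarks[0].landmark
    row = [coord for p in lm for coord in (p.x, p.y, p.z)]
    gesture = model.predict([row])[0]
    return jsonify({"gesture": str(gesture)})

if __name__ == "__main__":
    app.run(debug=True)
```

On the browser side, the page could draw the video element to a canvas, POST each frame to this endpoint with fetch(), write the returned gesture text into the other half of the page, and use the browser's speechSynthesis API for the voice output.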
Have you looked into TensorFlow.js? It does a lot of the work on the client, which can simplify much of what you are trying to do.