Restaurant of the Future (Video)
- Image input from the overhead cameras in the restaurant is processed, and a path is planned using image processing and Dijkstra's algorithm.
- The path is sent to the Arduino using a Wi-Fi module.
- The bot traverses the given path on an omni-wheel chassis to reach the desired table.
- The chatbot starts interacting with the customer and takes the order while recognising the customer's face and emotion.
- After taking the order, the bot traverses back to its initial position and waits for the order to be placed on its serving trays.
- The bot traverses back to the customer along the same path and serves using a 3-tray serving mechanism.
- The image is resized to (300 × 300) pixels (sample resized) and edge detection is applied to the grey image (sample).
- Closed objects are assigned a white pixel value; a fractal concept is deployed for this (sample).
- The image is smoothed (sample).
- The image is divided into an (n × n) grid with cell size (d × d); here n = 30, d = 10 (sample Grids.txt).
- Each grid cell is declared an obstacle (value = 0) or a path (value = 1) depending on whether its white-pixel count crosses a threshold (see the grid-generation sketch after this list).
- The generated grid is written to a file and is the input to the shortest-path planner, Dijkstra's algorithm (src code). Each cell of the grid is treated as a vertex, and the adjacent vertices of each cell are analysed to generate an adjacency matrix of all vertices.
- The "Srcdest.txt" file stores the planned path as the sequence of vertices to traverse (sample).
- A path string, for instance R2F2B1L1, meaning move 2 steps right, 2 steps forward, 1 step backward and 1 step left, is generated by reading the vertices (see the planning sketch after this list).
- The path string is sent using Python's urllib library to the URL of the ESP8266 Wi-Fi module, obtained by connecting the PC and the module to the same network.
- The path is received by the ESP module as a string giving the units of motion in each direction. The end of the path is detected by appending the letter 'e' to each string.
- Each unit length in the string is mapped to a fixed distance on the ground using an encoder.
- A gyro with PID control is used to avoid any rotation of the bot about its axis; encoders with PID control are used to avoid drift.
- Once the destination is reached, the chassis sends a signal to the head and the head motion starts.
- After getting a signal back from the head, a "reverse" function reverses the string and adds an 'e' at the end so that the bot retraces its path.
- Cytron motor drivers are used for all three motors of the chassis.
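
For concreteness, here is a minimal Python sketch of the image-to-grid step described in the list above. It assumes OpenCV; the file names, Canny thresholds, smoothing kernel and white-pixel threshold are illustrative stand-ins rather than the project's exact values, and a simple contour fill is used here in place of the fractal-based fill.

```python
# Overhead image -> 30 x 30 occupancy grid (illustrative values throughout)
import cv2
import numpy as np

N, D = 30, 10                                  # 30 x 30 grid, 10 x 10-pixel cells

img = cv2.imread("overhead.jpg")               # hypothetical input frame
img = cv2.resize(img, (N * D, N * D))          # 300 x 300 pixels
grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(grey, 100, 200)              # edge detection on the grey image

# Fill closed objects with white (stand-in for the fractal fill step)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
filled = np.zeros_like(grey)
cv2.drawContours(filled, contours, -1, 255, thickness=cv2.FILLED)

filled = cv2.GaussianBlur(filled, (5, 5), 0)   # smoothing

# Mark each d x d cell as obstacle (0) or free path (1) by its white-pixel count
grid = np.ones((N, N), dtype=int)
for r in range(N):
    for c in range(N):
        cell = filled[r * D:(r + 1) * D, c * D:(c + 1) * D]
        if cv2.countNonZero(cell) > 20:        # illustrative threshold
            grid[r, c] = 0                     # obstacle

np.savetxt("Grids.txt", grid, fmt="%d")        # input file for the path planner
```

The planning and dispatch steps could look like the sketch below: Dijkstra's algorithm over the grid (neighbours generated on the fly rather than from an explicit adjacency matrix), the vertex list collapsed into a path string such as "R2F2B1L1" terminated with 'e', and the string sent to the ESP8266 with urllib. The direction-to-letter mapping, the source and destination cells, and the module's IP address are assumptions.

```python
# Dijkstra on the grid + path-string generation + dispatch (illustrative)
import heapq
import urllib.request
import numpy as np

grid = np.loadtxt("Grids.txt", dtype=int)
N = grid.shape[0]

def dijkstra(start, goal):
    """Shortest path between two (row, col) cells over free cells (value 1)."""
    dist, prev, pq = {start: 0}, {}, [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        r, c = u
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (r + dr, c + dc)
            if 0 <= v[0] < N and 0 <= v[1] < N and grid[v] == 1:
                nd = d + 1
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
    path, node = [], goal
    while node != start:                      # walk predecessors back to the source
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

def to_path_string(path):
    """Collapse the vertex list into a string such as 'R2F2B1L1' + 'e'."""
    letters = []
    for (r1, c1), (r2, c2) in zip(path, path[1:]):
        if   c2 > c1: letters.append("R")     # assumed: +col = right
        elif c2 < c1: letters.append("L")
        elif r2 < r1: letters.append("F")     # assumed: -row = forward
        else:         letters.append("B")
    out, i = "", 0
    while i < len(letters):                   # run-length encode the moves
        j = i
        while j < len(letters) and letters[j] == letters[i]:
            j += 1
        out += letters[i] + str(j - i)
        i = j
    return out + "e"                          # 'e' marks the end of the path

path = dijkstra((0, 0), (15, 20))             # hypothetical source and table cell
cmd = to_path_string(path)
# Send over the shared Wi-Fi network; the module's URL/IP is illustrative
urllib.request.urlopen("http://192.168.4.1/path?cmd=" + cmd)
```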
A Raspberry Pi acts as the central processing system for the bot: it recognises the customer's face and aligns the head accordingly using servo motors, and greets the customer and takes the order using a chatbot API.
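
As a rough illustration of the head-alignment part of this, the sketch below detects a face with an OpenCV Haar cascade and converts its horizontal offset into a servo correction angle. The camera field of view and the set_head_servo() helper are hypothetical, and the emotion recognition and chatbot interaction are not shown.

```python
# Face detection -> head-servo correction angle (illustrative sketch)
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

FOV_DEG = 60          # assumed horizontal field of view of the head camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(grey, 1.3, 5)
    if len(faces):
        x, y, w, h = faces[0]
        # Horizontal offset of the face from the image centre, in degrees
        offset = ((x + w / 2) / frame.shape[1] - 0.5) * FOV_DEG
        # set_head_servo(offset)   # hypothetical call that nudges the neck servo
        print("turn head by", round(offset, 1), "degrees")
```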
- After reaching the customer in mode 2, the tray motion is activated.
- The lifting mechanism uses DC motors and encoders, whereas the front-back motion uses a stepper motor.
- Using the encoders, we map encoder units to the distance the lift has to travel to reach each of the trays (see the sketch after this list).
- On reaching a tray, the front-back mechanism is activated.
- The number of steps the stepper motor has to move forward and back is set using the standard Stepper motor library.
- The lifting mechanism is driven by DC motors with a Cytron motor driver, whereas the front-back motion is driven by stepper motors with the NEX motor driver.
- To make the robot look more customer-friendly and increase interaction with the customer, we needed a multifunctional head.
- The head can align itself towards the customer sitting at the table, nod when greeting the customer, and blink its eyes according to the customer's mood.
- A link mechanism can only provide limited rotation to the revolute joints.
- Avoid big 3-D prints, as they consume a large amount of material, which is expensive.
- Slots in 3-D prints are not dimensionally accurate, so provide clearance for them.
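
The sketch below (referenced from the tray items above) illustrates the kind of unit-to-distance mapping used for the serving trays. The encoder resolution, travel per revolution, tray heights, stepper steps per revolution and tray depth are all assumed values for illustration only.

```python
# Mapping encoder counts / stepper steps to tray positions (assumed numbers)
COUNTS_PER_REV = 360          # encoder counts per lift-motor revolution (assumed)
MM_PER_REV = 40.0             # lift travel per revolution in mm (assumed)
TRAY_HEIGHT_MM = {1: 150, 2: 300, 3: 450}     # hypothetical tray heights

def lift_counts(tray):
    """Encoder counts the DC lift motor must travel to reach a given tray."""
    return round(TRAY_HEIGHT_MM[tray] / MM_PER_REV * COUNTS_PER_REV)

# Front-back motion: assumed 200 steps/rev stepper on an 8 mm lead screw
STEPS_PER_MM = 200 / 8.0
TRAY_DEPTH_MM = 120           # hypothetical push-out distance

def push_steps():
    """Stepper steps needed to push a tray fully forward (and back)."""
    return round(TRAY_DEPTH_MM * STEPS_PER_MM)

print(lift_counts(2), push_steps())   # counts to reach tray 2, steps to push out
```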
First, a neck was designed. A base was required to mount it on the body of the robot, named* 'base'; the base also contains a slot to fit a servo motor. The other half of the neck has a gear embedded in it, named 'upper neck gear'. The neck contains a pair of gears with a 1:2 ratio, the smaller gear being mounted on the servo motor; the slot lets the servo slide in and out so that it aligns perfectly with the other gear. Both parts of the neck are held together by a rod with the help of bearings, which also give the neck smooth movement.

Another part, named 'upper base plate', holds the mechanism for head rotation. To make the head rotate sideways, we built a revolute joint named 'revolute 1', mounted on the base plate with a 6 mm bolt. Another revolute joint, named 'support stand', was provided to make the head rotate up and down; it is mounted inside 'revolute 1' with a 6 mm bolt. The 'support stand' also works as a stand to hold the hollow shell named 'head shell', i.e. the face. To make these revolute joints rotate, we used a mechanism similar to a crank shaft, with two servo motors mounted on the 'upper base plate' and two links attached to the head via the support stand, joining it to the servo horns. When both servos are rotated in opposite directions from the same starting reference angle, the head moves up and down; when only one servo is rotated in either direction, the head tilts accordingly. To give the links the flexibility to rotate along a curved path, we used small pieces of rubber belt at the ends of the links. Slots to mount the LED eyes and the face-detection camera are provided on the 'head shell'. All parts were 3-D printed with PLA.

*named: all names refer to their SolidWorks files.
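
As a rough sketch of the head kinematics described above, the snippet below maps a desired neck yaw to a servo angle through the 1:2 gear pair and mixes nod/tilt commands into the two crank-servo angles. The mixing is one plausible interpretation of the description (nod = both servos move in opposite directions from the reference; tilt = only one servo moves), and the reference angle, gear orientation and sign conventions are assumptions.

```python
# Neck yaw and head nod/tilt -> servo angles (one plausible interpretation)
REF = 90            # neutral angle of the servos in degrees (assumed)
GEAR_RATIO = 2      # 1:2 neck gears, servo mounted on the smaller gear

def neck_servo_angle(yaw_deg):
    """Servo angle needed for a given side-to-side neck rotation."""
    return REF + GEAR_RATIO * yaw_deg      # servo turns twice the neck angle

def crank_servo_angles(nod_deg, tilt_deg):
    """Angles for the two crank servos driving the up/down and tilt motion."""
    left  = REF + nod_deg + tilt_deg       # tilt applied to one servo only
    right = REF - nod_deg                  # nod = opposite rotation of both servos
    return left, right

# Example: nod down 15 degrees while keeping the head level
print(crank_servo_angles(-15, 0))          # -> (75, 105)
```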