AIoT-Based Object Classification Using Edge Impulse & Raspberry Pi

Object detection, where each detected activity can be viewed through live classification or over an IP address, is a popular topic nowadays. It would be great fun if machines could recognise things the way humans do.

With Edge Impulse, users can train their own AI and machine-learning models without deep knowledge of programming or of AI/ML concepts. Edge Impulse is a cloud-based platform that pairs with the Raspberry Pi, which captures live video and images via its camera interface.

It can work on both an intranet and the Internet, helping experimenters and hobbyists showcase their designs and develop applications that solve real problems. Some examples are:

  1. Entrance door monitor
  2. Unknown person alert
  3. Classification and separation of industrial objects using robotic arms
  4. Fruit counting on trees or in a sorting machine

Components required

  1. Raspberry Pi 3B
  2. USB camera
  3. Keyboard
  4. Monitor
  5. Mouse
  6. Edge Impulse (website)
  7. 32GB SD card (with adapter)
  8. HDMI-to-VGA cable
  9. 5V power adapter with USB Type-C connector
  10. SD card reader

Development and working

  1. Download the Raspberry Pi Imager onto any PC
  2. Open the Raspberry Pi Imager
  3. Choose an OS such as Raspberry Pi OS (32-bit)
  4. Choose SD Card
  5. Select Write
  6. Insert the SD card into the Raspberry Pi
  7. Connect the Raspberry Pi to the power supply, keyboard, mouse and monitor
  8. If the operating system has been installed correctly, a window will appear saying “Welcome to Raspberry Pi Desktop”
  9. Connect a USB camera to take a picture
  10. Open the RPi terminal
  11. Run the commands below to install the dependencies
    • curl -sL https://deb.nodesource.com/setup_12.x | sudo bash -
    • sudo apt install -y gcc g++ make build-essential nodejs sox gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps
    • sudo npm install edge-impulse-linux -g --unsafe-perm
  12. Next, go to https://www.edgeimpulse.com/
  13. Enter your name and email ID
  14. Register for free and login to your account
  15. Next, launch Edge Impulse with the following command
    • edge-impulse-linux
  16. If the connection is successful, the Raspberry Pi camera will appear under the Devices section in Edge Impulse
  17. Here, you can take a picture of anything like a bottle, mug or any face
  18. In the Data acquisition section, capture at least 100 images of different objects for training and testing purposes. You can rebalance your dataset to a 70:30 train/test split
  19. Next, go to the Dashboard and select the labeling method. It must be Bounding boxes (for object detection)
  20. Label all the objects via Labeling Queue
  21. Now go to Impulse Design
  22. Set the image width and height to 320 x 320
  23. Change the name of the object detection project
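The 70:30 rebalance from the data-acquisition step (step 18) can be sketched in Python. The function below is an illustrative stand-in for what Edge Impulse does on its platform, not its actual implementation:

```python
import random

def train_test_split(samples, train_ratio=0.7, seed=42):
    """Shuffle and split samples roughly 70:30, mimicking
    Edge Impulse's dataset rebalance (illustrative only)."""
    rng = random.Random(seed)              # fixed seed for reproducibility
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_train = round(len(shuffled) * train_ratio)
    return shuffled[:n_train], shuffled[n_train:]

images = [f"img_{i:03d}.jpg" for i in range(100)]   # e.g. 100 captured images
train, test = train_test_split(images)
print(len(train), len(test))  # 70 30
```

With 100 captured images, this yields 70 for training and 30 for testing, matching the 70:30 split mentioned above.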

  24. Save the impulse
  25. In the Image section, configure the processing block and select Parameters at the top of the screen. You can save the parameters in either RGB or grayscale
  26. Now go to Generate features
  27. Since the captured images have different dimensions, they will be resized
  28. In the Object detection section, set the number of training cycles and the learning rate to 25 and 0.015, respectively
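The resizing mentioned in step 27 can be illustrated with a small helper. "Fit shortest axis" is one common resize strategy (scale so the shorter side matches the target, then centre-crop the longer side); it is assumed here purely for illustration, not as Edge Impulse's exact implementation:

```python
def fit_shortest_axis(width, height, target=320):
    """Compute the scaled size and centre-crop box that map an arbitrary
    image onto a 320x320 model input ("fit shortest axis" style,
    assumed here for illustration)."""
    scale = target / min(width, height)           # shorter side -> target
    new_w, new_h = round(width * scale), round(height * scale)
    left = (new_w - target) // 2                  # centre-crop the longer side
    top = (new_h - target) // 2
    return (new_w, new_h), (left, top, left + target, top + target)

size, crop = fit_shortest_axis(640, 480)          # e.g. a 640x480 USB-camera frame
print(size, crop)  # (427, 320) (53, 0, 373, 320)
```

A 640x480 frame is first scaled to 427x320, then 53 pixels are trimmed from each side to yield the square 320x320 input the impulse expects.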

  29. Start training
  30. After training, note the accuracy score of the model
  31. To validate your model, go to Model testing and select Classify all
  32. Now go to Live classification. In real time, an object held near the USB camera appears with the appropriate label (such as bottle or mug)
  33. If you want to see the IP address, run the following command in the RPi terminal
    • edge-impulse-linux-runner
  34. This builds the model and downloads it to the Raspberry Pi
  35. Enter the IP address, e.g. http://192.168.1.19:4912, in a browser for live classification on the Raspberry Pi
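When the model runs on the Pi, each detection comes back with a label, a confidence value, and a bounding box. A small helper to filter detections by confidence might look like this; the result shape below is an assumption modelled on the Edge Impulse Linux SDK's object-detection output, and the sample data is hand-made:

```python
def top_detections(result, threshold=0.5):
    """Return (label, confidence) pairs above `threshold` from a runner
    result (shape assumed from the Edge Impulse Linux SDK)."""
    boxes = result.get("result", {}).get("bounding_boxes", [])
    return [(b["label"], b["value"]) for b in boxes if b["value"] >= threshold]

# Hand-made sample result in the assumed shape:
sample = {"result": {"bounding_boxes": [
    {"label": "bottle", "value": 0.92, "x": 10, "y": 20, "width": 64, "height": 64},
    {"label": "mug",    "value": 0.31, "x": 80, "y": 40, "width": 48, "height": 48},
]}}
print(top_detections(sample))  # [('bottle', 0.92)]
```

Filtering like this keeps only confident detections (here the bottle at 0.92), which is useful before triggering an alert or actuating a robotic arm in the applications listed earlier.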

