Jetson Nano-based app that uses computer vision and a CNN model to analyse sitting posture. Alerts are sent for poor posture, with real-time monitoring and user feedback. Optimised for the Jetson Nano, with data storage and user statistics. Promotes better spinal health and posture habits.

The Posture Corrector App is a computer vision-based application developed on the Jetson Nano platform. It uses a CNN model, specifically the "Single Pose MoveNet" model from TensorFlow, converted to ONNX and then to a TensorRT engine, to analyse the user's sitting posture in real time. By calculating the hip and neck angles and other distances between key body joints detected by the model, the app determines whether the user is sitting upright, leaning forward, or reclined.
- Real-time posture analysis using a camera connected to the Jetson Nano (Raspberry Pi Camera Module 2).
- Real-time alerts sent to the Django web app for poor posture detection.
- User-friendly web interface for monitoring and feedback.
- Data storage in a PostgreSQL database for historical analysis.
- Authentication system for user profiles and personalised statistics.
- Spinal health promotion through posture correction.
- Snapshots of incorrect postures captured during the monitoring session.
Below is a short demo of the monitoring system in action. The Jetson Nano keeps track of incorrect posture and notifies the user through the web app if the incorrect posture is sustained for 10 seconds. If the incorrect posture doesn't persist for the set duration, no alert is sent, on the assumption that the user is simply moving. This mechanism provides flexibility and avoids overwhelming the user with alerts; a minimal sketch of the debounce logic follows the demo below.
real-time.alerts.demo.1.mp4
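The actual logic lives in monitor.py; the following is only a minimal sketch of how such a sustained-posture check could work, assuming the 10-second threshold described above (the class and method names are illustrative, not the project's actual API):

```python
import time

ALERT_THRESHOLD_S = 10  # alert only if bad posture is sustained this long


class PostureDebouncer:
    """Tracks how long an incorrect posture has persisted before alerting."""

    def __init__(self, threshold_s=ALERT_THRESHOLD_S):
        self.threshold_s = threshold_s
        self.bad_since = None   # timestamp when incorrect posture started
        self.alerted = False    # avoid sending duplicate alerts

    def update(self, posture_is_bad: bool) -> bool:
        """Return True exactly once when bad posture exceeds the threshold."""
        if not posture_is_bad:
            # User corrected their posture (or is just moving): reset the timer.
            self.bad_since = None
            self.alerted = False
            return False
        if self.bad_since is None:
            self.bad_since = time.monotonic()
        if not self.alerted and time.monotonic() - self.bad_since >= self.threshold_s:
            self.alerted = True
            return True
        return False
```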
- Install the necessary dependencies and libraries listed in the requirements.txt file using the following command:
pip install -r requirements.txt
- Create a PostgreSQL database and link it to the app in settings.py (see the example configuration after this list).
- Add your IP address to ALLOWED_HOSTS in settings.py.
- Make migrations using the following command:
python manage.py makemigrations
- Migrate using the following command:
python manage.py migrate
- Run the server using the following command:
python manage.py runserver <host-address>:<port>
- Access the web app through a browser to monitor posture, view statistics, and provide feedback.
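For reference, here is a minimal sketch of the relevant settings.py entries; the database name, credentials, and host below are placeholders to adapt to your own setup:

```python
# settings.py (excerpt) -- placeholder values, replace with your own
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "posture_corrector",
        "USER": "postgres",
        "PASSWORD": "<your-password>",
        "HOST": "localhost",
        "PORT": "5432",
    }
}

# Allow the server to be reached from your machine's IP address
ALLOWED_HOSTS = ["localhost", "127.0.0.1", "<your-ip-address>"]
```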
- Write the Jetson Nano image onto an SD card following the instructions on the NVIDIA website: https://developer.nvidia.com/embedded/learn/get-started-jetson-nano-devkit#setup.
- Connect the Jetson Nano to the camera module and ensure it is properly configured.
- Install TensorRT, numpy, pycuda, and the latest version of OpenCV on the Jetson Nano.
- In config.yaml, set the host to the address where your Django app is hosted and the port to the port where the app is listening:
server:
host: <host-address>
port: <port>
- Open a terminal and type in the following command:
export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1
- Run the monitor.py script and authenticate with the account you created on the Django app (a sketch of how the Jetson side could talk to the server is given after this list).
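The actual client code is in monitor.py; purely as an illustration, here is a minimal sketch of how the Jetson side could read config.yaml and post an alert to the server. The /api/alerts/ endpoint, token-based authentication, and payload fields are assumptions for the example, not the app's documented API:

```python
import requests
import yaml

# Load the server address from config.yaml (structure shown above)
with open("config.yaml") as f:
    cfg = yaml.safe_load(f)
base_url = f"http://{cfg['server']['host']}:{cfg['server']['port']}"


def send_alert(token: str, posture: str) -> None:
    """Send a poor-posture alert to the Django app (illustrative endpoint)."""
    response = requests.post(
        f"{base_url}/api/alerts/",
        json={"posture": posture},
        headers={"Authorization": f"Token {token}"},
        timeout=5,
    )
    response.raise_for_status()
```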
Once the monitoring script is running and authentication is complete on the Jetson Nano side, the user is prompted to choose an angle to be monitored from. There are three options:
- Lateral Right
- Frontal
- Lateral Left
Some of the key body joints detected by the MoveNet model are used to determine the user's posture, as shown below:
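To illustrate the idea, here is a minimal sketch of how an angle could be derived from two detected keypoints and mapped to a posture class. The keypoint coordinates, 20-degree threshold, and classification rule are placeholders, not the app's tuned values:

```python
import math


def angle_from_vertical(p1, p2):
    """Angle (degrees) of the segment p1 -> p2 relative to the vertical axis.

    p1 and p2 are (x, y) keypoints in image coordinates, e.g. hip and shoulder.
    """
    dx = p2[0] - p1[0]
    dy = p1[1] - p2[1]  # image y grows downwards, so flip it
    return math.degrees(math.atan2(dx, dy))


# Example: classify trunk lean from hip and shoulder keypoints (lateral view).
hip, shoulder = (320, 400), (360, 250)
trunk_angle = angle_from_vertical(hip, shoulder)
if abs(trunk_angle) < 20:
    posture = "upright"
elif trunk_angle > 0:
    posture = "forward"
else:
    posture = "reclined"
print(posture, round(trunk_angle, 1))
```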
The original model was a TensorFlow Lite file, which was converted to an ONNX file and then to a TensorRT engine to speed up inference (a sketch of the conversion pipeline is given after the table). The following table shows the performance of each model format on the Jetson Nano:
| Model Format | Memory Usage (MB) | Video Latency (s) | Inference Time per Frame (s) | Average FPS |
|---|---|---|---|---|
| TFLite | 328.11 | 5.78 | 0.45 | 2.26 |
| ONNX | 98.06 | 4.80 | 0.36 | 3.00 |
| TensorRT | 156.86 | 0.44 | 0.038 | 19.00 |
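As a rough sketch of that pipeline, the TFLite model can typically be converted to ONNX with tf2onnx and then built into an engine with trtexec on the Jetson Nano (trtexec ships with JetPack under /usr/src/tensorrt/bin); the file names, opset, and FP16 flag below are assumptions to adapt to your setup:

```
python -m tf2onnx.convert --tflite movenet_singlepose.tflite --output movenet_singlepose.onnx --opset 13
trtexec --onnx=movenet_singlepose.onnx --saveEngine=movenet_singlepose.trt --fp16
```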
Contributions to the Posture Corrector App are welcome! If you have any ideas, bug fixes, or improvements, please submit a pull request. Make sure to follow the established coding style and guidelines.
This project is licensed under the MIT License. See the LICENSE file for more information.
For any questions or inquiries, please reach out to [email protected]