“The future depends on what you do today.”
Gandhi
ARGait

An augmented reality system that gives users a real-time visualisation of foot kinematics in three-dimensional space.
It allows the user to constantly ‘see their feet’ while moving around, eliminating the need for external visual aids such as mirrors, and allowing self-guided, immediate modification of foot placement with each step.
How it works
The system uses the smartphone’s camera to continuously capture real-time video of the surrounding environment. This live video stream is rendered to the phone’s display as the primary visual layer. Overlaid on this feed are dynamically generated graphical representations of the user’s feet.
These virtual foot models are aligned with the physical position of the user’s feet using:
• ToF sensor to determine the foot’s distance from the ground (VL53L1X)
• Gyroscope and accelerometer data to measure the foot’s angle (BNO055)
• Magnetometer to determine the whole foot’s magnetic heading (also the BNO055)
There are also four pressure sensors in the sole of each foot. Their data is used when both feet are on the ground, at which point it drives the weight-bearing training.
Each foot has its own dedicated I²C bus to a Teensy 4.1 microcontroller. The Teensy is attached to the back of the belt and connected via its USB jack to an Android phone mounted in the headset.
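To make the per-foot chain concrete, here is a minimal Teensy 4.1 sketch in Arduino C++. It assumes the Adafruit VL53L1X and BNO055 libraries, one foot wired to the Wire1 bus, and an invented CSV line format over USB serial; it is an illustration of the idea, not the project’s actual firmware.

```cpp
// Illustrative per-foot sensor read on a Teensy 4.1 (not the actual firmware).
// Assumes the Adafruit_VL53L1X and Adafruit_BNO055 libraries; one foot on Wire1.
#include <Wire.h>
#include <Adafruit_VL53L1X.h>
#include <Adafruit_BNO055.h>

Adafruit_VL53L1X tof;                      // ToF ranger: foot height above the ground
Adafruit_BNO055 imu(55, 0x28, &Wire1);     // IMU: fused gyro/accel/mag orientation

void setup() {
  Serial.begin(115200);                    // USB serial link to the Android phone
  Wire1.begin();
  tof.begin(0x29, &Wire1);                 // default VL53L1X address (assumed)
  tof.startRanging();
  imu.begin();
}

void loop() {
  // Fused orientation from the BNO055 (heading/roll/pitch in degrees).
  imu::Vector<3> euler = imu.getVector(Adafruit_BNO055::VECTOR_EULER);

  // Distance to the ground in millimetres, when a fresh sample is ready.
  int16_t mm = -1;
  if (tof.dataReady()) {
    mm = tof.distance();
    tof.clearInterrupt();
  }

  // Stream one CSV line per sample; the phone-side app would parse this.
  Serial.printf("L,%d,%.1f,%.1f,%.1f\n", mm, euler.x(), euler.y(), euler.z());
  delay(10);                               // ~100 Hz, plenty for gait visualisation
}
```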
The graphics are done with the Unity Engine.
The graphical interface has two modes that activate automatically depending on what the user is doing: 1) In-motion and 2) Standing.
Interface
1) In-motion mode: When either foot is off the ground, the user sees the graphical representation of each foot moving in 3-D space.
The dynamic bar graph at the top of the interface indicates the duration of each foot's swing phase in milliseconds.

NB: For clarity, the camera view has been removed from these graphics, but not from the video below.
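The swing-phase number behind that bar graph can be derived from foot-contact transitions. Below is a hedged sketch in plain C++ of how the duration might be timed for one foot; the contact threshold and the names are hypothetical, not taken from the project.

```cpp
// Illustrative swing-phase timer for one foot (not the actual app code).
// The foot is treated as "off the ground" when the summed pressure-sensor
// reading falls below an assumed contact threshold.
#include <cstdint>

const uint32_t CONTACT_THRESHOLD = 50;    // assumed raw-pressure threshold

struct SwingTimer {
  bool inSwing = false;
  uint32_t swingStartMs = 0;
  uint32_t lastSwingMs = 0;               // value shown on the bar graph

  // Call once per sample with the current time and total plantar pressure.
  void update(uint32_t nowMs, uint32_t totalPressure) {
    bool onGround = totalPressure >= CONTACT_THRESHOLD;
    if (!onGround && !inSwing) {           // toe-off: swing phase begins
      inSwing = true;
      swingStartMs = nowMs;
    } else if (onGround && inSwing) {      // foot contact: swing phase ends
      inSwing = false;
      lastSwingMs = nowMs - swingStartMs;  // duration in milliseconds
    }
  }
};
```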
2) Standing mode: When bilateral foot contact is maintained for more than 15 seconds, the system transitions into the Standing mode interface. The on-screen display updates to show both feet, giving a direct comparison of bilateral weight-bearing.

The system performs real-time analysis by comparing pressure data from corresponding sensor groups:
a) Forefoot comparison: Pressure values from the anterior sensors of each foot are evaluated against one another.
b) Heel comparison: Pressure values from the posterior sensors are similarly compared.
This mode provides an assessment of the symmetry of plantar load distribution between the left and right feet.
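As an illustration of how the standing-mode switch and the bilateral comparison could be computed, here is a small C++ sketch. The struct names, contact flags and the 0-to-1 load-share measure are placeholders of mine; only the 15-second rule and the forefoot/heel grouping come from the description above.

```cpp
// Illustrative standing-mode logic: switch modes after 15 s of bilateral
// contact, then compare forefoot/heel load between feet (hypothetical names).
#include <cstdint>

struct FootPressure {          // per-foot readings from the four insole sensors
  uint32_t forefoot;           // sum of the two anterior sensors
  uint32_t heel;               // sum of the two posterior sensors
};

const uint32_t STANDING_DELAY_MS = 15000;   // bilateral contact time before switching

enum class Mode { InMotion, Standing };

struct ModeTracker {
  Mode mode = Mode::InMotion;
  bool bilateral = false;
  uint32_t bilateralSinceMs = 0;

  void update(uint32_t nowMs, bool leftOnGround, bool rightOnGround) {
    bool bothDown = leftOnGround && rightOnGround;
    if (bothDown && !bilateral) { bilateral = true; bilateralSinceMs = nowMs; }
    if (!bothDown)              { bilateral = false; mode = Mode::InMotion; }
    if (bilateral && nowMs - bilateralSinceMs > STANDING_DELAY_MS) mode = Mode::Standing;
  }
};

// Left share of the load for one sensor group: 0.5 means perfectly symmetrical.
double loadShare(uint32_t left, uint32_t right) {
  uint32_t total = left + right;
  return total == 0 ? 0.5 : static_cast<double>(left) / total;
}
```

In Standing mode the interface would then show loadShare() for the forefoot pair and for the heel pair side by side, which is the bilateral comparison described above.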

Conclusion and considerations
Long-term potential: 10/10
Limitations
- There are approximately seven angles per leg to analyse during a normal walking cycle. This prototype addressed only two of them.
- Problems with Android access permissions when Unity uses either Bluetooth or USB. (I can't figure them out.)
- Issues with the BNO055 needing regular recalibration and drifting. (Now using the BNO085 instead.)
Road ahead
- To address the limited number of angles analysed, I am putting a small module on each leg segment, each with its own microcontroller, for a total of 16 sensors. The modules communicate wirelessly over ESP-NOW; a minimal sketch of one module is included at the end of this section.
- Would love to collaborate with someone on the communication issue; I’ve been pulling my hair out over it for long enough now! (Quite sure it is an Android permissions issue...)
- Creative possibilities are endless: virtual training challenges, etc.
- Learning TinyML and TensorFlow at the moment. It will be brilliant to incorporate AI into the interface for real-time verbal explanations and guidance.
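For the wireless leg modules mentioned above, here is a minimal ESP-NOW sender as it might run on an ESP32-based module, using the standard Arduino esp_now API. The hub MAC address and packet layout are placeholders rather than the project’s actual values.

```cpp
// Minimal ESP-NOW sender for one leg-segment module (illustrative only).
// Uses the standard ESP32 Arduino esp_now API; the receiver MAC address
// and packet layout are placeholders.
#include <WiFi.h>
#include <esp_now.h>

// MAC address of the receiving hub (placeholder).
uint8_t hubAddress[] = {0x24, 0x6F, 0x28, 0x00, 0x00, 0x01};

struct SegmentPacket {        // one leg-segment sample
  uint8_t segmentId;          // which of the modules this is
  float pitch, roll, yaw;     // segment orientation in degrees
};

void setup() {
  Serial.begin(115200);
  WiFi.mode(WIFI_STA);        // ESP-NOW requires station mode
  if (esp_now_init() != ESP_OK) {
    Serial.println("ESP-NOW init failed");
    return;
  }
  esp_now_peer_info_t peer = {};
  memcpy(peer.peer_addr, hubAddress, 6);
  peer.channel = 0;
  peer.encrypt = false;
  esp_now_add_peer(&peer);
}

void loop() {
  SegmentPacket pkt = {1, 0.0f, 0.0f, 0.0f};   // orientation would come from the IMU
  esp_now_send(hubAddress, reinterpret_cast<uint8_t*>(&pkt), sizeof(pkt));
  delay(10);
}
```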