

Accurate orientation estimation of the head/neck using inertial measurement units

Team Members

  • Richard Hsiao
  • Michael Fanton

 

Software:

Matlab Scripting:

OpenSim Model:


Description

This project focuses on using inertial measurement units (IMUs) in the "wild," meaning using IMUs to capture state-specific data in a non-laboratory, uncontrolled environment. IMUs capture linear acceleration and angular velocity data using accelerometers and gyroscopes. By integrating these signals, the position and orientation of the IMU can be estimated, which provides a useful way to capture biomechanical data such as joint angles and body positions. However, one of the largest issues with capturing real-time position data is noise, present not only in the environment but also in the sensors themselves. Using integration to determine positions and joint angles introduces drift error: a systematic artifact of the integration that causes the data to shift in one direction over time. Existing studies have shown that applying a Kalman filter can ameliorate the effects of this drift.
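To make the drift mechanism concrete, here is a minimal MATLAB sketch (with made-up bias and noise values, not our experimental data) showing how integrating a noisy, slightly biased gyroscope signal accumulates error even when the sensor is perfectly still:

    % Minimal drift illustration: integrate a noisy, biased gyro signal.
    % Bias and noise magnitudes are arbitrary, for illustration only.
    fs = 100;                                % sample rate (Hz)
    t = (0:1/fs:60)';                        % 60 s of samples
    bias = 0.01;                             % deg/s constant gyro bias
    noise = 0.1 * randn(size(t));            % deg/s white measurement noise
    omega = zeros(size(t)) + bias + noise;   % sensor is actually stationary
    theta = cumtrapz(t, omega);              % integrated angle (deg) drifts over time
    plot(t, theta); xlabel('Time (s)'); ylabel('Integrated angle (deg)');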

Our project aims to use IMUs to capture orientations of the head/neck and then simulate these orientations in OpenSim, where we can analyze the kinematics and forces acting on the head and neck. The XSens model outputs neck joint angles using a two-pivot neck joint model (Young, 2010). Other biomechanical models constrain the neck joint angles so that they are distributed over the cervical vertebrae (Vasavada). Distributing the angles among the vertebrae may give a more accurate representation of the forces acting on each vertebra. In this project, we aim to determine whether distributing the joint angles is significant in determining the overall kinematics of the neck.

Research Questions

How can we integrate biomechanical models with Kalman filters to more accurately estimate orientations from IMUs?

Does filtered IMU position data become less accurate at higher speeds?

Can multiple IMUs be used to measure neck angle?

Does the way the improved Vasavada head/neck model constrains the joint angles (distributing the neck angle over the cervical vertebrae) agree with the kinematic data from motion capture?

How does the improved Vasavada head/neck model compare with the results from the XSens IMU system for neck angle?


Methods

In order to look at the kinematics of the head/neck, we plan to synchronously collect marker-based motion capture data and IMU data while the subject moves at 40 bpm, 80 bpm, and 120 bpm. The motion capture data will be used as the ground truth, from which we calculate joint angle data representing what is actually happening during neck flexion. The IMU data will be used to capture joint angle data for the XSens two-pivot neck model. In addition, these same joint angles will be used as inputs to the Vasavada OpenSim model to capture positions and joint angles of a multi-joint neck model.

To characterize the precision of the IMUs, we collected data at different speeds and will compare accuracy across speeds. To compare the models, we will take the relative changes in joint angles to see whether each model captures the same kinematics as the motion capture.

Progress

We started our project by collecting data on knee flexion. We wanted to show that IMUs can capture joint angle data, but that the data comes with some error. As a starting point, we model head/neck flexion as a pin joint, so we began by illustrating the issues with using IMUs as orientation trackers on a simple 2D knee flexion movement, which can also be modeled as a pin joint. The methodology used here to calculate the angles between rigid bodies carries over to the neck model.

Examining knee flexion with IMUs and motion capture

  1. Collected experimental data of knee flexion at 40 bpm, 80 bpm, and 120 bpm using a high-speed motion capture camera and the XSens IMU tracking system.
    By using high-speed mo-cap, markers on the shank, knee joint, and thigh, and a goniometer, we were able to capture the ground truth of the biomechanical movement and state data during knee flexion. We used knee flexion as a starting point because the data was easier to collect: the XSens system includes tracking for the thigh and shank. These steps were also the starting point for another group project (Using Inertial Measurement Units to Calculate Knee Flexion Angle), where the steps for the knee flexion experiment can be found.

  2. Processed the motion capture data of knee flexion/extension to compute the knee angle from the markers


    The above video shows the knee flexion set-up and movement as well as the trackers (yellow - thigh, green - knee joint, blue - shank). We can see the knee has a range of motion (ROM) from around 10 degrees to 80 degrees. Using the 2D marker position data, we can find the vector from the knee joint to the thigh marker and the vector from the knee joint to the shank marker. From there, we can calculate the angle between the two vectors and estimate the knee joint angle with the equation cos(θ) = dot(v, w) / (norm(v) * norm(w)). The plot below shows the knee angle over time; it follows what we observed in the video.
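    A minimal MATLAB sketch of this angle calculation (the marker coordinates here are made-up values; in our pipeline they come from the tracked 2D marker positions):

    % Knee angle from 2D marker positions (illustrative coordinates).
    thigh = [0.10, 0.90];    % thigh marker (x, y)
    knee  = [0.12, 0.50];    % knee joint marker
    shank = [0.35, 0.20];    % shank marker
    v = thigh - knee;        % vector from knee joint to thigh marker
    w = shank - knee;        % vector from knee joint to shank marker
    theta = acosd(dot(v, w) / (norm(v) * norm(w)));   % knee angle in degrees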

  3. Processed the raw IMU data of knee flexion to obtain position by integrating the linear accelerations and angular velocities
    In order to show the drift error inherent in the raw IMU data, we took the raw data from the IMUs and integrated it to obtain position data; this step illustrates the error.

Figure 1. Linear accelerations and angular velocities from the raw data of the IMUs placed on the upper and lower leg

The linear acceleration of the upper leg is fairly constant, which is what we expect because only the lower leg moves to flex the knee in this experiment. The lower leg's acceleration also looks as expected; however, in both cases the raw data is very noisy. If we integrate this data, we would expect to see position data similar to what was captured in the motion capture video: the upper leg should remain relatively constant with minimal change in position, while the lower leg should show oscillatory changes in position. The plot below shows the integrated angular velocity for the upper and lower leg.
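A minimal sketch of this integration step in MATLAB (variable names are ours; t, gyro_*, and acc_* stand for the timestamps, angular velocities, and linear accelerations exported from the sensors):

    % Integrate angular velocity once for segment angle; integrate linear
    % acceleration twice for position. cumtrapz is cumulative trapezoidal
    % numerical integration, so sensor bias accumulates into drift.
    angle_upper = cumtrapz(t, gyro_upper);   % deg, from deg/s
    angle_lower = cumtrapz(t, gyro_lower);
    vel_lower = cumtrapz(t, acc_lower);      % m/s, from m/s^2
    pos_lower = cumtrapz(t, vel_lower);      % m; drift grows roughly quadratically
    plot(t, angle_upper, t, angle_lower);
    legend('Upper leg', 'Lower leg'); xlabel('Time (s)'); ylabel('Angle (deg)');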

Figure 2. The positions of the upper and lower leg from the processed (integrated) angular velocities of the IMUs

And behold: the upper leg, which should hold a constant value, begins to drift away from it after some timesteps. This is the systematic error we expected from integrating the raw data. Similarly, the lower leg shows drift toward the later timesteps. The raw data can be compared with the processed data that comes with the XSens software, but more on this later. This data illustrates the motivation for finding a way to produce more accurate orientation estimates from IMUs.

 

Now that we've looked at some raw IMU data and its difficulties, we will start from the processed IMU data output by the XSens system, which presumably applies some black-box filtering to remove part of the error inherent in the data. (We should add a plot of the processed sensor data against the raw sensor data.) From the processed IMU orientation data, we will calculate the sagittal neck joint angle by assuming a biomechanical model in which the torso and head are rigid bodies connected by a pin joint. We then want to feed this neck joint angle into the improved Vasavada model, which constrains the model so that the single neck angle is distributed across several vertebrae. From there, we can compare the model's kinematics to ground truth data from motion capture and to the results from XSens.
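One way to compute this pin-joint angle from the processed orientations is sketched below. This is a hedged example: we assume [w x y z] quaternions for the torso and head sensors, and that the pitch of the relative rotation corresponds to the sagittal plane (the actual axis depends on the sensor frame conventions):

    % Sagittal neck angle from two orientation quaternions ([w x y z]).
    q_rel = quatmul(quatconj(q_torso), q_head);  % torso-to-head rotation
    w = q_rel(1); x = q_rel(2); y = q_rel(3); z = q_rel(4);
    pitch = asind(2*(w*y - z*x));                % deg; pitch of the relative rotation

    function q = quatconj(q)
        q = [q(1), -q(2:4)];
    end
    function q = quatmul(a, b)
        q = [a(1)*b(1) - dot(a(2:4), b(2:4)), ...
             a(1)*b(2:4) + b(1)*a(2:4) + cross(a(2:4), b(2:4))];
    end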

Examining neck flexion with IMUs and motion capture for the XSens and Vasavada biomechanical models

1. Collected experimental data of neck flexion at 40 bpm, 80 bpm, and 120 bpm using a high-speed motion capture camera and the XSens IMU tracking system.

By using high-speed mo-cap with markers on the shoulder, IMU, neck, cheek, and forehead, we were able to capture the ground truth of the biomechanical movement and state data during neck flexion. We used an altered version of the knee flexion experiment from the first part to capture neck flexion. The motion capture was processed using Kinovea; since we didn't use a goniometer, we used the software to extract the angle and position data. The motion captured with the XSens IMUs was recorded with the subject flexing and extending their neck at different rates.

Figure 3. Experiment set-up with the high-speed video capture (120 fps) and marker/IMU placement on the subject.

We put markers on the shoulder, the head, and the spots where we estimated the C1, C3, and C7 cervical vertebrae to be. We also had the subject flex and extend their neck at 40 bpm, 80 bpm, and 120 bpm to get motion data via mo-cap and IMUs at different speeds. From the marker data, we can extract head positions as well as joint angles.

2. Processing the IMU data to get processed joint angle data of the neck

The XSens IMU tracking outputs joint angle data for C7T1 and C1Head. We plan to use the joint angles we captured as inputs to the OpenSim Vasavada model to calculate positions of the head, and to compare these positions to the motion capture data. The specific positions of each vertebra were also used to calculate the joint angles of the vertebrae. The XSens system exports body segment positions and body segment orientations, the latter as quaternions. The XSens data will be kept as is, using the processing built into the software; however, we use these same angles as inputs to the OpenSim Vasavada neck model. The Vasavada model then distributes these neck angles along the cervical vertebrae through the constraints of the model (Vasavada).

OpenSim model with the joint angles from the XSens IMUs

Figure 4. Vasavada OpenSim neck model driven by the .mot file containing the joint angles collected from the IMUs

Side note: in the Vasavada model, the joint angles are coupled by functions. For example, if you adjust one joint by an angle of 1 degree, another joint adjusts by a different factor. To reduce it to a two-link model, we could set the coupling weight to 0, which would limit the number of dependent variables.
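As a toy illustration of this kind of coupling (the weights here are made up; the actual model defines its own coupling functions between coordinates):

    % Toy example: distribute one measured neck angle across joints with
    % fixed weights (made-up values, not the Vasavada model's own functions).
    neck_angle = 30;                        % deg, total sagittal neck flexion
    weights = [0.25, 0.20, 0.20, 0.35];     % per-joint shares, summing to 1
    joint_angles = weights * neck_angle;    % per-joint angles in deg
    % Setting a weight to 0 removes that joint's dependence on the measured
    % angle, effectively collapsing the chain toward a two-link model.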

In order to input the XSens IMU data into OpenSim, we used a MATLAB script to write a .mot file that controls pitch2 and pitch1, which correspond to the movement of the C7T1 and C1Head joints in the sagittal plane, respectively. We had to modify the .osim file available from Vasavada to remove the constraints placed on the C1Head and C7T1 joints, because the joint angles obtained from the IMUs exceeded the range of motion the model could handle, resulting in clipping; removing these constraints eliminated the clipping errors. From there, the Analyze tool was used to obtain the BodyKinematics of the OpenSim model, giving us the positions of the skull and each cervical vertebra. Using the same calculation as for the knee joint above, we used the positions of T1, C1, C3, C7, and the skull to obtain three joint angles. T1 was constant and served as a reference point for the torso, which did not move in the biomechanical model. The joint angle between T1, C1, and C3 was named C1; the joint angle between C1, C3, and C7 was named C3; and the joint angle between C3, C7, and the skull was named C7.
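A minimal sketch of the .mot writer (t, pitch1, and pitch2 are column vectors of time in seconds and joint angles in degrees; the header fields follow the standard OpenSim motion-file format):

    % Write neck joint angles to an OpenSim .mot file.
    data = [t, pitch1, pitch2];
    fid = fopen('neck_angles.mot', 'w');
    fprintf(fid, 'neck_angles\nversion=1\n');
    fprintf(fid, 'nRows=%d\nnColumns=%d\n', size(data, 1), size(data, 2));
    fprintf(fid, 'inDegrees=yes\nendheader\n');
    fprintf(fid, 'time\tpitch1\tpitch2\n');
    fprintf(fid, '%.6f\t%.6f\t%.6f\n', data');   % one row per time step
    fclose(fid);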

We took these angles as the basis for comparison because they are the angles the XSens software is able to output. The C3 angle can only be compared with the motion capture data.

3. Processing the marker data to get processed joint angle data of the neck

The marker data gives position data along the head/neck. To get joint angle data, we took vectors between the markers and treated the segments between joints as rigid bodies.

Figure 5. Illustration of the vectors and joint angles calculated from the motion capture data

In Figure 5, the vectors between the markers are depicted, and the light blue arcs show the joint angles we calculated. To estimate the position of the head, we averaged the positions of the cheek and head markers.

Collecting data from the XSens system, the OpenSim model, and our marker positions introduces errors when comparing the joint angles: each dataset has its own reference coordinates and methodology for calculating joint positions. For example, depending on where marker positions are placed on a rigid body, the computed angle between two vectors can differ greatly, although the change in angle should remain the same. Additionally, the XSens IMUs estimate the neck angles from placements on the shoulder, back, and forehead, meaning they may position the angles differently. One way to accommodate, characterize, and compare the neck angles of each dataset is to systematically demean the data: by removing the constant offset, we can track the relative joint angle changes. Figure 6 below visualizes what demeaning the oscillatory neck flexion signal looks like. This was implemented in MATLAB.

Figure 6. Demeaning a signal
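A minimal sketch of the demeaning step in MATLAB (variable names are ours; the signals are assumed to be resampled onto a common time base):

    % Demean each joint-angle signal so datasets with different reference
    % frames can be compared by their relative changes.
    c1_imu_dm   = c1_imu   - mean(c1_imu);     % XSens-derived angle
    c1_osim_dm  = c1_osim  - mean(c1_osim);    % OpenSim model angle
    c1_mocap_dm = c1_mocap - mean(c1_mocap);   % motion capture angle
    % Agreement can then be summarized with, e.g., root-mean-square error:
    rmse = sqrt(mean((c1_osim_dm - c1_mocap_dm).^2));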

4. Results

Figure 7. Study results. A) Distributing the C1 and C7 neck joint angles over the cervical spine predicts the C3 joint angle much more accurately than assuming the C3 joint angle is fixed (i.e., a rigid body). Similarly, the Vasavada model predicts the C1 joint angle more accurately than the XSens two-pivot model. However, the Vasavada model greatly underestimates the C7 joint angle, possibly because our ground truth C7 marker was not in the anatomically correct location. B) Increasing the speed at which the subject moved her head decreased the accuracy of the XSens data capture system.

 

Bibliography

Vasavada, Li, and Delp. "Influence of muscle morphometry and moment arms on the moment-generating capacity of human neck muscles." Spine, 1998.

Young, Alexander D. "Wireless realtime motion tracking system using localised orientation estimation." 2010.

 


