
Solution to Cornell University's Virtual Reality Research Competition Using Our Site's Artificial Intelligence for Animation

Boost underwater motion studies through AI animations on our site, aiding Cornell's VR lab in enhancing precision, speed, and reducing expenses.


Cornell University has embarked on an innovative research project aimed at developing robotic assistants for scuba divers, and they've turned to our site's AI for help. The project is a collaborative effort between the Lab for Integrated Sensor Control led by Prof. Silvia Ferrari and the Virtual Embodiment Lab led by Prof. Andrea Won.

The researchers at Cornell University are using our AI to more accurately identify the position, rotation, and orientation of divers' limbs during underwater motion. They achieve this by integrating video, motion capture, and reconstructed translation data into their hydrodynamic model.
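
To illustrate the kind of per-frame limb metric involved, here is a minimal sketch of computing a joint angle from three 3D joint positions. The joint names and coordinates are hypothetical examples, not data from the Cornell project:

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at joint b, formed by 3D points a-b-c
    (e.g. shoulder-elbow-wrist) via the dot product of the two
    limb-segment vectors."""
    u = [a[i] - b[i] for i in range(3)]
    v = [c[i] - b[i] for i in range(3)]
    dot = sum(u[i] * v[i] for i in range(3))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return math.degrees(math.acos(dot / (nu * nv)))

# A fully extended arm: shoulder, elbow, and wrist collinear.
print(round(joint_angle((0, 0, 0), (1, 0, 0), (2, 0, 0))))  # 180

# An elbow bent at a right angle.
print(round(joint_angle((0, 0, 0), (1, 0, 0), (1, 1, 0))))  # 90
```

Applied frame by frame to tracked joint positions, this kind of calculation yields the rotation curves that feed a motion model.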

One of the key advantages of our AI is its ability to process video footage, making it possible to take existing archival video of scuba divers underwater and generate virtual animation motion data within minutes. This is a significant improvement over traditional methods that rely on specific motion capture datasets.

Our AI's FBX file export feature enables the Cornell University research team to take their animations into Unity, making it easy and seamless for them to use our AI in their research workflow. This allows them to expand their data repository and generate simulated data for many different divers, making the training data for a robot "diving buddy" more diverse, accurate, and robust.

The cost of our AI Animation is $30 per month, which is significantly lower than alternative motion capture tools. Setting up our AI was 7 times faster than alternative virtual animation software tools, further increasing its appeal for the research team.

The research team faced three challenges: reconstructing underwater motion data from motion capture datasets, combining the video-reconstructed data with XSense data, and finding or creating new underwater datasets that matched the research specifications. Our AI Video-to-Animation tool proved more accurate and convenient for this purpose than other software tools.

With our AI, the research team could take a single video of a diver's movement and use AI Video-to-Animation to obtain the movement data in 3D space with an avatar. They could then use our AI's video editor to add camera angles and visually inspect animations from multiple angles before exporting them. Our AI helped align the motion capture data with the video data, serving as a source-of-truth reference for calculating motion.

The ultimate goal of this research project is to build a hydrodynamic motion model for robot buddies for scuba divers. With the aid of our AI, the researchers hope to create robots that can accurately navigate underwater, calculating factors such as speed, depth, force of movement in water, distance, and more.
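
The researchers' hydrodynamic model itself is not described here, but the "force of movement in water" factor can be illustrated with the standard quadratic drag relation, F = ½·ρ·Cd·A·v². In this sketch the drag coefficient and frontal area are placeholder values for a swimming diver, not measured figures from the project:

```python
def drag_force(velocity, rho=1025.0, drag_coeff=1.0, area=0.7):
    """Quadratic drag force in newtons: F = 0.5 * rho * Cd * A * v**2.

    rho        -- seawater density, kg/m^3
    drag_coeff -- dimensionless drag coefficient (assumed value)
    area       -- frontal area in m^2 (assumed value)
    velocity   -- speed through the water, m/s
    """
    return 0.5 * rho * drag_coeff * area * velocity ** 2

# Drag on a diver moving at 0.5 m/s under these assumptions.
print(drag_force(0.5))  # 89.6875 N
```

Because drag grows with the square of speed, even modest changes in a diver's velocity substantially change the forces a robot buddy must model and compensate for.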

The integration of AI and virtual reality technologies in this research project has the potential to revolutionise the field of underwater robotics, making robotic assistants for scuba divers safer, more efficient, and more effective. As AI and VR continue to evolve, we can expect to see more innovative applications in this area in the future.

  1. The researchers at Cornell University are using our AI's FBX file export feature to take their animations into Unity for a seamless research workflow.
  2. The AI Video-to-Animation tool from our platform was found to be more accurate and convenient than other software tools for reconstructing underwater motion data from motion capture datasets.
  3. The research team at Cornell University is using our AI to create robot buddies for scuba divers, with the aim of building a hydrodynamic motion model for improved navigation underwater.
  4. The AI's ability to process video footage, like MP4 files, allows researchers to convert existing archival video of scuba divers underwater into virtual animation motion data in minutes.
  5. The Cornell University research team chose our AI for its significant cost advantage of $30 per month over alternative motion capture tools, along with the 7-fold faster setup time compared to other virtual animation software.
  6. The research team could use our AI's video editor to add camera angles and visually inspect animations from multiple angles before exporting them, which aided in aligning the motion capture data with video data.
  7. Data and cloud computing technology are being relied upon to help support the huge amounts of diverse, accurate, and robust training data required for the robot "diving buddy."
  8. This research project, a collaboration between Cornell University's Lab for Integrated Sensor Control and Virtual Embodiment Lab, holds the potential to revolutionise the field of underwater robotics.
