
16 July 2025

Interactive Media for Cultural Heritage

My latest edited book, prepared with colleagues from CYENS – Centre of Excellence and the University of Cyprus, has recently been published in the Springer Series on Cultural Computing. The book is entitled ‘Interactive Media for Cultural Heritage’ and presents the full range of interactive media technologies and their applications in Digital Cultural Heritage. It offers a forum for interaction and collaboration between the interactive media and cultural heritage research communities.

[Image: the book cover of ‘Interactive Media for Cultural Heritage’.]

The aim of this book is to provide a point of reference for the latest advancements in the different fields of interactive media applied in Digital Cultural Heritage research, ranging from visual data acquisition, classification, analysis and synthesis, 3D modelling and reconstruction, to new forms of interactive media presentation, visualization and immersive experience provision via extended reality, collaborative spaces, serious games and digital storytelling.

More information:

https://link.springer.com/book/10.1007/978-3-031-61018-9

12 September 2023

Animal Motion-Capture Studio

An animal behaviour lab built inside a converted barn uses motion-capture cameras to track the movements and behaviours of entire flocks of birds or swarms of insects. The so-called SMART-BARN resembles a Hollywood motion-capture studio, with 30 infrared cameras capable of tracking up to 500 individual markers attached to animals’ bodies. All of this takes place within an area one quarter the size of a standard basketball court, which can also include feeding stations and perches.
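
Out of curiosity, here is a minimal sketch of the geometry behind any such multi-camera rig: recovering a marker’s 3D position from two or more calibrated views by linear triangulation (the direct linear transformation). The camera matrices and pixel coordinates below are made-up placeholders, not SMART-BARN’s actual calibration.

```python
import numpy as np

def triangulate(proj_mats, pixels):
    """Recover one 3D point from two or more calibrated views (DLT)."""
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        # Each view contributes two linear constraints on X = (x, y, z, 1)
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    # The homogeneous solution is the right singular vector with the
    # smallest singular value
    _, _, vt = np.linalg.svd(np.vstack(rows))
    X = vt[-1]
    return X[:3] / X[3]

# Toy example: two hypothetical cameras observing a marker at (0.5, 0.2, 3.0)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # 1 m to the side
marker = np.array([0.5, 0.2, 3.0, 1.0])
pixels = []
for P in (P1, P2):
    p = P @ marker
    pixels.append((p[0] / p[2], p[1] / p[2]))
print(triangulate([P1, P2], pixels))  # ~ [0.5 0.2 3.0]
```

With 30 cameras and hundreds of markers, the same least-squares step simply runs once per marker per frame.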


Researchers showed that their SMART-BARN lab can also track animals without any markers, using six video cameras and AI-based computer-vision software. The space also has 30 microphones to record animal sounds and even pinpoint animals’ locations from sound alone. Experiments with homing pigeons, starlings and African death’s-head hawkmoths tracked the real-time location and body pose of each individual animal.
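
The acoustic side can be illustrated the same way: with several microphones at known positions, an animal call can be located from time differences of arrival (TDOA). The sketch below brute-forces the position that best explains simulated TDOAs; the microphone layout and search grid are assumptions for illustration, not the barn’s real setup.

```python
import numpy as np

# Hypothetical microphone positions (metres), not SMART-BARN's real layout
mics = np.array([[0, 0, 3], [7, 0, 3], [7, 7, 3], [0, 7, 3]], float)
C = 343.0  # speed of sound in air, m/s

def tdoas(source):
    """Time differences of arrival relative to microphone 0."""
    d = np.linalg.norm(mics - source, axis=1)
    return (d - d[0]) / C

def locate(measured, step=0.1):
    """Coarse grid search for the source position that best explains
    the measured TDOAs (easy to follow, not efficient)."""
    best, best_err = None, np.inf
    for x in np.arange(0.0, 7.0, step):
        for y in np.arange(0.0, 7.0, step):
            for z in np.arange(0.0, 3.0, step):
                err = np.sum((tdoas(np.array([x, y, z])) - measured) ** 2)
                if err < best_err:
                    best, best_err = (x, y, z), err
    return best

true_pos = np.array([2.5, 4.0, 1.5])
print(locate(tdoas(true_pos)))  # ~ (2.5, 4.0, 1.5)
```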

More information:

https://www.newscientist.com/article/2390334-animal-motion-capture-studio-tracks-bird-flocks-and-insect-swarms/

22 April 2023

Sensor Recognises Moving Objects and Predicts Path

Scientists at Finland's Aalto University have developed a neuromorphic visual sensor that can recognize moving objects in a single video frame and anticipate their trajectories. An array of photomemristors, which generate electricity when exposed to light, forms the heart of the sensor; because the current decays gradually after the light is removed, the devices can recall recent exposures, providing a dynamic memory of the preceding instants. Current motion-detection systems need many components and complex algorithms performing frame-by-frame analysis, which makes them inefficient and energy-intensive. Inspired by the human visual system, the researchers developed a neuromorphic vision technology that integrates sensing, memory and processing in a single device that can detect motion and predict trajectories. To demonstrate the technology, they used videos showing the letters of a word one at a time. Because all the words ended with the letter ‘E’, the final frame of every video looked similar.
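
To make the ‘dynamic memory’ idea concrete, the toy model below (with an invented retention constant, not Aalto’s device physics) shows how a single readout from a decaying photo-element still carries a trace of earlier frames, which is exactly what lets the final ‘E’ frame differ between words.

```python
import numpy as np

def pixel_response(frames, retain=0.6):
    """Toy photomemristor pixel array: output = current light plus a
    decaying trace of past light. 'retain' (assumed) is the fraction of
    charge that survives each frame after the light is removed."""
    state = np.zeros_like(frames[0], dtype=float)
    for f in frames:
        state = retain * state + f   # old charge decays, new light adds
    return state                     # one readout encodes the history

# Two 1-D "videos" that end on the same final frame
a = [np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([0, 0, 1])]
b = [np.array([0, 0, 0]), np.array([0, 0, 0]), np.array([0, 0, 1])]
print(pixel_response(a))  # [0.36 0.6 1.0]  - history still visible
print(pixel_response(b))  # [0.   0.  1.0]  - different readout, even
                          #   though the last frame alone is identical
```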

Conventional vision sensors couldn’t tell whether the ‘E’ on the screen had appeared after the other letters in ‘APPLE’ or ‘GRAPE’, but the photomemristor array could use hidden information in the final frame to infer which letters had preceded it and predict the word with nearly 100% accuracy. In another test, the team showed the sensor videos of a simulated person moving at three different speeds. Not only was the system able to recognize motion by analysing a single frame, it also correctly predicted the subsequent frames. Accurately detecting motion and predicting where an object will be are vital for self-driving technology and intelligent transport: autonomous vehicles need accurate predictions of how cars, bikes, pedestrians and other objects will move in order to guide their decisions. By adding a machine-learning system to the photomemristor array, the researchers showed that their integrated system can predict future motion based on in-sensor processing of a single all-informative frame.
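
Continuing the toy model above, the sketch below shows how one decayed readout can also yield motion direction and speed: a moving dot leaves a brightness trail ordered in time, so a single frame suffices to extrapolate the next position. The 1-D sensor and retention factor are assumptions for illustration, not the published device.

```python
import numpy as np

def trail_readout(positions, n=10, retain=0.6):
    """Single readout of a toy 1-D photomemristor line sensor after a
    dot visits 'positions' one frame at a time (assumed physics)."""
    state = np.zeros(n)
    for p in positions:
        frame = np.zeros(n)
        frame[p] = 1.0
        state = retain * state + frame
    return state

def predict_next(state):
    """Infer direction and speed from the decaying trail in ONE frame:
    brighter pixels are more recent, so brightness order gives time order."""
    visited = np.nonzero(state)[0]
    order = visited[np.argsort(state[visited])]  # dimmest (oldest) first
    step = order[-1] - order[-2]                 # most recent displacement
    return order[-1] + step                      # extrapolated next position

state = trail_readout([2, 3, 4, 5])  # dot moving right at 1 px/frame
print(predict_next(state))           # -> 6
```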

More information:

https://www.aalto.fi/en/news/a-neuromorphic-visual-sensor-can-recognise-moving-objects-and-predict-their-path

27 October 2022

360° LiDAR Sensor

Researchers have developed a solid-state LiDAR sensor with a full 360° view. The new sensor is drawing attention as an original technology that could enable ultra-small LiDAR devices, since it is built from a metasurface, an ultra-thin flat optical device only one-thousandth the thickness of a human hair. Using a metasurface greatly expands the LiDAR’s viewing angle for recognizing objects three-dimensionally. The research team extended the sensor’s viewing angle to 360° by modifying the design and periodically arranging the nanostructures that make up the metasurface.

The sensor extracts 3D information about objects across the full 360° region by scattering an array of more than 10,000 dots from the metasurface onto objects and photographing the projected point pattern with a camera; a similar dot-projection depth sensor drives the iPhone’s face-recognition function. The study is significant in that the technology allowing cell phones, VR/AR glasses and unmanned robots to recognize the 3D information of their surroundings is fabricated with nano-optical elements. Because nanoimprint technology makes the device easy to print on various curved or flexible surfaces, such as eyeglasses, it also lends itself to applications in AR glasses.
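
The depth-extraction step described here is classic structured-light triangulation: each projected dot appears in the camera image shifted by a disparity inversely proportional to depth. The sketch below uses made-up focal-length and baseline values, not parameters from the paper.

```python
import numpy as np

# Toy structured-light depth: a projector (here, the metasurface) casts a
# known dot pattern; a camera a short baseline away sees each dot shifted
# by a disparity inversely proportional to depth. Values are assumed.
F = 600.0  # camera focal length in pixels
B = 0.05   # projector-camera baseline in metres

def depth_from_disparity(disparity_px):
    """Classic triangulation: z = f * b / d."""
    return F * B / disparity_px

# Observed shifts (pixels) between projected and imaged dot positions
disparities = np.array([30.0, 15.0, 10.0])
print(depth_from_disparity(disparities))  # -> [1. 2. 3.] metres
```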

More information:

https://techxplore.com/news/2022-10-solid-state-lidar-sensor-degrees.html