<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
<channel>
<title>MultiCo</title>
<link>http://hdl.handle.net/11321/945</link>
<description>MultiCo - Data Collection for DARIAH KPO</description>
<pubDate>Wed, 15 Apr 2026 17:26:34 GMT</pubDate>
<dc:date>2026-04-15T17:26:34Z</dc:date>
<item>
<title>MultiCo-Hub: a corpus of multimodal enrichments with motion-trajectory annotation</title>
<link>http://hdl.handle.net/11321/964</link>
<description>MultiCo-Hub: a corpus of multimodal enrichments with motion-trajectory annotation
Klessa, Katarzyna; Karpiński, Maciej; Jarmołowicz-Nowikow, Ewa; Sawicka-Stępińska, Brygida; Klessa, Wojciech
MultiCo-Hub is a multimodal dataset including 11 zipped subsets (henceforth: sessions) of time-aligned audio, video and motion-capture–derived BVH data, together with multi-layered Annotation Pro files (ANTx) extended with automatically extracted motion-trajectory layers. &#13;
&#13;
The dataset includes a dedicated training session demonstrating body movement (TESM_001). &#13;
&#13;
The video and audio files in the remaining 10 sessions are derived from the MultiCo corpus (http://hdl.handle.net/11321/942). The original MultiCo sessions were enriched by means of:&#13;
- full synchronization of the audio, video, and BVH streams to enable precise multimodal analysis;&#13;
- normalization and conversion of the motion-capture (BVH) data, and its integration directly into the annotation files as layers describing the trajectories of selected body parts (positions, speeds, gesture-space coordinates). &#13;
&#13;
Furthermore, for each session, the corpus provides a composite multi-view video file showing all four camera angles simultaneously. This makes the dataset easier to inspect and substantially more accessible for users working on standard-performance computers.&#13;
&#13;
MultiCo-Hub offers a compact, ready-to-use resource for research and education in the areas of speech–gesture coordination, gesture space, temporal properties of movement, communicative alignment of interlocutors, and multimodal interaction. &#13;
&#13;
Export to common formats (TextGrid, EAF, CSV, etc.) is supported via Annotation Pro, facilitating downstream statistical analysis, visualization, and interoperability.&#13;
&#13;
MultiCo-Hub also served as input for the development of R and C# applications and scripts that support the analysis and visualization of gesture space, temporal movement properties, and communicative alignment in dialogue.
</description>
<pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/11321/964</guid>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
</channel>
</rss>
