<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns="http://purl.org/rss/1.0/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/">
<channel rdf:about="http://hdl.handle.net/11321/945">
<title>MultiCo</title>
<link>http://hdl.handle.net/11321/945</link>
<description>MultiCo - Data Collection for DARIAH KPO</description>
<items>
<rdf:Seq>
<rdf:li rdf:resource="http://hdl.handle.net/11321/964"/>
</rdf:Seq>
</items>
<dc:date>2026-04-30T22:02:30Z</dc:date>
</channel>
<item rdf:about="http://hdl.handle.net/11321/964">
<title>MultiCo-Hub: a corpus of multimodal enrichments with motion-trajectory annotation</title>
<link>http://hdl.handle.net/11321/964</link>
<description>MultiCo-Hub: a corpus of multimodal enrichments with motion-trajectory annotation
Klessa, Katarzyna; Karpiński, Maciej; Jarmołowicz-Nowikow, Ewa; Sawicka-Stępińska, Brygida; Klessa, Wojciech
MultiCo-Hub is a multimodal dataset including 11 zipped subsets (henceforth: sessions) of time-aligned audio, video and motion-capture–derived BVH data, together with multi-layered Annotation Pro files (ANTx) extended with automatically extracted motion-trajectory layers.

The dataset includes a dedicated training session demonstrating body movement (TESM_001).

The video and audio files included in the remaining 10 sessions are derived from the MultiCo corpus (http://hdl.handle.net/11321/942). The original MultiCo sessions were enriched by means of:
- full synchronization of the audio, video and BVH streams, enabling precise multimodal analysis;
- normalization and conversion of the motion-capture (BVH) data and its integration directly into the annotation files as layers describing the trajectories of selected body parts (positions, speeds, gesture-space coordinates).

Furthermore, for each session, the corpus provides a composite multi-view video file showing all four camera angles simultaneously. This makes the dataset easier to inspect and substantially more accessible for users working on standard-performance computers.

MultiCo-Hub offers a compact, ready-to-use resource for research and education in the areas of speech–gesture coordination, gesture space, temporal properties of movement, communicative alignment of interlocutors, and multimodal interaction.

Export to common formats (TextGrid, EAF, CSV, etc.) is supported via Annotation Pro, facilitating downstream statistical analysis, visualization and interoperability.

The MultiCo-Hub set also served as input for developing a set of R and C# applications and scripts that support the analysis and visualization of gesture space, temporal movement properties, and communicative alignment in dialogue.
</description>
<dc:date>2025-01-01T00:00:00Z</dc:date>
</item>
</rdf:RDF>
