Computer-aided method for calculating animal configurations during social interactions from two-dimensional coordinates of color-marked body parts
Language: English · Country: United States · Medium: print
Document type: journal article, work supported by a grant
PubMed
11591068
DOI
10.3758/bf03195390
Knihovny.cz E-resources
- MeSH
- Animal Behavior * MeSH
- Observer Variation MeSH
- Computing Methodologies * MeSH
- Observation methods MeSH
- Swine MeSH
- Spatial Behavior * MeSH
- Reproducibility of Results MeSH
- Social Behavior * MeSH
- Software MeSH
- Videotape Recording MeSH
- Animals MeSH
- Check Tag
- Animals MeSH
- Publication Type
- Journal Article MeSH
- Research Support (work supported by a grant) MeSH
In an experiment investigating the impact of preweaning social experience on later social behavior in pigs, we were interested in the mutual spatial positions of pigs during paired social interactions. To obtain these data, we applied a different colored mark to the head and back of each of 2 pigs per group and videotaped the pigs' interactions. We used the EthoVision tracking system to provide x,y coordinates of the four colored marks every 0.2 sec. This paper describes the structure and functioning of a FoxPro program designed to clean the raw data and use it to identify the mutual body positions of the 2 animals at 0.2-sec intervals. The data were cleaned by identifying invalid data points and replacing them with interpolated values. An algorithm was then applied to extract three variables from the coordinates: (1) whether the two pigs were in body contact; (2) the mutual orientation (parallel, antiparallel, or perpendicular) of the two pigs; and (3) whether the pig in the "active" position made snout contact in front of, or behind, the ear base of the other pig. Using these variables, we were able to identify five interaction types: Pig A attacks, Pig B attacks, undecided head-to-head position, "clinch" resting position, or no contact. To assess the reliability of the automatic system, a randomly chosen 5-min videotaped interaction was scored for mutual positions both visually (by 2 independent observers) and automatically. Good agreement was found between the data from the 2 observers and between each observer's data and the data from the automated system, as assessed using Cohen's kappa coefficients.
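The orientation step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' FoxPro implementation: it assumes each pig's heading is the vector from its back mark to its head mark, and the 45°/135° thresholds used to separate parallel, antiparallel, and perpendicular orientations are hypothetical choices, not values taken from the paper.

```python
import math

def heading_angle(head, back):
    """Heading of a pig in degrees, taken as the direction
    from its back mark (x, y) to its head mark (x, y)."""
    return math.degrees(math.atan2(head[1] - back[1], head[0] - back[0]))

def mutual_orientation(head_a, back_a, head_b, back_b):
    """Classify the mutual orientation of two pigs from the
    four marker coordinates of a single 0.2-sec sample.
    Thresholds (45 and 135 degrees) are illustrative assumptions."""
    diff = abs(heading_angle(head_a, back_a) - heading_angle(head_b, back_b)) % 360
    if diff > 180:
        diff = 360 - diff  # smallest angle between the two headings
    if diff < 45:
        return "parallel"
    if diff > 135:
        return "antiparallel"
    return "perpendicular"

# Pig A faces east, pig B faces west: the headings oppose each other.
print(mutual_orientation((1, 0), (0, 0), (5, 1), (6, 1)))  # antiparallel
```

The same coordinates could feed the other two variables, e.g. a distance threshold between the nearest marks for body contact, and the position of the active pig's head mark relative to the other pig's ear-base line for the snout-contact variable.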
Citation provided by Crossref.org