Metrics for dominance interactions 2: Fighting success, Clutton-Brock et al. (1979)

This is the second in a series of blog entries exploring the metrics used for assessing dominance hierarchies: see the introductory post for the rationale behind doing this, with other metrics visible through the index page.

Clutton-Brock et al. (1979) were interested in developing a metric of fighting success in red deer stags, where individuals were studied over long periods of time. Studying any network system over long periods is going to cause a problem, as the status of individuals may change during that period (see Rands 2014 for some discussion of this), and the authors of this paper were aware that a male’s dominance could change within a mating period as his energy levels flagged or he became injured. Simply counting the number of fights won and lost will not give a very accurate reflection of how an individual is placed within the herd, as his success is also going to depend upon the identities of the individuals he beats: a male who consistently fights and wins against weak opponents is not necessarily of similar quality to a male who consistently fights and wins against strong opponents. So, Clutton-Brock and colleagues designed a simple metric that takes account of the quality of the opponents an individual wins and loses against.

I’ll illustrate how this is calculated by considering the fighting ability metric of two individuals (labelled black and blue) within the following group structure:

Figure 1: all winner/loser interactions within a group. As depicted in the box, the arrow denotes which individual is the winner (W) or loser (L) in a connected pair.

To gauge an individual animal’s fighting success, you need to calculate B, the number of other animals that the focal individual has won against, and note the identities of all the losers. For each of these marked losers, you also need to calculate the number of individuals that they in turn have beaten, and sum these to give Σb. Because we define one individual in an interacting pair as a winner, and the other a loser, this means that none of the summed interactions contributing to Σb are against the focal individual.

As well as assessing wins, you also need to calculate L, the number of other individuals that the focal individual loses against. These winning animals are marked and the summed number of animals that they themselves lose against is calculated, giving Σl.

Having collated these numbers, the fighting success of a focal individual (which I will refer to as DCB) is calculated as

DCB = (B + Σb + 1)/(L + Σl + 1),

where the “+1” term on both the top and bottom of the equation allows a meaningful metric to be calculated for individuals that are either never seen to win or lose.
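If you’d rather not tally these values by hand, the calculation is straightforward to script. Below is a minimal Python sketch of the formula; the winner–loser pairs and identities in it are invented purely for illustration (they are not the network in Figure 1), and the function name is my own.

from collections import defaultdict

# Hypothetical winner -> loser observations; each dyad appears once,
# as the metric assumes the relationship within a pair is fixed.
wins = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E")]

beats = defaultdict(set)     # individuals each animal has beaten
loses_to = defaultdict(set)  # individuals each animal has lost to
for winner, loser in wins:
    beats[winner].add(loser)
    loses_to[loser].add(winner)

def clutton_brock_index(focal):
    B = len(beats[focal])                                   # animals the focal beat
    sum_b = sum(len(beats[x]) for x in beats[focal])        # their wins (the Sigma-b term)
    L = len(loses_to[focal])                                # animals the focal lost to
    sum_l = sum(len(loses_to[y]) for y in loses_to[focal])  # their losses (the Sigma-l term)
    return (B + sum_b + 1.0) / (L + sum_l + 1.0)

for animal in sorted(set(beats) | set(loses_to)):
    print(animal, round(clutton_brock_index(animal), 3))

Swapping in the interactions from Figure 1 should reproduce the values calculated below.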

Using the group interactions given in Figure 1, we calculate DCB for the individual coloured black using the following reasoning:

Figure 2: Calculating black’s win (left panel) and loss (right panel) statistics

Following Figure 2, we see that B = 8, Σb = 2 + 2 + 2 + 1 + 1 + 0 + 0 + 0 = 8, L = 3, and Σl = 2 + 2 + 0 = 4. So, DCB = (8 + 8 + 1)/(3 + 4 + 1) = 2.125 for the black individual. Similarly, using the reasoning given in Figure 3, DCB = 1.167 for the blue individual.

Figure 3: Calculating blue’s win (left panel) and loss (right panel) statistics

A larger value of DCB indicates greater fighting ability, and the maximum size of the statistic within an observed group is going to depend on both the size of the group and the maximum number of other animals that each individual in the group interacts with. In their original paper, Clutton-Brock and his colleagues found that DCB for red deer ranged between 0 and a little over 3.

This is a simple statistic to compute, but I would caution that it should really only be used for comparing individuals within a group, given that it is dependent upon both group size and the number of interactions recorded. The metric also assumes that observed relationships are fixed: an individual that wins an interaction is taken to win all future interactions with the same opponent. This suggests that caution should be used if this metric were to be transferred to observed interactions where the dynamic between a dyad could change over time.

Further reading

Clutton-Brock TH, Albon SD, Gibson RM & Guinness FE (1979). The logical stag: adaptive aspects of fighting in red deer (Cervus elaphus L.). Animal Behaviour 27: 211-225 | pdf

Rands SA (2014). We must consider dynamic changes in behavior in social networks, and conduct manipulations: comment on Pinter-Wollman et al. Behavioral Ecology 25: 259-260 | full text | pdf

Technical Note: The network diagrams were drawn on a Mac with Dia Diagram Editor (open source freeware), and coerced into nice smooth images with GIMP (GNU Image Manipulation Program: open source freeware).

Hacking together a cheap but effective infra-red camera

Many animals annoyingly do things differently in the dark than in the light. This causes problems for both field- and lab-based behavioural biologists, as few are blessed with the power of night vision. However, there are ways around this problem. One old-tech method is to observe them using red light, making the assumption that the beastie you’re observing can’t see at these longer wavelengths, but you can. However, a number of studies suggest that this is a flawed assumption (e.g. 1, 2). So, how do you see in the dark?

Using electromagnetic radiation that has wavelengths outside the visual spectrum of the animals is one solution. Night vision and other specialised cameras are able to detect infrared radiation, which has a longer wavelength than visible light. This IR may be radiated as thermal energy by the animal (this is what heat-detecting cameras register), or may be reflected by the animals and their environment (which is what many night-vision and CCTV cameras rely on, using an additional source of IR such as a set of IR-emitting LEDs, like the ones found in TV remote controls).

We and most other animals are unable to see these longer IR wavelengths, so these cameras are essentially detecting ‘invisible’ information that is unavailable to us. Any extra information about the environment that could improve your chances of finding food or avoiding predators is going to be useful if it can be detected, and there are of course some animals that can detect IR (3): vampire bats, several families of snake and a range of butterflies, beetles and bugs have well-researched abilities to detect IR. Similarly, some prey species have evolved counteradaptations to confuse species that are able to detect IR, as can be seen (4) in the anti-pit viper tail-flagging displays of ground squirrels. However, given that many animals are insensitive to IR wavelengths (we assume – it doesn’t hurt to check and double-check if you’re working with a particular species!), using an IR camera is a good first step for observing their behaviour unobtrusively, and many bits of kit are commercially available that allow you to observe nesting behaviour or remotely capture images of animals in the wild (I have no intention of recommending any of the products out there, but you should be able to find hundreds of commercial types available by running a web search for ‘camera trap’, ‘nest cam’ or ‘trail camera’).

Making your own IR-sensitive webcam

However, commercially available kit ranges in price and can be expensive. If you’re piloting a bit of work and want to test things out before you write the big grant, buying one or multiple specially designed cameras may be that little bit too expensive. It is however extremely easy to build your own IR camera using a little bit of initiative. The sensors on webcams and other digital cameras are sensitive to IR, but usually have some sort of filter between the lens and the sensor that blocks out unwanted non-visible electromagnetic radiation: it is easy to remove this in some cameras, but can involve some delicate and potentially destructive scraping in others. A quick hunt online will give you access to loads of text and video tutorials on how to do this (here’s a good one from the Naked Scientists). Once removed, there’s nothing else you need to do – the sensor will be registering IR, and probably displaying it as extra purplish light if you’re able to see the camera’s output.

I’ve played with a few different cheap webcams, and acknowledge that they do differ in both quality and ease of modification. Because sensor types and the way light is filtered onto them differ from camera to camera, I’d recommend shopping around for a camera that has a bit of depth to it, as it is likely to have a separate filter that can be easily chipped out, rather than one that has to be scraped directly off the sensor (which will probably damage the sensor). Removing the filter will also affect the depth at which incoming visible and IR light is focussed on the sensor, and I’d suggest hunting for a camera where the lens is focussed manually rather than by having to use a software interface.

I ended up settling on the Tecknet® 1080P HD Webcam, which was very cheap, is very easy to take apart and alter, has a decent pixel size and image quality, and is easy and stable to focus. These webcams are extremely easy to modify: after removing the front, the focussing lens can be unscrewed (below, A), revealing the filter (the shiny square bit in B) that blocks IR wavelengths from reaching the sensor. There’s probably a neater way of doing this, but it is easy to carefully chip this out with the end of a screwdriver (C). (Blue Peter warning: make sure you’re wearing eye protection and using appropriate safety equipment while you’re doing this, and get an adult to help you.) Then, simply reassemble and plug into your favourite operating system. I’m fairly certain this voids the warranty though…

removing the IR filter

Hardware for running your IR camera

It’s not just the camera that’s important if you’re trying to build cheap functional kit – you also need something to run it from. You could technically run it on anything that has the correct webcam driver installed (hint: if you’re a Mac user and can’t find the right driver, try running the camera from within Skype, which may well be able to run it). However, since we’re aiming for budget kit here, I’ll give a quick description of a system I’ve put together that runs from a Raspberry Pi – a tiny, inexpensive (GB£20, €26, US$25) computer that runs open-source Linux-based software, which means that the system is incredibly portable and can be altered to run off a battery in the field (useful if you’re building cheap and effective camera traps for IR and/or visible wavelengths, like those deployed by ZSL for monitoring black rhino). Furthermore, because you’re building it yourself, it doesn’t require system administrators to install things for you (a major time-lagging factor for many researchers working in larger institutions!).

I’m assuming here that you’ve managed to get your Raspberry Pi up and running, have formatted your SD card with something like Raspbian, and are happy using a command-line tool (if you’re running a graphical interface, you can get at this using one of the ‘Terminal’ applications such as LXTerminal). I’m also assuming that you have managed to connect your system to the internet, as you’ll need to download some software. If you’re intending to set this up using a monitor rather than remotely, and want to be able to remove the monitor at some point during the camera’s use, it’s worthwhile getting the Model B Raspberry Pi with two USB ports, which means that you don’t need to detach the camera at any point.

Software for running your IR camera

Some bits of easily obtainable software are useful if you’re running this off a Raspberry Pi. Firstly, if you want to have a direct feed from the camera which is visible on a monitor in front of you, try something like Camorama. Assuming you have an internet connection, you can install this on your computer by entering

sudo apt-get install camorama

and answering ‘yes’ at appropriate moments. To run it, simply type ‘camorama’ into the terminal: as well as a direct feed and a point-and-click interface that allows you to play with the visual balances, you can take jpeg images too. If you want to record mpeg-format videos, you could try a program such as LUVCView instead, which you install and run in a similar manner (by replacing the word ‘camorama’ with ‘luvcview’ in the commands described).
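If you’d rather script the recording yourself instead of using a point-and-click program, one alternative (my suggestion rather than anything used in the setup described here) is OpenCV’s Python bindings, assuming they are installed on your system. The rough sketch below grabs roughly ten seconds of footage from the first attached webcam and writes it to an AVI file; it assumes a reasonably recent version of OpenCV.

import cv2

cap = cv2.VideoCapture(0)                 # first attached webcam
fourcc = cv2.VideoWriter_fourcc(*"MJPG")  # motion-JPEG codec
out = cv2.VideoWriter("ir_clip.avi", fourcc, 20.0, (640, 480))

for _ in range(20 * 10):                  # roughly ten seconds at 20 frames per second
    ok, frame = cap.read()
    if not ok:
        break
    out.write(cv2.resize(frame, (640, 480)))  # resize to match the writer's frame size

cap.release()
out.release()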

If instead you want to either take still photos at regular intervals, or use your camera as a motion-triggered device (useful for camera traps), I recommend starting off with Motion, which you need to initially install using

sudo apt-get install motion

To run this at the default setting (where the camera is triggered by motion), just type ‘motion’ into the terminal once you have installed the software. As this package is run from the command line, you can create a text file that details exactly how you want the camera to be configured. For example, if you want to run the camera so that it doesn’t react to motion, but instead captures an image every quarter of a second, you can set up a configuration file using a text editor such as nano:

sudo nano motion.config

and typing the following:

framerate 4
output_all on

which you then save as the file ‘motion.config’ (using ctrl+x). The first line above sets the maximum number of frames per second that the device captures, and the second tells the system to turn off the motion-detection capability of the software and instead take continuous images.

Having created the motion.config file, you then run motion by entering

motion -c motion.config

It may take a few seconds to start up, and you should then see some output whenever a file is written. To stop the program when it’s running, open another terminal window and type

killall motion

Motion is relatively simple to use, and has a good list of configuration options that you can play with: simply reopen the config file you’ve created using the same commands, and alter the text. There’s also an option for creating a timelapse video, but you are limited to using images that are a second or more apart.
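If the one-second floor on Motion’s timelapse images is a problem, you can always script the capture loop yourself. The sketch below (again using OpenCV’s Python bindings, which are my addition rather than part of the setup described above) grabs a frame every quarter of a second until you stop it with ctrl+c; the filenames are arbitrary.

import time
import cv2

INTERVAL = 0.25              # seconds between captures
cap = cv2.VideoCapture(0)    # the modified webcam

count = 0
try:
    while True:
        ok, frame = cap.read()
        if ok:
            cv2.imwrite("frame_%06d.jpg" % count, frame)
            count += 1
        time.sleep(INTERVAL)
except KeyboardInterrupt:    # ctrl+c to stop
    pass
finally:
    cap.release()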

A final word, and some caveats

A note for the coding purists out there: the description given here is written to enable non-coders to put together something that works with a minimum of poking. I am fully aware that there are other, more elegant ways of doing this, and many other pieces of software that could be used, but this should provide a first step that lets a stressed lab/field scientist get something functioning quickly. I am not willing to give any advice on this or similar applications, and accept no responsibility or liability if you follow these instructions and end up damaging your equipment or yourselves in any way: you follow them entirely at your own risk.

Having made your IR camera and worked out how to run the software behind it, you can then deploy it – you will need an IR light source too, and there are many available out there designed for CCTV systems (make sure you’re using them safely though). It looks like things may soon be made even easier with the introduction of a super-cheap (US$25) IR camera specifically designed for the Raspberry Pi, which will hopefully be well supported within the Raspberry Pi user community. I’ve currently got some undergraduate project students trying out this kit in the lab, on a neat system that may be very nice for observing social and group behaviour – some more on this soon!

Further reading

1. Gibson G (1995). A behavioural test of the sensitivity of a nocturnal mosquito, Anopheles gambiae, to dim white, red and infra-red light. Physiological Entomology 20: 224-228. doi:10.1111/j.1365-3032.1995.tb00005.x

2. Heise BA (1992). Sensitivity of mayfly nymphs to red light: implications for behavioural ecology. Freshwater Biology 28: 331-336. doi:10.1111/j.1365-2427.1992.tb00591.x

3. Campbell AL, Naik RR, Sowards L & Stone MO (2002). Biological infrared imaging and sensing. Micron 33: 211-225. doi:10.1016/S0968-4328(01)00010-5

4. Rundus AS, Owings DH, Joshi SS, Chinn E & Giannini N (2007). Ground squirrels use an infrared signal to deter rattlesnake predation. Proceedings of the National Academy of Sciences of the USA 104: 14372-14376. doi:10.1073/pnas.0702599104

Metrics for dominance interactions 1: Zumpe and Michael’s ‘Dominance Index’ (1986)

This is the first in a series of blog entries exploring the metrics used for assessing dominance hierarchies. The previous introductory post explains the rationale behind doing this. An index page will give detailed links to other metrics within this blog.

I’ll start by giving details of how to calculate the Dominance Index metric described by Zumpe & Michael (1986), which uses counts of agonistic encounters to generate individual scores that can then be used to suggest a hierarchy. This technique can be used with pen and paper, so I may give a bit more detail of the nuts and bolts than I will for some of the more complex metrics. The technique is intended to give the user a ‘cardinal ranking’ – rather than just sorting the interacting individuals into a ranked order, it provides a way of assigning each individual a weighting statistic. The authors suggest that this could be useful for assessing how individual dominance changes over time, if different datasets are used.

The agonistic data required for this statistic are the counts of aggressive/dominant and submissive behaviours between all possible pairings of the group members. These should be collected in two tables. So, for an example group of four individuals (identified as A, G, H, and K), we tally the number of aggressive behaviours committed by each individual towards each of the other three group members:

Table 1

For example, A directs aggression towards H on seven occasions. Note also that no aggressive interactions are observed between G and K.

Similarly, we tally the number of submissive behaviours directed towards each individual by the other three group members:

Table 2

For example, H displays submissive behaviours to G on 11 occasions. Note also that no submissive behaviours were recorded between H and K.

Having collected this information, we now calculate the percentage of the aggressive actions between each pair of individuals that each individual directs at the other. For example, within the pairing of A and G, nine aggressive acts are recorded (five by A, and four by G). A is aggressive towards G in 55.6% of their aggressive interactions (= 5 / 9), and G is aggressive towards A in 44.4% (= 4 / 9). If we work out these two percentages within each pairing, we can build up a table giving the percentages of aggressive behaviours given by each individual. If no aggression is seen within a pair, the two corresponding table entries for the pair should be marked as ‘null’, as shown for the two entries between G and K here:

Table 3

Similarly, the percentages of submissive actions received by individuals within each pairing should also be calculated. Again ‘null’ values should be recorded where no submissive actions within a pair were observed, as seen between H and K here:

Table 4

The aggression/submission percentages are then combined by calculating an average aggression/submission score for each possible pairing of group members. For example, the average score for A when it interacts with G is

65.3% = (55.6% + 75.0%) / 2.

If no aggressive actions are recorded for a pair, this average is simply given the value of the percentage of submissive actions (calculated in table 4). So, the average score for G when it interacts with K is 90.9%. Similarly, if no submissive actions are recorded between pair members, the average is assumed to be the percentage of aggressive actions committed by an individual (recorded in table 3). So, the average score for H when it interacts with K is 71.4%. Calculating all possible pairings, we get:

Table 5a

Finally, the Dominance Index for each of the group members is calculated as the mean of the averages calculated for each focal individual, as given in table 5b. For example, the dominance index for A is calculated as 73.4% = (65.3% + 69.2% + 85.7%) / 3.

Table 5b

From this, we can use the Dominance Index rankings to construct a hierarchy. In this case, A > G > H > K.
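For anyone wanting to automate this for a larger group, here is a minimal Python sketch of the whole calculation. The counts in it are invented purely to show the input format (they are not the numbers from tables 1 and 2), and the pairing rules follow the description above.

# Hypothetical counts of acts given by each row individual towards each
# column individual; these numbers are made up, not the worked example above.
animals = ["A", "G", "H", "K"]
aggression = {"A": {"G": 6, "H": 2, "K": 4},
              "G": {"A": 3, "H": 5, "K": 0},
              "H": {"A": 1, "G": 2, "K": 3},
              "K": {"A": 0, "G": 0, "H": 1}}
submission = {"A": {"G": 1, "H": 0, "K": 0},
              "G": {"A": 4, "H": 2, "K": 1},
              "H": {"A": 3, "G": 7, "K": 0},
              "K": {"A": 2, "G": 5, "H": 0}}

def pair_score(focal, other):
    # Mean of the percentage of aggression given and the percentage of
    # submission received by the focal within this pairing; None ('null')
    # if neither type of behaviour was observed between the pair.
    agg_total = aggression[focal][other] + aggression[other][focal]
    sub_total = submission[focal][other] + submission[other][focal]
    percentages = []
    if agg_total:
        percentages.append(100.0 * aggression[focal][other] / agg_total)
    if sub_total:
        percentages.append(100.0 * submission[other][focal] / sub_total)
    return sum(percentages) / len(percentages) if percentages else None

def dominance_index(focal):
    scores = [pair_score(focal, other) for other in animals if other != focal]
    if None in scores:  # the metric falls apart for pairs that never interact
        return None
    return sum(scores) / len(scores)

for a in animals:
    score = dominance_index(a)
    print(a, "no score" if score is None else round(score, 1))

Sorting the individuals by these scores gives the hierarchy, and feeding in the counts from tables 1 and 2 should reproduce the values in table 5b.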

The metric falls apart when there are no aggressive or submissive acts recorded within a pairing, which means that no average score can be recorded in table 5a. This could potentially be remedied by observing the interacting individuals until some agonistic interaction is recorded, but it may be that the non-interacting individuals are able to assess each other without needing to interact (using alternative cues, or through recognising each other from earlier unrecorded interactions). This metric is therefore not ideal if some individuals do not interact with others.

Similarly, a dataset which records few interactions between individuals may be biased by a few anomalous recorded encounters. However, using mean percentages (as calculated in table 5b) removes biases that could be introduced by simply scoring the overall number of ‘wins’ in dyadic agonistic encounters for each individual, which may be incorrectly inflated by many interactions with a small subset of the group members. I’m also curious to see what happens when a group consists of two or more subgroups where interactions tend to be within rather than between subgroups.

Further reading

Zumpe D & Michael RP (1986). Dominance index: a simple measure of relative dominance status in primates. American Journal of Primatology 10: 291-300 | doi: 10.1002/ajp.1350100402

Metrics for dominance interactions: introduction

Many interactions between group-living individuals can be influenced by hierarchies that exist between the interactors. These interactions can be measured in lots of different ways, and once measured, whatever has been scored needs to be processed to give a reproducible estimate of the shape of these interactions.

What this means in practice, if you’re starting a new project with a new study organism, is that you spend a lot of time thinking about what behaviours to record and how to record them, but don’t really consider how to crunch these numbers down to something meaningful at the end. Good experimental design implies that the analysis has been considered during the design of the experiment, but this intermediate stage of generating ‘raw’ information about any hierarchies that are in place may be left out, meaning that something has to be cobbled together post hoc once the work has been done. This is never ideal!

Having supervised a fair number of projects where exactly this has happened, I’ve decided to try and get my head around the various statistics out there that are designed for assessing and ranking hierarchies. Some of these are fairly straightforward, and some are slightly more involved, dipping into social network analysis and other emerging fields in animal behaviour. To make this a useful exercise, I’ll attempt to put together a how-to guide for using them, aimed at researchers with a mixed range of skills in manipulating numbers, and where time permits, I’ll try and add in some practice examples. How well this works depends upon my own understanding, the time I have available, and the limitations of inputting maths into a WordPress blog!

What I won’t be doing (at least, initially) is being particularly critical about which techniques work best: this is a voyage of discovery for me too! I also won’t be focussing on what dominance is for, why it exists, and how it does or doesn’t drive particular group behaviours (but I do discuss, in Rands et al. 2008 and Rands 2011, how leadership decisions don’t necessarily depend upon the hierarchy present). This series of blog postings will take a little time to put together (an index page will give detailed links to other metrics within this blog), so if you’re looking for general advice on the sort of indices that are out there, I strongly recommend hunting down a copy of the excellent book by Hal Whitehead (pages 186-195 in particular).

Further reading

  • Rands SA, Cowlishaw G, Pettifor RA, Rowcliffe JM & Johnstone RA (2008). The emergence of leaders and followers in foraging pairs when the qualities of individuals differ. BMC Evolutionary Biology 8: article 51 | abstract | pdf | full text
  • Rands SA (2011). The effects of dominance on leadership and energetic gain: a dynamic game between pairs of social foragers. PLoS Computational Biology 7: e1002252 | full text | pdf
  • Whitehead H (2008). Analyzing animal societies: quantitative methods for vertebrate social analysis. Chicago: University of Chicago Press