A sign test can be run on a single numeric response variable, or on paired numeric measurements, with *any* distribution to test whether the population median (or the median difference of the paired measurements) equals a specific claimed value. This test is an alternative to the one-sample *t*-test or paired *t*-test when the data fail the normality assumption, or when the sample size is too small to assess normality.

For example, we could run a sign test if we wanted to see whether the median amount of sleep students get on weeknights is eight hours, but the distribution of our sample values was skewed. A sign test determines how likely it is to observe the given number of sample values above (or below) the claimed median, assuming the median actually equals that value.

**Hypotheses:**

*H*_{o}: The population median equals the claimed value.

*H*_{A}: The population median does not equal the claimed value.

Testing whether the median of a distribution equals some number is equivalent to testing whether the proportion of values less than (or greater than) that number equals 0.50. Running a sign test is therefore equivalent to running a binomial test with the null hypothesis that the proportion of “successes” equals 0.5, though the interpretation of the results will differ. Values exactly equal to the claimed median are typically dropped before counting.
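This equivalence can be sketched in R with `binom.test()`. The sleep values below are hypothetical, invented purely for illustration; the claimed median of eight hours matches the sleep example above.

```r
# Hypothetical weeknight sleep data (hours); values are illustrative only.
sleep <- c(6.5, 7, 7.5, 6, 8.5, 6.5, 7, 5.5, 9, 6)
claimed_median <- 8

# Drop any values exactly equal to the claimed median, then count
# "successes" (values above the claimed median).
nonties <- sleep[sleep != claimed_median]
n_above <- sum(nonties > claimed_median)

# A two-sided sign test is a binomial test with null proportion 0.5.
res <- binom.test(n_above, n = length(nonties), p = 0.5)
res
```

Counting the values *below* the claimed median instead would give the same two-sided *p*-value, since the null proportion is 0.5.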

**Assumptions:**

- The observations are a random sample and are independent of one another.
- The response variable is continuous or at least ordinal, so each value can be classified as above or below the claimed median.
- No assumption is made about the shape of the distribution; normality is not required.

**Example 1: Hand calculation video**

This video shows how to run a binomial test to determine whether the true proportion of leopards with a solid black coat color differs from 0.35. A sign test is run the same way, except the null hypothesis would be that the true proportion of values greater than (or less than) the claimed median equals 0.5.
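The hand calculation in the video can be mirrored by summing binomial probabilities directly. The counts below (10 black-coated leopards out of 40 sampled) are hypothetical stand-ins, not the video's actual data; only the null proportion 0.35 comes from the example.

```r
# Hypothetical counts for illustration; the video's data may differ.
x  <- 10    # observed "successes" (solid black coats)
n  <- 40    # sample size
p0 <- 0.35  # claimed proportion under the null hypothesis

# Exact two-sided p-value: sum the probabilities of every outcome that
# is no more likely than the observed one under the null hypothesis.
probs   <- dbinom(0:n, n, p0)
p_value <- sum(probs[probs <= dbinom(x, n, p0)])
p_value
```

This is the same calculation `binom.test(x, n, p0)` performs internally for a two-sided alternative.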

**Example 2: How to run in RStudio**

Dataset used in video

R script file used in video

Sample conclusion: Because *p* > 0.05, we have no evidence to suggest that the median number of days dogs spend in the shelter differs from seven.
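A conclusion like this could come from a workflow such as the following sketch. The shelter-stay values are hypothetical and stand in for the dataset used in the video; the claimed median of seven days matches the sample conclusion.

```r
# Hypothetical shelter-stay data (days); the video's dataset may differ.
days <- c(3, 5, 12, 7, 9, 4, 15, 6, 8, 2, 10, 7)
claimed_median <- 7

# Remove ties with the claimed median, count values above it,
# and run the sign test as a binomial test with null proportion 0.5.
nonties <- days[days != claimed_median]
res <- binom.test(sum(nonties > claimed_median),
                  n = length(nonties), p = 0.5)
res$p.value  # p > 0.05 for these made-up data, matching the conclusion
```

With a significant result, the direction of the difference is read from whether the observed proportion above the claimed median is greater or less than 0.5.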