More technically, the p-value is an index of the reliability of a result.
The higher the p-value, the less we can believe that the observed relation between variables in the sample is a reliable indicator of the relation between the respective variables in the population. In other words, a high p-value suggests that our result may well have occurred just by chance and is not an indication 'that something has happened' in relation to the background population generally.

Specifically, the p-value represents the probability of error involved in accepting our observed result as valid, that is, as "representative of the population."
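To make this concrete, here is a minimal sketch in Python of how such a p-value is obtained in practice (the data and variable names are hypothetical, invented purely for illustration; pearsonr from SciPy returns the sample correlation together with its p-value):

    # Minimal sketch: computing the p-value for an observed correlation.
    # The data below are hypothetical, made up purely for illustration.
    from scipy.stats import pearsonr

    hours  = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]           # hours studied
    scores = [52, 55, 61, 58, 66, 70, 68, 75, 79, 83]  # test scores

    r, p = pearsonr(hours, scores)
    print(f"correlation r = {r:.3f}, p-value = {p:.4f}")

The smaller the printed p-value, the smaller the probability of error we accept in treating the observed correlation as representative of the population.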

For example, a p-value of .05 (that is, 5%, or 1 in 20) indicates that there is only a 5% likelihood (i.e., probability) that the relation between the variables found in our sample is a "fluke" result, one that just happens to have occurred by chance.

In other words, assuming that in the population there was no relation between those variables whatsoever, and that we repeated experiments such as ours one after another, we could expect that in approximately one out of every 20 replications of the experiment the relation between the variables in question would be equal to or stronger than in ours. (Note that this is not the same as saying that, given that there IS a relationship between the variables, we can expect to replicate the results 5% of the time or 95% of the time; when there is a relationship between the variables in the population, the probability of replicating the study and finding that relationship is related to the statistical power of the design. See also Power Analysis.) In many areas of research, the p-value of .05 is customarily treated as a "border-line acceptable" error level.
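The "1 in 20" reading can be checked directly by simulation. The sketch below is illustrative rather than taken from the text: the number of replications, the sample size of 20, and the cutoff of .444 (the two-tailed .05 critical correlation for a sample of 20) are all choices made for this example. We draw two variables that are entirely unrelated in the population and count the replications in which the sample correlation is at least as strong as that cutoff:

    import numpy as np

    rng = np.random.default_rng(0)   # fixed seed so the sketch is reproducible
    n_reps, n = 10_000, 20           # 10,000 replications, 20 cases each
    cutoff_r = 0.444                 # two-tailed .05 critical |r| for n = 20

    flukes = 0
    for _ in range(n_reps):
        # No relation whatsoever between x and y in the population
        x = rng.normal(size=n)
        y = rng.normal(size=n)
        r = np.corrcoef(x, y)[0, 1]  # sample correlation in this replication
        if abs(r) >= cutoff_r:
            flukes += 1

    print(f"fraction of replications at least that strong: {flukes / n_reps:.3f}")

The printed fraction comes out near .05: even with no relation in the population, roughly one replication in 20 yields a sample relation strong enough to pass the customary border-line.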