
I have two samples. For each one I count the number of objects over time, and I plot the number of objects on the y-axis against the time in hours on the x-axis. In Excel I have the option to plot error bars using either the standard deviation or the standard error. I'd like to know what the difference between them is, and whether standard error bars are enough to show that the difference between my two samples is significant. Even after reading some definitions on the internet, it's still very confusing to me, as I'm a newbie in statistics.

This is my graph with the standard error bars plotted. Probably not enough to judge the significance of my data?

[graph: number of objects vs. time in hours, with standard error bars]


1 Answer


I think you mean you have two sample populations, right? If you literally have only two samples, that means you have only 2 'records'. The standard error is the standard deviation of a sample statistic (usually the sample mean), typically estimated as the sample standard deviation divided by the square root of the sample size; the standard deviation describes the spread of the individual measurements themselves. Significance is usually derived with a formula (a statistical test) that helps us understand whether an 'effect' could be due to chance alone.
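
To make the distinction concrete, here is a minimal sketch (the counts array is invented for illustration): the standard deviation measures the spread of the individual measurements, while the standard error (SD divided by the square root of n) measures how precisely the sample mean is estimated.

```python
import numpy as np

# Hypothetical counts of objects observed at several time points (made up for illustration)
counts = np.array([12, 15, 14, 18, 16, 13, 17])

# Standard deviation: spread of the individual observations (ddof=1 -> sample estimate)
sd = counts.std(ddof=1)

# Standard error of the mean: how precisely the mean itself is estimated
se = sd / np.sqrt(len(counts))

print(f"mean = {counts.mean():.2f}, SD = {sd:.2f}, SE = {se:.2f}")
```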

Also, if you really do mean that you have only two samples, that is not enough for statistical research. Most large studies use thousands of samples. However, for "statistical" purposes I was taught in a math class that around 36 is generally acceptable, as long as there's no bias and it's for school work. So... I hope this helps.
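
As for judging significance, error bars alone don't settle it; you would normally run a test on the raw measurements. A minimal sketch, assuming you have the individual counts for both populations (the numbers below are invented), using Welch's two-sample t-test from SciPy:

```python
import numpy as np
from scipy import stats

# Hypothetical object counts from the two sample populations
sample_a = np.array([12, 15, 14, 18, 16, 13, 17])
sample_b = np.array([20, 22, 19, 25, 21, 23, 24])

# Welch's two-sample t-test: does not assume equal variances
t_stat, p_value = stats.ttest_ind(sample_a, sample_b, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (commonly < 0.05) suggests the difference between the
# two means is unlikely to be due to chance alone.
```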

edit: oh, and a standard deviation indicates the "spread" of the data: http://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Standard_deviation_diagram.svg/350px-Standard_deviation_diagram.svg.png
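
If it helps to see both kinds of error bars outside of Excel, here is a sketch with made-up hourly counts and replicate measurements (all numbers are hypothetical): SD bars show the spread of the individual measurements at each hour, SE bars show the uncertainty of each mean.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data: replicate object counts at each hour
hours = np.array([1, 2, 3, 4, 5])
replicates = np.array([
    [10, 12, 11],
    [14, 15, 13],
    [18, 17, 19],
    [22, 21, 23],
    [25, 27, 26],
])

means = replicates.mean(axis=1)
sd = replicates.std(axis=1, ddof=1)
se = sd / np.sqrt(replicates.shape[1])

# SD bars: spread of individual measurements; SE bars: uncertainty of each mean
plt.errorbar(hours, means, yerr=sd, fmt='o-', capsize=4, label='mean ± SD')
plt.errorbar(hours, means, yerr=se, fmt='s--', capsize=4, label='mean ± SE')
plt.xlabel('time (hours)')
plt.ylabel('number of objects')
plt.legend()
plt.show()
```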

answered 2014-04-07T19:10:17.500