I have 2 samples. For each one, I count the number of objects at each time point. I plot the number of objects on the y-axis and the time in hours on the x-axis. Excel gives me the option to plot error bars using either the standard deviation or the standard error. I'd like to know what the difference between them is, and whether standard error bars are enough to show that the difference between my two samples is significant. Even after reading some definitions on the internet, it's still very confusing to me, as a newbie in statistics.
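To check whether I at least understand the definitions, here is a minimal sketch of how I think the two quantities would be computed for one time point (in Python just for illustration; my actual analysis is in Excel, and `sample_a` / `sample_b` are made-up counts, not my real data):

```python
import math

# Hypothetical replicate counts for a single time point
sample_a = [12, 15, 11, 14]
sample_b = [20, 22, 19, 23]

def sd_and_se(values):
    """Return the sample standard deviation and the standard error of the mean."""
    n = len(values)
    mean = sum(values) / n
    # Sample standard deviation (divide by n - 1)
    sd = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
    # Standard error of the mean = sd / sqrt(n)
    se = sd / math.sqrt(n)
    return sd, se

print(sd_and_se(sample_a))
print(sd_and_se(sample_b))
```

Is that the right idea, i.e. the error bar options in Excel correspond to `sd` and `se` above?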
This is my graph, and this is what it looks like when I plot the standard errors. Is that probably not enough to judge whether my data are significant?