TL;DR: should a simple cache cluster for session storage (using Memcached or Redis) live on the app's servers (i.e. alongside nginx and PHP), or on its own separate instance (e.g. ElastiCache or a custom EC2 instance)?
I'm in the process of using Amazon OpsWorks to set up my web app's infrastructure. I'm leaning toward implementing the session cache through Memcached instances installed on the app layer itself rather than on their own EC2 instances. For instance:
[ Load Balancer ]
/ | \
[ App Layer 1 ] – [ App Layer 2 ] – [ App Layer 3 ]   * w/ Memcached or Redis
vs.
[ Load Balancer ]
/ | \
[ App Layer 1 ] [ App Layer 2 ] [ App Layer 3 ]
\ | /
[ Cache Server(s) ]   * ElastiCache or custom EC2 w/ Memcached or Redis
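Either way, the PHP side of the wiring would be roughly the same; only the endpoints change. A minimal sketch of what I mean, assuming the PECL memcached extension (the IPs and hostname are placeholders):

    ; php.ini -- option 1: cache co-located on the app layer,
    ; with all three app servers pooled into one session cluster
    session.save_handler = memcached
    session.save_path    = "10.0.1.10:11211,10.0.1.11:11211,10.0.1.12:11211"

    ; php.ini -- option 2: dedicated cache tier, a single
    ; ElastiCache (or custom EC2) endpoint instead
    ; session.save_path = "sessions.abc123.cfg.use1.cache.amazonaws.com:11211"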
What are the pros and cons here? To me the latter setup seems unnecessary, though I can see how a high-traffic site with a very large session cache might need it.
Is there a reason I might not want to run Redis or Memcached alongside my nginx/PHP app server stack? Does it make auto-scaling or performance monitoring more difficult, perhaps?
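For reference, the Redis variant I'd be comparing looks much the same, assuming the phpredis extension (host is a placeholder):

    ; php.ini -- phpredis session handler, co-located or remote
    session.save_handler = redis
    session.save_path    = "tcp://10.0.1.10:6379"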