I have some code that runs several REST queries over an SSH-forwarded connection to an AWS machine (the queries hit a Solr server running on that machine). The queries are issued against my localhost, which is port-forwarded to the AWS instance.
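For context, the client side boils down to a loop roughly like the one below. This is just a minimal sketch, not my actual code: the port, core name, and query parameters are placeholders, and I'm showing plain HttpURLConnection rather than whatever Solr client you might use.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SolrQueryLoop {
        public static void main(String[] args) throws Exception {
            // localhost:8983 is forwarded to the Solr port on the AWS box,
            // e.g. via something like: ssh -L 8983:localhost:8983 user@aws-host
            for (int i = 0; i < 500; i++) {
                URL url = new URL("http://localhost:8983/solr/collection1/select?q=id:" + i + "&wt=json");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
                StringBuilder response = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) {
                    response.append(line);
                }
                in.close();        // close the stream so the underlying socket can be released
                conn.disconnect();
                // ... parse the response and do work with it ...
            }
        }
    }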
The code initially runs fine, fetching data as needed, but after running for a while it stalls (in Eclipse).
At that exact moment, the terminal where I started my SSH tunnel freezes completely and fills up with the string:
"accept : too many open files"
Because this endless output is not associated with a bash prompt (I can't tell whether the SSH connection is still alive, and there is no text indicating which shell I'm in... just unbridled, relentless print statements), I can't tell whether it's coming from the Amazon machine or from my client terminal.
I want to find the cause of this behavior and pinpoint which machine is making my terminal explode.
To test which of the two machines was producing the endless error output, I ran the ulimit command on the server and found that the maximum number of open files allowed (on the AWS server) was well above the number of files actually open (also checked via ulimit) at any given time while the client program (running from my IDE) was executing.
I ran the same test on my client and found no significant increase in the number of open files.
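In case it's relevant, the client-side numbers can also be sampled from inside the JVM itself; the sketch below is just an illustration of that kind of check (it assumes a HotSpot/OpenJDK JVM on a Unix-like OS, since com.sun.management.UnixOperatingSystemMXBean is not a portable API).

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;
    import com.sun.management.UnixOperatingSystemMXBean;

    public class FdMonitor {
        public static void printFdUsage() {
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            if (os instanceof UnixOperatingSystemMXBean) {
                UnixOperatingSystemMXBean unixOs = (UnixOperatingSystemMXBean) os;
                // descriptors currently held by this JVM vs. the per-process limit
                System.out.println("open fds: " + unixOs.getOpenFileDescriptorCount());
                System.out.println("max fds:  " + unixOs.getMaxFileDescriptorCount());
            }
        }
    }

Calling something like this before and after each batch of queries would show whether the client process itself is accumulating descriptors, independent of what ulimit reports for the shell.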
Some side details: I'm running several hundred queries in a short time period against a Solr server that holds over 100 GB of data.
Any hints on how to determine why my SSH'd Mac OS X terminal is dying and endlessly printing this message would be very useful to me, whether or not they are specific to Solr. That said, any insight into why this would happen when using a Solr service may also help solve the problem.