I am not even sure where to start explaining why using that many threads in Apache is a bad idea. As a start, I would suggest you go watch my PyCon talks:
The short answer is that if you have a real need to handle large numbers of truly concurrent, long-running requests on a single server, you probably should not be using Apache for them. You should be using an event-based (async) system for the specific requests that have those non-standard requirements. In other words, you don't need to switch your whole application over to the async model; instead, vertically partition your application and split out the URLs whose requirements differ from the rest of your application (there is a rough sketch of this further down). That way you can tailor the hosting to the requirements of each part, rather than forcing your whole application to run under constraints imposed by a small part of it.
Now in reality though, most of the time when people think they need to be able to handle such an insane number of concurrent requests on one server, they don't. For requests with a short response time, handling 10000 requests per second does not need 10000 threads, because each thread can handle many requests in each one-second time slot. If the average response time is, say, 100 milliseconds, one thread can serve about 10 requests a second, so roughly 1000 threads, not 10000, would be enough in theory.
The one thing that can screw this up though is slow clients and keep-alive. This is the killer for Apache. So, go stick nginx in front of it as a proxy and turn off keep-alive in Apache, but leave keep-alive on in nginx. Using nginx as a proxy will isolate Apache from slow clients and allow it to perform better with fewer resources. This is because nginx will generally only hand a request off to Apache once it has the complete request, so Apache can handle it immediately and isn't tied up wasting resources waiting on a slow client.
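To give a rough idea of that arrangement, a minimal sketch might look like the following. The port numbers and timeout values here are just placeholders I have picked for illustration, not something out of a real setup:

```
# nginx front end (inside the http block of nginx.conf).
# Clients talk to nginx; nginx talks to Apache on a backend port.
upstream apache_backend {
    server 127.0.0.1:8080;       # Apache listening on a local port
}

server {
    listen 80;
    keepalive_timeout 65;        # keep-alive stays on for the slow clients here

    location / {
        proxy_pass http://apache_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

# Apache back end (httpd.conf).
Listen 8080
KeepAlive Off                    # nginx holds the client connections, not Apache
```

The exact values don't matter much; the important part is that keep-alive connections are held by nginx and turned off in Apache.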
If you do have that requirement for very long-running requests (long polling) for a subset of URLs, then have nginx proxy just those URLs to a separate async-based server. That way you don't have to deal with the pain of using async systems for the rest of your otherwise normal web application.
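Again only as a sketch, with a made-up URL and ports, that vertical partitioning would look something like:

```
server {
    listen 80;

    # Long polling (long-running) requests go to a separate async server.
    location /poll {
        proxy_pass http://127.0.0.1:9000;
        proxy_read_timeout 3600;     # allow the response to be held open
    }

    # Everything else goes to the normal Apache-hosted application.
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```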
This all said, also remember that the web server is not usually going to be your bottleneck. Who cares if the web server can handle 10000+ requests per second if your actual web application stack, including the database, can only handle 10 requests per second? That is going to be your real problem, and if you don't improve your web application performance, tweaking the web server is going to make no difference at all. The only solution then would be to scale horizontally, with more than one host and load balancing across them.
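For what it is worth, that load balancing side is also straightforward to sketch with nginx in front; the hostnames here are made up purely for illustration:

```
# Spread requests across multiple application hosts.
upstream app_cluster {
    server app1.internal:8080;
    server app2.internal:8080;
    server app3.internal:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_cluster;
    }
}
```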
In order to find the real bottlenecks, you are going to need performance monitoring on your real-world application, with real traffic from real users. You can see more about performance monitoring in my talks.