First, you don't need to guess. You can measure.
- Use the developer tools available for your browser (Firebug for Firefox, Developer Tools for Chrome, etc.), watch the network requests, and note how long each one takes.
- Observe the load on your server while a single session runs. If one session already pushes the load to anything other than 0%, you're never going to see 1000 sessions running simultaneously.
- Repeat the steps above, but with a bunch of browsers open. It should be easy to get 20 browser windows open and running.
- Remember that, as server load climbs past 50%, performance degrades in an increasingly non-linear way. Once your system is blocking or thrashing heavily, you can't expect to gainfully add any more clients.
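The measuring steps above can be sketched in code. Here's a minimal concurrent-load measurement loop (Python for illustration; the local test server, port, and client/request counts are placeholders, not tuned values -- point it at your real endpoint instead):

```python
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def start_test_server(port=8765):
    """Serve a trivial response so the load loop has something to hit."""
    class Handler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"ok"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        def log_message(self, *args):  # silence per-request logging
            pass
    server = http.server.ThreadingHTTPServer(("127.0.0.1", port), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def timed_get(url):
    """Time one full request, including reading the body."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

def measure(url, clients=20, requests_per_client=5):
    """Fire concurrent requests and report min/avg/max latency."""
    with ThreadPoolExecutor(max_workers=clients) as pool:
        latencies = list(pool.map(
            timed_get, [url] * (clients * requests_per_client)))
    return min(latencies), sum(latencies) / len(latencies), max(latencies)

if __name__ == "__main__":
    server = start_test_server()
    lo, avg, hi = measure("http://127.0.0.1:8765/")
    print(f"min {lo*1000:.1f} ms, avg {avg*1000:.1f} ms, max {hi*1000:.1f} ms")
    server.shutdown()
```

Watch how the avg/max numbers move as you raise `clients` -- that's where the non-linearity past 50% load shows up.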
Once you have a baseline measurement, you can think about optimization -- do you need it and how big a problem do you have on your hands? Some of the usual solutions:
- If at all possible, serve a static file or a piece of data from your APC cache.
- If your data is cacheable but you have multiple web servers, consider a solution like memcached, MongoDB, or some other centralized, very fast key-based store.
- If the data is dynamically retrieved from a database, consider using persistent database connections.
- If the CPU load is high per request, you probably have something expensive in your code path. Try to optimize the code path for that particular request, even if you have to hand-craft a special controller for it and bypass your usual framework.
- If the network latency is high per request, use HTTP keep-alive headers (`Connection: keep-alive`) to convince the client and server to hold the connection open across requests.
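For illustration, here's the cache-aside pattern that the APC/memcached options above share, sketched as a tiny in-process TTL store (Python; this is a stand-in to show the shape of the logic, not a replacement for a real cache -- the key names and TTL are arbitrary):

```python
import time

class TTLCache:
    """Tiny in-process stand-in for an APC/memcached-style key store:
    values expire after ttl seconds, so a slow lookup is only paid
    once per expiry window."""

    def __init__(self, ttl=30.0):
        self.ttl = ttl
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # stale: evict and report a miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

def cached_fetch(cache, key, expensive_lookup):
    """Cache-aside: return the cached value, or compute and store it."""
    value = cache.get(key)
    if value is None:
        value = expensive_lookup(key)  # e.g. a database query
        cache.set(key, value)
    return value
```

The point is that the second and later requests inside the TTL window never touch the expensive lookup at all.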
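And a sketch of the keep-alive point from the client side (Python's `http.client` here; with HTTP/1.1, persistent connections are the default, so several requests ride one TCP connection -- the host and paths below are placeholders):

```python
import http.client

def fetch_many(host, paths, port=80):
    """Issue several GETs over a single persistent HTTP/1.1 connection,
    avoiding a fresh TCP handshake per request."""
    conn = http.client.HTTPConnection(host, port)  # one TCP connection
    bodies = []
    for path in paths:
        conn.request("GET", path)   # HTTP/1.1: keep-alive is implied
        resp = conn.getresponse()
        bodies.append(resp.read())  # must drain before the next request
    conn.close()
    return bodies
```

Each avoided handshake saves a round trip, which is exactly the latency cost you're trying to amortize.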
Finally, if the performance you want seems far out of reach, you'll need to consider an architectural change. Using WebSockets would mean a very different code path, but could conceivably yield far better performance. I'd have to know more about what you're doing with those requests to say anything more specific.