Viewing Issue Advanced Details
ID:               0004725
Category:         [Resin]
Severity:         minor
Reproducibility:  always
Date Submitted:   08-24-11 10:08
Last Update:      08-26-11 11:24
Reporter:         ferg
Assigned To:      ferg
View Status:      public
Priority:         urgent
Status:           closed
Resolution:       fixed
Projection:       none
ETA:              none
Product Version:  3.1.6
Fixed in Version: 4.0.22
Summary:          0004725: slow post limits
Description:
(Reported by Santosh Rau) We saw a lot of slow clients dribbling in requests that day, so even if we use JNI libraries for the sockets, there is still a chance we could exhaust the 200 threads we had. Is there any way we can protect our service from such clients other than continually increasing max threads? Would it be possible to break a connection once the request duration exceeds a preconfigured 'n' seconds? And finally, is it possible for you to add this to the 3.1.6 code?
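The request amounts to a per-request duration limit: break any connection whose request has not finished within a preconfigured number of seconds. Below is a minimal, hypothetical sketch of that idea in plain Java. It is not Resin's implementation; the class and names (SlowPostGuard, REQUEST_TIMEOUT_SECONDS) are illustrative only.

import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

/*
 * Illustrative sketch only: a request-duration watchdog that closes any
 * connection whose request has not finished arriving within a configured
 * limit, so slow clients cannot pin a worker thread indefinitely.
 */
public class SlowPostGuard {
    // The preconfigured 'n' seconds asked for in the report (hypothetical value).
    private static final int REQUEST_TIMEOUT_SECONDS = 30;

    private final ScheduledExecutorService watchdog =
        Executors.newSingleThreadScheduledExecutor();

    public void serve(int port) throws IOException {
        try (ServerSocket server = new ServerSocket(port)) {
            while (true) {
                Socket client = server.accept();
                // Schedule a task that forcibly closes the socket if the
                // request is still in flight when the deadline expires.
                ScheduledFuture<?> deadline = watchdog.schedule(() -> {
                    try {
                        client.close();   // breaks the slow connection
                    } catch (IOException ignored) {
                    }
                }, REQUEST_TIMEOUT_SECONDS, TimeUnit.SECONDS);

                new Thread(() -> handle(client, deadline)).start();
            }
        }
    }

    private void handle(Socket client, ScheduledFuture<?> deadline) {
        try (InputStream in = client.getInputStream()) {
            byte[] buf = new byte[8192];
            // Read the request; a dribbling client trips the watchdog
            // before this loop completes.
            while (in.read(buf) != -1) {
                // ... parse the request / hand off to the real handler ...
            }
            deadline.cancel(false);  // request finished in time
        } catch (IOException e) {
            // Socket was closed by the watchdog or the client went away.
        } finally {
            try { client.close(); } catch (IOException ignored) { }
        }
    }

    public static void main(String[] args) throws IOException {
        new SlowPostGuard().serve(8080);
    }
}

A production server would start the timer when the request actually begins rather than at accept time, and would expose the limit as a configuration value instead of a compile-time constant.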
Steps To Reproduce:
Additional Information:
Attached Files: