Based on some recent statistics I've been gathering while building web services on our system, it seems I may have an inaccurate understanding of how:
1. OS and/or Apache "builds new jobs" for an Apache HTTP server
2. OS and/or Apache keeps programs in memory and how activation groups help
3. OS and/or Apache scales with parallel/concurrent processing
4. OS and/or Apache jobs utilize system resources to run "efficiently" and "play nice" with other jobs running on the system
I have a simple RPG program that opens no files, returns with LR on, and runs in a named activation group. The program grabs a start time and an end time, inserts a row into a table with those values, and exits. The Apache config is set to prestart 50 jobs, and when each job starts it calls this simple program (on the assumption that it will be resident in memory and ready to go). When I call this service serially (1 client, 10 calls, one after another), the single HTTP job that handles the requests processes each one within 200 ms. When I call the service in parallel, per-request execution time starts to climb: going from 1x10 requests to 5x10 to 10x10 and up to 50x10, each job slows down to well beyond 5 seconds per request. This is the opposite of what I thought the iSeries does for you when it comes to job/resource management. I would expect each request to slow down somewhat, but not by 5 seconds.
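For anyone who wants to reproduce the measurement, here is a sketch of the kind of load driver I'm describing, written in Python. The URL, the stub handler, and its 50 ms sleep are stand-ins for the real RPG-backed endpoint (the stub is only there so the script is self-contained); point `run_load` at the actual service to get the 1x10 / 5x10 / 10x10 numbers.

```python
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor


class StubHandler(http.server.BaseHTTPRequestHandler):
    """Stand-in for the RPG service: sleeps ~50 ms, then answers 200."""

    def do_GET(self):
        time.sleep(0.05)  # simulate the program's work
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass


def timed_get(url):
    """Return the wall-clock seconds one GET request took."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start


def run_load(url, clients, calls_per_client):
    """Issue clients x calls_per_client requests, `clients` in parallel.
    Returns the mean per-request latency in seconds."""
    def worker(_):
        # Each client issues its calls serially, like one queued caller.
        return [timed_get(url) for _ in range(calls_per_client)]

    with ThreadPoolExecutor(max_workers=clients) as pool:
        results = pool.map(worker, range(clients))
    latencies = [t for batch in results for t in batch]
    return sum(latencies) / len(latencies)


if __name__ == "__main__":
    # Local stub so the script runs standalone; replace `url` with the
    # real service endpoint to measure the actual system.
    server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = "http://127.0.0.1:%d/" % server.server_address[1]

    for clients in (1, 5, 10):
        avg = run_load(url, clients, 10)
        print("%2dx10 requests: avg %.0f ms per request" % (clients, avg * 1000))

    server.shutdown()
```

Against the real endpoint, a flat average as `clients` grows would indicate the prestarted jobs are absorbing the load; a steeply rising average points at contention somewhere below Apache.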
I do realize there are a lot of factors to take into consideration, and I am hoping to learn more about what all of those factors are. One thing that comes to mind is configuring this specific HTTP server to run in its own *POOL so that it does not compete with all of the other work on the system. Before I take that approach, I wanted to make sure I wasn't overlooking something.
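If the private-pool route does turn out to be worth trying, a rough CL sketch would look like the following. The pool number, size, and activity level are illustrative assumptions only, and should be checked against WRKSYSSTS / WRKSHRPOOL on the actual system first:

```
/* Illustrative only: give the HTTP server subsystem a shared pool  */
/* of its own instead of running its jobs out of *BASE. The pool    */
/* chosen (*SHRPOOL2) and the SIZE/ACTLVL values are assumptions.   */
CHGSHRPOOL POOL(*SHRPOOL2) SIZE(2048) ACTLVL(60)
CHGSBSD    SBSD(QHTTPSVR/QHTTPSVR) POOLS((1 *SHRPOOL2))
```

The subsystem would need to be ended and restarted for the pool change to take effect, and the activity level should be sized against the number of prestarted jobs (50 here) so requests are not queued waiting for an activity-level slot.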