cwhsu - 2 months ago
Linux Question

php curl localhost is slow when making concurrent requests

I've run into an interesting issue whose root cause I'm not sure about. I have a server with two virtual hosts, A and B, serving a.php on port 81 and b.php on port 80. I wrote a simple PHP script, a.php, which looks like this:

<?php

echo "from A server\n";


And another simple PHP script, b.php, on server B:

<?php

echo "B server:\n";

// create curl resource
$ch = curl_init();

// set url
curl_setopt($ch, CURLOPT_URL, "localhost:81/a.php");

//return the transfer as a string
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);

// $output contains the output string
$output = curl_exec($ch);

// close curl resource to free up system resources
curl_close($ch);

echo $output;


When making concurrent requests using ab, I get the following results:

ab -n 10 -c 5 http://192.168.10.173/b.php
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.10.173 (be patient).....done


Server Software: nginx/1.10.0
Server Hostname: 192.168.10.173
Server Port: 80

Document Path: /b.php
Document Length: 26 bytes

Concurrency Level: 5
Time taken for tests: 2.680 seconds
Complete requests: 10
Failed requests: 0
Total transferred: 1720 bytes
HTML transferred: 260 bytes
Requests per second: 3.73 [#/sec] (mean)
Time per request: 1340.197 [ms] (mean)
Time per request: 268.039 [ms] (mean, across all concurrent requests)
Transfer rate: 0.63 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 1
Processing: 2 1339 1408.8 2676 2676
Waiting: 2 1339 1408.6 2676 2676
Total: 3 1340 1408.8 2676 2677

Percentage of the requests served within a certain time (ms)
50% 2676
66% 2676
75% 2676
80% 2676
90% 2677
95% 2677
98% 2677
99% 2677
100% 2677 (longest request)


But making 1000 requests with concurrency level 1 is extremely fast:

$ ab -n 1000 -c 1 http://192.168.10.173/b.php
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.10.173 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software: nginx/1.10.0
Server Hostname: 192.168.10.173
Server Port: 80

Document Path: /b.php
Document Length: 26 bytes

Concurrency Level: 1
Time taken for tests: 1.659 seconds
Complete requests: 1000
Failed requests: 0
Total transferred: 172000 bytes
HTML transferred: 26000 bytes
Requests per second: 602.86 [#/sec] (mean)
Time per request: 1.659 [ms] (mean)
Time per request: 1.659 [ms] (mean, across all concurrent requests)
Transfer rate: 101.26 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 1
Processing: 1 1 10.3 1 201
Waiting: 1 1 10.3 1 201
Total: 1 2 10.3 1 201

Percentage of the requests served within a certain time (ms)
50% 1
66% 1
75% 1
80% 1
90% 1
95% 1
98% 1
99% 2
100% 201 (longest request)


Can anyone explain why this happens? I really want to know the root cause. Is it an issue with curl? It doesn't feel like a network bottleneck or an open-file limit, since the concurrency is only 5. By the way, I also tried the same thing with guzzlehttp, and the result was the same. I ran ab from my laptop, which is on the same local network as the server. Besides, it certainly has nothing to do with network bandwidth, because the requests between hosts A and B go over localhost.




I modified the code so we can test it more flexibly:

<?php

require 'vendor/autoload.php';

use GuzzleHttp\Client;

$opt = 1;
$url = 'http://localhost:81/a.php';

switch ($opt) {
    case 1:
        // create curl resource
        $ch = curl_init();

        // set url
        curl_setopt($ch, CURLOPT_URL, $url);

        // return the transfer as a string
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);

        // $output contains the output string
        $output = curl_exec($ch);

        curl_close($ch);

        echo $output;
        break;
    case 2:
        $client = new Client();
        $response = $client->request('GET', $url);
        echo $response->getBody();
        break;
    case 3:
        echo file_get_contents($url);
        break;
    default:
        echo "no opt";
}

echo "app server:\n";


I tried file_get_contents, but there is no obvious difference when switching to it. With concurrency 1, all methods perform well, but they all start degrading as the concurrency increases.




I think I found something related to this issue, so I posted another question: concurrent curl could not resolve host. That might be the root cause, but I don't have an answer yet.
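One way to check whether name resolution is really the slow step is to time it per request using curl's transfer info. This is a diagnostic sketch I'd add to the setup above (not part of the original question); CURLINFO_NAMELOOKUP_TIME reports the seconds spent until name resolution completed:

```php
<?php

// Diagnostic sketch: measure how much of each request is spent on DNS.
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://localhost:81/a.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);

$output = curl_exec($ch);
if ($output === false) {
    // Surface resolver/connect failures instead of failing silently.
    echo 'curl error: ' . curl_error($ch) . "\n";
} else {
    printf("namelookup: %.3fs, total: %.3fs\n",
        curl_getinfo($ch, CURLINFO_NAMELOOKUP_TIME),
        curl_getinfo($ch, CURLINFO_TOTAL_TIME));
}
curl_close($ch);
```

If the namelookup time dominates the total under load, the resolver is the bottleneck; if it stays near zero, the slowdown is somewhere else.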




After trying for so long, I think this is definitely related to name resolution. Here is a PHP script that can run at concurrency level 500:

<?php

require 'vendor/autoload.php';

use GuzzleHttp\Client;

$opt = 1;
$url = 'http://localhost:81/a.php';

switch ($opt) {
    case 1:
        // create curl resource
        $ch = curl_init();

        // set url
        curl_setopt($ch, CURLOPT_URL, $url);

        // return the transfer as a string
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);

        curl_setopt($ch, CURLOPT_PROXY, 'localhost');

        // $output contains the output string
        $output = curl_exec($ch);

        curl_close($ch);

        echo $output;
        break;
    case 2:
        $client = new Client();
        $response = $client->request('GET', $url, ['proxy' => 'localhost']);
        echo $response->getBody();
        break;
    case 3:
        echo file_get_contents($url);
        break;
    default:
        echo "no opt";
}

echo "app server:\n";


What really matters are

curl_setopt($ch, CURLOPT_PROXY, 'localhost');

and

$response = $client->request('GET', $url, ['proxy' => 'localhost']);

These lines tell curl to use localhost as a proxy.
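If routing everything through a proxy feels too heavy-handed, another way to take the system resolver out of the picture is to pin the hostname with CURLOPT_RESOLVE (available since PHP 5.5 / libcurl 7.21.3). This is a sketch under the same setup; it assumes the vhost also answers on 127.0.0.1:

```php
<?php

// Sketch: pre-seed curl's DNS cache so 'localhost' never hits the
// system resolver. Entry format is "host:port:address".
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://localhost:81/a.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_RESOLVE, ['localhost:81:127.0.0.1']);

$output = curl_exec($ch);
curl_close($ch);

echo $output;
```

Unlike the proxy trick, this only affects how the one hostname is resolved; the request path is otherwise unchanged.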

And here is the result of the ab test:

ab -n 1000 -c 500 http://192.168.10.173/b.php
This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.10.173 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software: nginx/1.10.0
Server Hostname: 192.168.10.173
Server Port: 80

Document Path: /b.php
Document Length: 182 bytes

Concurrency Level: 500
Time taken for tests: 0.251 seconds
Complete requests: 1000
Failed requests: 184
(Connect: 0, Receive: 0, Length: 184, Exceptions: 0)
Non-2xx responses: 816
Total transferred: 308960 bytes
HTML transferred: 150720 bytes
Requests per second: 3985.59 [#/sec] (mean)
Time per request: 125.452 [ms] (mean)
Time per request: 0.251 [ms] (mean, across all concurrent requests)
Transfer rate: 1202.53 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 6 4.9 5 14
Processing: 9 38 42.8 22 212
Waiting: 8 38 42.9 22 212
Total: 11 44 44.4 31 214

Percentage of the requests served within a certain time (ms)
50% 31
66% 37
75% 37
80% 38
90% 122
95% 135
98% 207
99% 211
100% 214 (longest request)


But still, why does name resolution fail at concurrency level 5 when localhost is not used as a proxy?




The virtual host configuration is very simple and clean; almost everything is at its default. I don't use iptables on this server, nor have I configured anything special.

server {
    listen 81 default_server;
    listen [::]:81 default_server;

    root /var/www/html;

    index index.html index.htm index.nginx-debian.html;

    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;
    }
}





I found something interesting: if you run another ab test within about 3 seconds of the first one, the second test is very quick.

Without using localhost as proxy

ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 2.8 seconds to finish.
ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 0.008 seconds only.


Using localhost as proxy

ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 0.006 seconds.
ab -n 10 -c 5 http://192.168.10.173/b.php <-- This takes 0.006 seconds.


I think this still suggests the issue is name resolution. But why?




Assumption: nginx is not listening on localhost:81

I tried adding

listen 127.0.0.1:81;

to the nginx config, but it had no effect.

Then I realized I had made some mistakes in how I was using the curl proxy; that approach does not actually work! I will update other details later.

Answer

OK, after so many days of trying to solve this issue, I finally found out why, and it's not name resolution. I can't believe it took this long to track down the root cause: the value of pm.start_servers in php-fpm's www.conf.

Initially I had set pm.start_servers to 3, which is why the ab test against localhost always got worse above concurrency level 3. Meanwhile, php-cli has no such limit on the number of PHP processes, so it always performed well. After increasing pm.start_servers to 5, the ab results became as fast as php-cli's. If this is why your php-fpm is slow, you should also consider adjusting pm.min_spare_servers and pm.max_spare_servers.
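For reference, these pool settings live in the FPM pool file (on Ubuntu with PHP 7.0, typically /etc/php/7.0/fpm/pool.d/www.conf). A fragment along these lines reflects the fix described above; the exact values are illustrative and should be sized to your workload:

```ini
; /etc/php/7.0/fpm/pool.d/www.conf (path may vary by distro)
pm = dynamic
; children created on startup -- raising this from 3 to 5 fixed the test above
pm.start_servers = 5
; keep enough idle children around to absorb bursts
pm.min_spare_servers = 5
pm.max_spare_servers = 10
; hard ceiling on concurrent PHP-FPM workers
pm.max_children = 20
```

Remember to reload the service (e.g. systemctl reload php7.0-fpm) after editing.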
