Nginx Forum - Nginx Mailing List - English
A portal to and from the mailing list. http://www.ldmicj.icu/list.php?2 Wed, 18 Nov 2020 04:26:07 -0500 Phorum 5.2.16
http://www.ldmicj.icu/read.php?2,289999,289999#msg-289999 nginx-quic http3 reverse proxy problem (no replies) http://www.ldmicj.icu/read.php?2,289999,289999#msg-289999
lujiangbin Nginx Mailing List - English Wed, 18 Nov 2020 01:37:29 -0500
http://www.ldmicj.icu/read.php?2,289987,289987#msg-289987 Some Questions about NGINX and F5 (2 replies) http://www.ldmicj.icu/read.php?2,289987,289987#msg-289987
2) What happened with Heylu? I am not even sure what “Heylu” was supposed to be. Any clarity on this?
3) I do not understand very well how NGINX and Docker work together – are they complementary? I have read a lot of material and watched videos, but I think I am missing something that a quick discussion would help me get. My lack of understanding also extends to the Kubernetes vs. Docker Swarm topic.
I know there is something really exciting and big about NGINX and its combination with F5, but I need some color to turn my imagination into reality.

Lee Haddad
leeahaddad Nginx Mailing List - English Tue, 17 Nov 2020 05:50:02 -0500
http://www.ldmicj.icu/read.php?2,289977,289977#msg-289977 Performance of Nginx as reverse proxy for Hasura - 7-50x slow (5 replies) http://www.ldmicj.icu/read.php?2,289977,289977#msg-289977
I have not noticed any performance issues with other REST APIs. Is it a known issue (perhaps due to the SSL handshake)? If so, are there any known solutions for it? TIA
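
If the suspected cost is the TLS/TCP handshake between nginx and the Hasura backend, one thing worth checking is upstream connection reuse. A minimal sketch, assuming a local backend on port 8080 (the addresses, hostnames and certificate paths below are placeholders, not taken from the original setup):

upstream hasura_backend {
    server 127.0.0.1:8080;               # placeholder backend
    keepalive 32;                        # keep idle connections open to the backend
}

server {
    listen 443 ssl;
    server_name api.example.com;                  # placeholder
    ssl_certificate     /etc/nginx/cert.pem;      # placeholder
    ssl_certificate_key /etc/nginx/key.pem;       # placeholder

    location / {
        proxy_pass http://hasura_backend;
        proxy_http_version 1.1;                   # required for upstream keepalive
        proxy_set_header Connection "";           # do not forward "Connection: close"
    }
}

If the backend itself is HTTPS, keepalive plus proxy_ssl_session_reuse (on by default) similarly avoids repeating the upstream handshake on every request.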
R. Rajesh Jeba Anbiah Nginx Mailing List - English Wed, 18 Nov 2020 00:44:04 -0500
http://www.ldmicj.icu/read.php?2,289976,289976#msg-289976 499 response code for NGINX Proxy App in Cloud Foundry (no replies) http://www.ldmicj.icu/read.php?2,289976,289976#msg-289976
I'm using nginx as the proxy application deployed in the Cloud Foundry space, which routes requests to upstream servers behind an Elastic Load Balancer.
I noticed that after 3-4 hours the proxy application in the Cloud Foundry space goes down and logs a 499 response code in the access logs.
If we restart the CF app, everything works fine for some time and then goes back to the error state. Can you please suggest what the possible reason for this might be?

Thanks,
praveen
praveen Nginx Mailing List - English Fri, 13 Nov 2020 02:48:18 -0500
http://www.ldmicj.icu/read.php?2,289975,289975#msg-289975 Hide HTTP headers in nginx (2 replies) http://www.ldmicj.icu/read.php?2,289975,289975#msg-289975
As part of the security audit, I have set server_tokens off;
in /etc/nginx/nginx.conf. Is there a way to hide Server: nginx,
X-Powered-By and X-Generator?

To hide the below HTTP headers

Server: nginx
> X-Powered-By: PHP/7.2.34
> X-Generator: Drupal 8 (https://www.drupal.org)


curl -i -H Host:_ https://mydomain.com

HTTP/1.1 200 OK
*Server: nginx*
Content-Type: text/html; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
*X-Powered-By: PHP/7.2.34*
Cache-Control: max-age=21600, public
Date: Fri, 13 Nov 2020 00:23:38 GMT
X-Drupal-Dynamic-Cache: MISS
Link: https://_/; rel="shortlink", https://_/; rel="canonical"
X-UA-Compatible: IE=edge
Content-language: en
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Last-Modified: Fri, 13 Nov 2020 00:23:37 GMT
ETag: "1605227017"
Vary: Cookie
*X-Generator: Drupal 8 (https://www.drupal.org https://www.drupal.org)*
X-XSS-Protection: 1; mode=block
X-Drupal-Cache: HIT
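
For reference, a minimal sketch of one way to drop those headers: server_tokens off only hides the version number, not the Server: nginx header itself, which normally needs the third-party headers-more module (assumed available below); the PHP/Drupal headers can be suppressed with fastcgi_hide_header:

server {
    server_tokens off;

    location ~ \.php$ {
        # ... existing fastcgi_* directives ...
        fastcgi_hide_header X-Powered-By;   # PHP version header
        fastcgi_hide_header X-Generator;    # Drupal generator header
    }

    # Only if the headers-more module is compiled in:
    # more_clear_headers Server;
}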

Best Regards,

Kaushal
kaushalshriyan Nginx Mailing List - English Fri, 13 Nov 2020 06:18:12 -0500
http://www.ldmicj.icu/read.php?2,289973,289973#msg-289973 [ANN] OpenResty 1.19.3.1 released (no replies) http://www.ldmicj.icu/read.php?2,289973,289973#msg-289973
I am happy to announce the new formal release, 1.19.3.1, of our
OpenResty web platform based on NGINX and LuaJIT.

The full announcement, download links, and change logs can be found below:

https://openresty.org/en/ann-1019003001.html

OpenResty is a high performance and dynamic web platform based on our
enhanced version of Nginx core, our enhanced version of LuaJIT, and
many powerful Nginx modules and Lua libraries. See OpenResty's
homepage for details:

https://openresty.org/en/

Enjoy!

Best,
Yichun

--
Yichun Zhang
Founder and CEO of OpenResty Inc.
https://openresty.com/
Yichun Zhang Nginx Mailing List - English Thu, 12 Nov 2020 18:40:07 -0500
http://www.ldmicj.icu/read.php?2,289967,289967#msg-289967 Forbid web.config page from the browser as in https://mydomain.com/web.config (2 replies) http://www.ldmicj.icu/read.php?2,289967,289967#msg-289967
I am running nginx version nginx/1.16.1 on CentOS Linux release 7.8.2003 (Core). I am trying to prevent the web.config file from being downloaded through the browser. When I hit https://mydomain.com/web.config it lets me download the file instead of forbidding the request (403 Forbidden). I am sharing the nginx.conf file below for your reference.

server {
> server_name _;
> root /var/www/html/apcv3/docroot; ## <-- Your only path reference.
> location /dacv3 {
> alias /var/www/html/apcv3/docroot;
> index index.php;
> location ~ \.php$ {
> include fastcgi_params;
> # Block httpoxy attacks. See https://httpoxy.org/.
> fastcgi_param HTTP_PROXY "";
> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
> fastcgi_param PATH_INFO $fastcgi_path_info;
> fastcgi_param QUERY_STRING $query_string;
> fastcgi_intercept_errors on;
> fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
> }
> }
> location = /favicon.ico {
> log_not_found off;
> access_log off;
> }
> location = /robots.txt {
> allow all;
> log_not_found off;
> access_log off;
> }
> # Very rarely should these ever be accessed outside of your lan
> location ~* \.(txt|log)$ {
> allow 192.168.0.0/16;
> deny all;
> }
> location ~ \..*/.*\.php$ {
> return 403;
> }
> location ~ ^/sites/.*/private/ {
> return 403;
> }
> # Block access to scripts in site files directory
> location ~ ^/sites/[^/]+/files/.*\.php$ {
> deny all;
> }
> # Allow "Well-Known URIs" as per RFC 5785
> location ~* ^/.well-known/ {
> allow all;
> }
> # Block access to "hidden" files and directories whose names begin
> with a
> # period. This includes directories used by version control systems
> such
> # as Subversion or Git to store control files.
> location ~ (^|/)\. {
> return 403;
> }
> location / {
> # try_files $uri @rewrite; # For Drupal <= 6
> try_files $uri /index.php?$query_string; # For Drupal >= 7
> }
> location @rewrite {
> rewrite ^/(.*)$ /index.php?q=$1;
> }
> # Don't allow direct access to PHP files in the vendor directory.
> location ~ /vendor/.*\.php$ {
> deny all;
> return 404;
> }
> # Protect files and directories from prying eyes.
> location ~*
> \.(engine|inc|install|make|module|profile|po|sh|.*sql|theme|twig|tpl(\.php)?|xtmpl|yml)(~|\.sw[op]|\.bak|\.orig|\.save)?$|^(\.(?!well-known).*|Entries.*|Repository|Root|Tag|Template|composer\.(json|lock)|web\.config)$|^#.*#$|\.php(~|\.sw[op]|\.bak|\.orig|\.save)$
> {
> deny all;
> return 404;
> }
> location ^~ /web.config {
> deny all;
> }
> # In Drupal 8, we must also match new paths where the '.php' appears in
> # the middle, such as update.php/selection. The rule we use is strict,
> # and only allows this pattern with the update.php front controller.
> # This allows legacy path aliases in the form of
> # blog/index.php/legacy-path to continue to route to Drupal nodes. If
> # you do not have any paths like that, then you might prefer to use a
> # laxer rule, such as:
> # location ~ \.php(/|$) {
> # The laxer rule will continue to work if Drupal uses this new URL
> # pattern with front controllers other than update.php in a future
> # release.
> location ~ '\.php$|^/update.php' {
> fastcgi_split_path_info ^(.+?\.php)(|/.*)$;
> # Security note: If you're running a version of PHP older than the
> # latest 5.3, you should have "cgi.fix_pathinfo = 0;" in php.ini.
> # See http://serverfault.com/q/627903/94922 for details.
> include fastcgi_params;
> # Block httpoxy attacks. See https://httpoxy.org/.
> fastcgi_param HTTP_PROXY "";
> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
> fastcgi_param PATH_INFO $fastcgi_path_info;
> fastcgi_param QUERY_STRING $query_string;
> fastcgi_intercept_errors on;
> # PHP 5 socket location.
> #fastcgi_pass unix:/var/run/php5-fpm.sock;
> # PHP 7 socket location.
> fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
> }
> # Fighting with Styles? This little gem is amazing.
> # location ~ ^/sites/.*/files/imagecache/ { # For Drupal <= 6
> location ~ ^/sites/.*/files/styles/ { # For Drupal >= 7
> try_files $uri @rewrite;
> }
> # Handle private files through Drupal. Private file's path can come
> # with a language prefix.
> location ~ ^(/[a-z\-]+)?/system/files/ { # For Drupal >= 7
> try_files $uri /index.php?$query_string;
> }
> location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg)$ {
> try_files $uri @rewrite;
> expires max;
> log_not_found off;
> }
> }


Please let me know if I am missing anything in the Nginx config file.
Thanks in advance and I look forward to hearing from you.
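
As a point of comparison, here is a minimal, isolated rule that should return 403 for any URI ending in /web.config, including a copy reached under the /dacv3 alias prefix (whether that is the failing URL is an assumption, it is not stated above). Remember to test and reload after the change:

# Matches /web.config as well as e.g. /dacv3/web.config (case-insensitive).
location ~* /web\.config$ {
    deny all;
}

# Then:
#   nginx -t && nginx -s reload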

Best Regards,

Kaushal
kaushalshriyan Nginx Mailing List - English Thu, 12 Nov 2020 19:24:03 -0500
http://www.ldmicj.icu/read.php?2,289945,289945#msg-289945 grpc-go client disconnect in 60 seconds (3 replies) http://www.ldmicj.icu/read.php?2,289945,289945#msg-289945
I am using the grpc-go client for a gRPC subscription, and it disconnects 60 seconds after subscribing.
I ran nginx in debug mode and found the following logs, where I see that a 408 error happened because client_body_timeout is at its default value of 60 seconds.
I am using nginx 1.16.1.

2020/11/10 10:57:05 [debug] 2670#0: *1 event timer del: 3: 12588176
2020/11/10 10:57:05 [debug] 2670#0: *1 http run request: "/gnmi.gNMI/Subscribe?"
2020/11/10 10:57:05 [debug] 2670#0: *1 http upstream read request handler
2020/11/10 10:57:05 [debug] 2670#0: *1 finalize http upstream request: 408
2020/11/10 10:57:05 [debug] 2670#0: *1 finalize grpc request
2020/11/10 10:57:05 [debug] 2670#0: *1 free rr peer 1 0
2020/11/10 10:57:05 [debug] 2670#0: *1 close http upstream connection: 25
2020/11/10 10:57:05 [debug] 2670#0: *1 run cleanup: 097FE508
2020/11/10 10:57:05 [debug] 2670#0: *1 free: 097FE4E0, unused: 60
2020/11/10 10:57:05 [debug] 2670#0: *1 event timer del: 25: 16184227
2020/11/10 10:57:05 [debug] 2670#0: *1 reusable connection: 0
2020/11/10 10:57:05 [debug] 2670#0: *1 http finalize request: 408, "/gnmi.gNMI/Subscribe?" a:1, c:1
2020/11/10 10:57:05 [debug] 2670#0: *1 http terminate request count:1
2020/11/10 10:57:05 [debug] 2670#0: *1 http terminate cleanup count:1 blk:0
2020/11/10 10:57:05 [debug] 2670#0: *1 http posted request: "/gnmi.gNMI/Subscribe?"
2020/11/10 10:57:05 [debug] 2670#0: *1 http terminate handler count:1
2020/11/10 10:57:05 [debug] 2670#0: *1 http request count:1 blk:0
2020/11/10 10:57:05 [debug] 2670#0: *1 http2 close stream 1, queued 0, processing 1, pushing 0
2020/11/10 10:57:05 [debug] 2670#0: *1 http2 send RST_STREAM frame sid:1, status:1
2020/11/10 10:57:05 [debug] 2670#0: *1 http close request
2020/11/10 10:57:05 [debug] 2670#0: *1 http log handler
2020/11/10 10:57:05 [debug] 2670#0: *1 run cleanup: 09852B5C
2020/11/10 10:57:05 [debug] 2670#0: *1 free: 09853E38
2020/11/10 10:57:05 [debug] 2670#0: *1 free: 098A3518
2020/11/10 10:57:05 [debug] 2670#0: *1 free: 09851E00, unused: 0
2020/11/10 10:57:05 [debug] 2670#0: *1 free: 09852E20, unused: 2045
2020/11/10 10:57:05 [debug] 2670#0: *1 free: 097FDF40, unused: 847
2020/11/10 10:57:05 [debug] 2670#0: *1 post event 097FA190
2020/11/10 10:57:05 [debug] 2670#0: *1 delete posted event 097FA190
2020/11/10 10:57:05 [debug] 2670#0: *1 http2 handle connection handler
2020/11/10 10:57:05 [debug] 2670#0: *1 http2 frame out: 09831A48 sid:0 bl:0 len:4
2020/11/10 10:57:05 [debug] 2670#0: *1 SSL buf copy: 13
2020/11/10 10:57:05 [debug] 2670#0: *1 SSL to write: 13
2020/11/10 10:57:05 [debug] 2670#0: *1 SSL_write: 13
2020/11/10 10:57:05 [debug] 2670#0: *1 http2 frame sent: 09831A48 sid:0 bl:0 len:4
2020/11/10 10:57:05 [debug] 2670#0: *1 free: 09831A20, unused: 3672
2020/11/10 10:57:05 [debug] 2670#0: *1 free: 098B3520
2020/11/10 10:57:05 [debug] 2670#0: *1 reusable connection: 1
2020/11/10 10:57:05 [debug] 2670#0: *1 event timer add: 3: 180000:12768251
2020/11/10 10:57:06 [debug] 2670#0: *1 http2 idle handler
2020/11/10 10:57:06 [debug] 2670#0: *1 reusable connection: 0
2020/11/10 10:57:06 [debug] 2670#0: *1 posix_memalign: 09831A20:4096 @16
2020/11/10 10:57:06 [debug] 2670#0: *1 http2 read handler
2020/11/10 10:57:06 [debug] 2670#0: *1 SSL_read: -1
2020/11/10 10:57:06 [debug] 2670#0: *1 SSL_get_error: 5
2020/11/10 10:57:06 [debug] 2670#0: *1 peer shutdown SSL cleanly
2020/11/10 10:57:06 [debug] 2670#0: *1 close http connection: 3
2020/11/10 10:57:06 [debug] 2670#0: *1 SSL_shutdown: 1
2020/11/10 10:57:06 [debug] 2670#0: *1 event timer del: 3: 12768251
2020/11/10 10:57:06 [debug] 2670#0: *1 reusable connection: 0
2020/11/10 10:57:06 [debug] 2670#0: *1 run cleanup: 097F5670
2020/11/10 10:57:06 [debug] 2670#0: *1 free: 09831A20, unused: 4056
2020/11/10 10:57:06 [debug] 2670#0: *1 free: 00000000
2020/11/10 10:57:06 [debug] 2670#0: *1 free: 097FCEF8
2020/11/10 10:57:06 [debug] 2670#0: *1 free: 097FE8B0
2020/11/10 10:57:06 [debug] 2670#0: *1 free: 098458E8
2020/11/10 10:57:06 [debug] 2670#0: *1 free: 097F55A0, unused: 4
2020/11/10 10:57:06 [debug] 2670#0: *1 free: 09845CE0, unused: 4
2020/11/10 10:57:06 [debug] 2670#0: *1 free: 09830E40, unused: 136



Following is my nginx conf file



http {
include mime.types;
default_type application/octet-stream;

#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';

#access_log /tmp/access.log main;
access_log /tmp/access.log;

sendfile on;
#tcp_nopush on;

keepalive_timeout 180;
proxy_read_timeout 120s;
proxy_send_timeout 120s;
client_body_timeout 360s;
limit_conn_zone localhost zone=servers:10m;

map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}

I tried increasing the client_body_timeout directive to 360 seconds, and then the grpc-go client disconnects after 360 seconds.
The same issue was not observed when I run the grpc-java or grpc-python client.

Can you please help me understand why this issue may happen?
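
Not a diagnosis, but for long-lived gRPC streams the nginx-side timeouts that usually need raising are the grpc_* ones, in addition to client_body_timeout; a hedged sketch with illustrative values (the upstream address is a placeholder):

location /gnmi.gNMI/ {
    grpc_pass grpc://127.0.0.1:57400;   # placeholder backend

    # Allow long idle gaps on the stream between nginx and the backend.
    grpc_read_timeout 1h;
    grpc_send_timeout 1h;

    # Time nginx waits for the client to send more of the (streaming) request body.
    client_body_timeout 1h;
}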

Thanks
aagrawal Nginx Mailing List - English Tue, 10 Nov 2020 09:32:06 -0500
http://www.ldmicj.icu/read.php?2,289942,289942#msg-289942 How to encrypt request body content by nginx plugin (no replies) http://www.ldmicj.icu/read.php?2,289942,289942#msg-289942
I learned that request body content and response body content can be encrypted through the encrypted-session-nginx-module (https://github.com/openresty/encrypted-session-nginx-module) in OpenResty.

But I need to do this with an nginx module, not OpenResty (old system; replacing nginx is not allowed).

I referred to nginx-http-concat (https://github.com/alibaba/nginx-http-concat); that approach can solve encrypting the response body content.

How can request body content be encrypted by an nginx module? In which HTTP phase can it be processed? Is it NGX_HTTP_CONTENT_PHASE? Are there any examples of such modules?

Thanks!
lilihongbeast@163.com Nginx Mailing List - English Tue, 10 Nov 2020 04:06:04 -0500
http://www.ldmicj.icu/read.php?2,289939,289939#msg-289939 How to detect rest of c->recv (no replies) http://www.ldmicj.icu/read.php?2,289939,289939#msg-289939
I have been working my way around the code base and developing some
modules purely for fun[0].

I am now building a core module that acts as an echo server[1]. This is
to learn how to actually work with the event loop and clients, etc.

One question I have is: How do I know if there is more data to be
recv()'d? Here is an example code taken from [1].

size = 10;

b = c->buffer;

if (b == NULL) {
b = ngx_create_temp_buf(c->pool, size);
if (b == NULL) {
ngx_echo_close_connection(c);
return;
}

c->buffer = b;
}

n = c->recv(c, b->last, size);

if (n == NGX_AGAIN) {

In this case, I'm using a small buffer for testing purpose. What is the
idiomatic way of knowing if there is more data coming in?

Also, what is the proper way of "waiting" again on the next received
data? I tried simply returning, but the read handler does not run again
(until the client inputs more data.)

Here's the scenario I am describing:

1. client sends "Hello, world!", which is 13 bytes.
2. read handler reads 10 bytes, stores that into another buffer
(ngx_str_t in this case)
3. here we want to run the read handler again, probably increase the
buffer size
4. echo back data to client

Thanks!

[0]: https://git.sr.ht/~tomleb/nginx-stream-upstream-time-module
[1]: https://git.sr.ht/~tomleb/nginx-echo-module
tomleb Nginx Mailing List - English Mon, 09 Nov 2020 22:10:14 -0500
http://www.ldmicj.icu/read.php?2,289926,289926#msg-289926 Matching of special characters in location (5 replies) http://www.ldmicj.icu/read.php?2,289926,289926#msg-289926
Is there any (sane) way to match things like: %e2%80%8b in URL in location?

Thank you in advance.

--
Grzegorz Kulewski

Grzegorz Kulewski Nginx Mailing List - English Tue, 10 Nov 2020 19:36:03 -0500
http://www.ldmicj.icu/read.php?2,289925,289925#msg-289925 FW: Using it as a proxy server (no replies) http://www.ldmicj.icu/read.php?2,289925,289925#msg-289925
Sathya Prasad H R Nginx Mailing List - English Mon, 09 Nov 2020 07:40:05 -0500
http://www.ldmicj.icu/read.php?2,289924,289924#msg-289924 Using Nginx as a proxy server (no replies) http://www.ldmicj.icu/read.php?2,289924,289924#msg-289924
Sathya Prasad H R Nginx Mailing List - English Mon, 09 Nov 2020 07:38:09 -0500
http://www.ldmicj.icu/read.php?2,289917,289917#msg-289917 HTTP/3 and php POST (2 replies) http://www.ldmicj.icu/read.php?2,289917,289917#msg-289917
I have found that https://hg.nginx.org/nginx-quic (current as of 06 Nov 2020) is having some trouble properly POSTing back to PayPal using PHP 7.3.24 on a Debian Buster box. Things work as expected using current mainline nginx or current quiche. I have verified that PageSpeed is not causing the problem.

The PayPal IPN PHP script is one of those things that has been working so long it has cobwebs on it. It gets a POST, adds a key/value to the payload and POSTs it back; it is instantaneous and thoughtless. The PHP script is getting a 200 return code from the return POST and everything seems great on my side, but PayPal is complaining about the return POST and they can't tell me why. I have enabled logging in the script and in PHP and don't see anything out of the ordinary. The only thing I see is that when I use Postman 7.34.0 I get a "Parse Error: Invalid character in chunk size" error, which I am having no luck tracking down information on.

I would like to generate some --debug logs for you but won't because of the sensitive PayPal customer payment information.

Any thoughts or suggestions?

nginx -V
nginx version: nginx/1.19.4
built by gcc 6.3.0 20170516 (Debian 6.3.0-18+deb9u1)
built with OpenSSL 1.1.1 (compatible; BoringSSL) (running with BoringSSL)
TLS SNI support enabled
configure arguments: --with-cc-opt=-I../boringssl/include --with-ld-opt='-L../boringssl/build/ssl -L../boringssl/build/crypto' --with-http_v3_module --with-http_quic_module --with-stream_quic_module --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --lock-path=/var/lock/nginx.lock --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --user=www-data --group=www-data --without-http_uwsgi_module --without-http_scgi_module --without-http_memcached_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --with-http_ssl_module --with-http_v2_module --with-http_stub_status_module --with-http_gzip_static_module --with-http_realip_module --with-file-aio --add-module=../../headers-more-nginx-module --add-module=../../pagespeed --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --with-openssl=../boringssl

as always, thank you for being awesome.
Ryan Gould Nginx Mailing List - English Thu, 12 Nov 2020 10:52:12 -0500
http://www.ldmicj.icu/read.php?2,289905,289905#msg-289905 Nginx Download MP3 206 Partial Content HTTP Response (10 replies) http://www.ldmicj.icu/read.php?2,289905,289905#msg-289905
I am successfully able to browse an MP3 website and play the MP3 streams without issue through Nginx (1.19.2).

However, when attempting to download an MP3 through Nginx, I'm receiving a 206 Partial Content HTTP Response:

192.168.0.154 - - [07/Nov/2020:10:25:22 +0000] "GET music.mp3 HTTP/1.1" 206 1982193 "http://domain.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36 Edg/86.0.622.38"

A Client-Side Packet Trace shows the 206 Partial Content HTTP Response with a RST from Nginx:

3390 42.119998 192.168.0.154 192.168.0.2 TCP 54 61978 → 80 [ACK] Seq=526 Ack=2293125 Win=1629440 Len=0
3391 42.120434 192.168.0.2 192.168.0.154 HTTP 347 HTTP/1.1 206 Partial Content (audio/mpeg)
3392 42.120449 192.168.0.154 192.168.0.2 TCP 54 61978 → 80 [ACK] Seq=526 Ack=2293418 Win=1629184 Len=0
4375 69.116574 192.168.0.154 192.168.0.2 TCP 54 [TCP Window Update] 61978 → 80 [ACK] Seq=526 Ack=2293418 Win=4219392 Len=0
4984 87.122995 192.168.0.154 192.168.0.2 TCP 55 [TCP Keep-Alive] 61978 → 80 [ACK] Seq=525 Ack=2293418 Win=4219392 Len=1
4985 87.123324 192.168.0.2 192.168.0.154 TCP 66 [TCP Keep-Alive ACK] 80 → 61978 [ACK] Seq=2293418 Ack=526 Win=6912 Len=0 SLE=525 SRE=526
5761 117.117822 192.168.0.2 192.168.0.154 TCP 60 80 → 61978 [FIN, ACK] Seq=2293418 Ack=526 Win=6912 Len=0
5762 117.117911 192.168.0.154 192.168.0.2 TCP 54 61978 → 80 [ACK] Seq=526 Ack=2293419 Win=4219392 Len=0
7291 162.122574 192.168.0.154 192.168.0.2 TCP 55 [TCP Keep-Alive] 61978 → 80 [ACK] Seq=525 Ack=2293419 Win=4219392 Len=1
7292 162.123048 192.168.0.2 192.168.0.154 TCP 60 [TCP Keep-Alive ACK] 80 → 61978 [ACK] Seq=2293419 Ack=526 Win=6912 Len=0
7591 173.888730 192.168.0.154 192.168.0.2 TCP 54 61978 → 80 [FIN, ACK] Seq=526 Ack=2293419 Win=4219392 Len=0
7594 173.889906 192.168.0.2 192.168.0.154 TCP 60 80 → 61978 [RST] Seq=2293419 Win=0 Len=0

I've tried several different browsers (i.e., Chrome, Edge, etc) with the same issue.

The download is successful when browsing directly and not using Nginx.

Any idea why the MP3 download is failing using Nginx?

Much Appreciated.


Gary
garycnew@yahoo.com Nginx Mailing List - English Mon, 09 Nov 2020 16:44:08 -0500
http://www.ldmicj.icu/read.php?2,289903,289903#msg-289903 nginx modules development with vscode (no replies) http://www.ldmicj.icu/read.php?2,289903,289903#msg-289903 paravz Nginx Mailing List - English Sat, 07 Nov 2020 01:42:36 -0500 http://www.ldmicj.icu/read.php?2,289881,289881#msg-289881 How do I call a subrequest on every request? (1 reply) http://www.ldmicj.icu/read.php?2,289881,289881#msg-289881 check to see if a user has been manually disabled before that expiration
time.

Is there a way to force auth_request to be called every time? Currently if
it is successful it doesn't hit that endpoint again.
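
By default the auth_request subrequest is issued for every request to the protected location; it only stops being hit when something caches the subrequest response. A minimal sketch of the usual pattern, with no caching in the auth location (addresses are placeholders):

location /protected/ {
    auth_request /_auth;
    # ... proxy_pass to the real application ...
}

location = /_auth {
    internal;
    proxy_pass http://127.0.0.1:9000/check;        # placeholder auth service
    proxy_pass_request_body off;                   # the check only needs headers
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
    # No proxy_cache here, so the check runs on each request.
}

If a proxy_cache (or an intermediate cache) sits in that path, removing it should restore the per-request check.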

-Jonathan
Jonathan Morrison Nginx Mailing List - English Fri, 06 Nov 2020 07:20:10 -0500
http://www.ldmicj.icu/read.php?2,289880,289880#msg-289880 SSL error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca:SSL alert (6 replies) http://www.ldmicj.icu/read.php?2,289880,289880#msg-289880

2020/11/05 19:55:21 [error] 6334#6334: *111317 SSL_do_handshake()
failed (SSL: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca:SSL alert n$


Here is the nginx configuration file:

server {
listen 443 ssl;
listen [::]:443 ssl;

ssl_certificate /home/ubuntu/appname.com.pem;
ssl_certificate_key /home/ubuntu/appname.com.key;

server_name appname.com;

ssl_protocols TLSv1.2;

set $target_server targetapp.com:443;

location /api/ {
rewrite ^/api(/.*) $1 break;
proxy_pass https://$target_server/$uri$is_args$args;
proxy_set_header X-Forwarded-Host $server_name;
proxy_set_header Host appname.com;
error_log /var/log/nginx/target_server.log debug;
proxy_set_header Accept-Encoding text/xml;
proxy_ssl_certificate /home/ubuntu/target_server_client.pem;
proxy_ssl_certificate_key /home/ubuntu/target_server_key.pem;
proxy_ssl_trusted_certificate /home/ubuntu/target_server_CA.pem;
proxy_ssl_verify off;
proxy_ssl_verify_depth 1;
proxy_ssl_server_name on;
}
}




I tried enabling and disabling both `proxy_ssl_server_name` and `proxy_ssl_verify`, but neither fixed the issue.

When I SSH into that server and try the curl command below, I get the expected correct response; it is only when I hit the endpoint from the browser that it fails:


curl -vv --cert target_server_client.pem --key target_server_key.pem --cacert target_server_CA.pem --url https://targetapp.com/api 2>&1|less



I'm not sure what the issue could be. I suspect that the nginx proxy is using the IP address instead of the hostname for the endpoint, and that is why it is giving an SSL verification issue, because it works properly with the curl command. I also tried enabling proxy_ssl_server_name, but that didn't help.
meniem Nginx Mailing List - English Mon, 09 Nov 2020 16:20:04 -0500
http://www.ldmicj.icu/read.php?2,289870,289870#msg-289870 upstream problem (no replies) http://www.ldmicj.icu/read.php?2,289870,289870#msg-289870
I'm trying to set up load balancing as follows (example):

upstream loadbalance {
server server:9091;
server server:9092;
server server:9093;
}

location / {
proxy_set_header Host $server_name;
proxy_request_buffering off;
proxy_buffering off;
proxy_redirect off;
proxy_read_timeout 30s;
proxy_connect_timeout 75s;
proxy_pass http://loadbalance;
}

When accessing http://nginxserver/webservice ...

it works perfectly if I use one of the webservices at a time (e.g. only port 9091, 9092 or 9093), but when I use the three together it works only intermittently ("sorry, the page you are looking for is currently unavailable...").
All the services themselves are OK (http://nginxserver:909x/webservice are running).
What am I doing wrong?

Rejaine Silveira Monteiro Nginx Mailing List - English Tue, 03 Nov 2020 09:04:06 -0500
http://www.ldmicj.icu/read.php?2,289868,289868#msg-289868 Nginx - Hide Proxy Server url + Header (1 reply) http://www.ldmicj.icu/read.php?2,289868,289868#msg-289868
I am very new to Nginx. For the past 2 days I have been learning Nginx from the open forum. I am not familiar with most of the terms and keywords; sorry for that.

I need your support on the case below.

I am using "www.ebay.com" as the upstream server. When I access nginx using my public IP from another machine on port 80, I can see the eBay welcome page. But when I try to visit any sub-page on eBay using a hyperlink listed on the eBay home page, it does not use my nginx IP; instead it uses the eBay hostname.

My expectation is that the browser should always show the nginx server IP instead of the upstream hostname.

Below is the config. Your help is really appreciated and would be a relief for my burden.

worker_processes 1;

events {
worker_connections 1024;
}

http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;

server {
listen 80;
server_name localhost;

location / {
proxy_pass https://www.ebay.com;
index index.html index.htm;
} # end location
} # end server
}
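
One common approach to this is rewriting the absolute links in the proxied HTML so they point back at the proxy rather than at the upstream hostname; a hedged sketch using the sub module (it has to be compiled in, and the exact strings to rewrite are an assumption):

location / {
    proxy_pass https://www.ebay.com;
    proxy_set_header Host www.ebay.com;
    proxy_set_header Accept-Encoding "";           # uncompressed HTML so sub_filter can edit it
    proxy_redirect https://www.ebay.com/ /;

    sub_filter_types text/html;
    sub_filter_once off;
    sub_filter 'https://www.ebay.com' '$scheme://$host';
    sub_filter 'http://www.ebay.com'  '$scheme://$host';
}

A large site like eBay spreads content over many hostnames and builds links in JavaScript, so this will only ever partially hide the upstream.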

--
Thanks,
Sathish Kumar Pannerselvam
Sathishkumar Pannerselvam Nginx Mailing List - English Tue, 03 Nov 2020 09:12:05 -0500
http://www.ldmicj.icu/read.php?2,289867,289867#msg-289867 $request_id version per subrequest (no replies) http://www.ldmicj.icu/read.php?2,289867,289867#msg-289867
Currently $request_id seems to be per main request, and any subrequests (e.g. SSI includes and other cases) reuse the same id.

Would it be possible to add a second variable with per subrequest version of $request_id?

Thank you in advance.

--
Grzegorz Kulewski
Grzegorz Kulewski Nginx Mailing List - English Tue, 03 Nov 2020 01:38:13 -0500
http://www.ldmicj.icu/read.php?2,289848,289848#msg-289848 ip_hash and multiple clients on the same host (1 reply) http://www.ldmicj.icu/read.php?2,289848,289848#msg-289848

upstream backend {
ip_hash;
server host1:8000;
server host2:8000;
}

The `ip_hash` directive is what I learned from the online docs, but I am wondering: does this mean that all requests from the same host will be routed to the same upstream server? I may have multiple clients running on the same host, and I would still like to distribute their requests to different upstream servers. If I use ip_hash, does that imply that two clients on the same host will be routed to the same server because they share the same IP? There seem to be more advanced configs to achieve sticky sessions, but those appear to require NGINX Plus, which I don't have... My intention is that even if two clients come from the same host they should probably go to different servers, but after they are assigned an upstream server they should stick to it. Thanks!
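
For what it's worth, open-source nginx also has a generic hash directive for upstreams, so the stickiness key does not have to be the client IP; a sketch assuming the clients can be told apart by something they send, such as a session cookie (the cookie name is a placeholder):

upstream backend {
    # Two clients on the same host hash independently as long as
    # they present different session cookies.
    hash $cookie_session_id consistent;
    server host1:8000;
    server host2:8000;
}

Requests that lack the cookie all hash to the same key, so this only helps if every client reliably sends one.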
bobfang_sqp Nginx Mailing List - English Fri, 30 Oct 2020 16:48:01 -0400
http://www.ldmicj.icu/read.php?2,289844,289844#msg-289844 Session ticket renewal regarding RFC 5077 TLS session resumption (1 reply) http://www.ldmicj.icu/read.php?2,289844,289844#msg-289844
I have a question on TLS session resumption with client-side session
tickets and its implementation in nginx.

RFC 5077, section 3.3, paragraph 2 reads:
If the server successfully verifies the client's ticket, then it MAY renew
the ticket by including a NewSessionTicket handshake message after the
ServerHello in the abbreviated handshake. The client should start using the
new ticket as soon as possible ...

Which seems very reasonable to me. That way the session could continue
without the need of a costly full handshake. It could continue virtually
forever, as long as the client resumes the session within the time window
configured by ssl_session_timeout.


However, it appears to me that nginx will not issue a new session ticket
proactively before ssl_session_timeout elapses.
So session resumption works fine within ssl_session_timeout and nginx
initiates a full handshake once the timeout has expired.

Searching the interwebs I found an old trac issue (
https://trac.nginx.org/nginx/ticket/120) including a patch, where it was
reported that clients do not seem to support this kind of behavior.
And then there is ticket 1892 (https://trac.nginx.org/nginx/ticket/1892)
which is about session ticket renewal on TLS 1.3 (in my case it is TLS 1.2)
but says that the setting ssl_session_ticket_key plays a role for this
topic.

So is my expectation and my understanding of RFC 5077 correct? And what is
the current implementation in nginx?


Best regards,
Robert
Robert Naundorf Nginx Mailing List - English Fri, 30 Oct 2020 17:56:06 -0400
http://www.ldmicj.icu/read.php?2,289842,289842#msg-289842 Nginx as a forward proxy (1 reply) http://www.ldmicj.icu/read.php?2,289842,289842#msg-289842


I am using nginx as a reverse proxy. All my requests go to nginx and then to the application server. This works well.

I now have a requirement where outbound requests from nginx need to go through an internet HTTPS proxy and then on to another service in AWS. The request flow is as follows:

Browser --> WAF --> Nginx --> corporate HTTPS proxy --> AWS S3 (S3 streaming URL).

My question is: is there a way nginx can proxy_pass a request to S3 via the proxy server? If yes, how?

Please share a config snippet if possible. Thanks in advance!



Regards,

Shankar Borate | CoFounder & CTO | +91-8975761692

Start a Chat with me instantly… workApps.com/110 https://www.workapps.com/110

---------

Enterprise Messaging Platform for Banks, Insurance, Financial Services, Securities and Mutual Funds





Anonymous User Nginx Mailing List - English Thu, 29 Oct 2020 19:34:11 -0400
http://www.ldmicj.icu/read.php?2,289841,289841#msg-289841 Query on nginx. conf file regarding redirection. (5 replies) http://www.ldmicj.icu/read.php?2,289841,289841#msg-289841
I have a specific query regarding the below /etc/nginx/nginx.conf file.

When I hit this URL http://219.11.134.114/test/_plugin/kibana/app/kibana on
the browser it does not get redirected to
https://vpc-lab-test-search-7hyay88a9kjuisl.eu-north-1.es.amazonaws.com/;

# TEST
server {
listen 81;
location /test {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
fastcgi_read_timeout 240;
proxy_pass
https://vpc-lab-test-search-7hyay88a9kjuisl.eu-north-1.es.amazonaws.com/;
}
error_page 404 /404.html;
location = /40x.html {
}

error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}

Similarly, when I hit this URL
http://219.11.134.114/prod/_plugin/kibana/app/kibana on the browser it does
not get redirected to
https://vpc-lab-prod-search-9aay182kkjoisl.eu-north-1.es.amazonaws.com/

# PROD
server {
listen 80;
location /prod {
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $http_host;
fastcgi_read_timeout 240;
proxy_pass
https://vpc-lab-prod-search-9aay182kkjoisl.eu-north-1.es.amazonaws.com/;;
}
error_page 404 /404.html;
location = /40x.html {
}

error_page 500 502 503 504 /50x.html;
location = /50x.html {
}
}
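
A side note on how the prefix is mapped, since both blocks rely on it: when proxy_pass carries a URI part (the trailing "/" here), nginx replaces the matched location prefix with that URI, so /test/_plugin/kibana/app/kibana should be forwarded to the upstream as /_plugin/kibana/app/kibana. Also note that the TEST server block listens on port 81, so the URL would need :81, and fastcgi_read_timeout has no effect on a proxy_pass location. A trimmed sketch of the TEST block with that mapping spelled out:

server {
    listen 81;

    # http://<host>:81/test/_plugin/kibana/app/kibana
    #   --> https://vpc-lab-test-search-.../_plugin/kibana/app/kibana
    location /test/ {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        proxy_read_timeout 240;    # proxy_* variant of the timeout
        proxy_pass https://vpc-lab-test-search-7hyay88a9kjuisl.eu-north-1.es.amazonaws.com/;
    }
}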

Any help will be highly appreciated. Thanks in Advance and I look forward
to hearing from you.

Best Regards,

Kaushal
kaushalshriyan Nginx Mailing List - English Sat, 31 Oct 2020 18:58:02 -0400
http://www.ldmicj.icu/read.php?2,289840,289840#msg-289840 Transient, Load Related Slow response_time / upstream_response_time vs App Server Reported Times (2 replies) http://www.ldmicj.icu/read.php?2,289840,289840#msg-289840
I am hoping someone on the community list can help steer me in the right
direction for troubleshooting the following scenario:

I am running a cluster of 4 virtualized nginx open source 1.16.0 servers
with 4 vCPU cores and 4 GB of RAM each. They serve HTTP (REST API) requests
to a pool of about 40 different upstream clusters, which range from 2 to 8
servers within each upstream definition. The upstream application servers
themselves have multiple workers per server.

I've recently started seeing an issue where the reported response_time and
typically the reported upstream_response_time the nginx access log are
drastically different from the reported response on the application servers
themselves. For example, on some requests the typical average response_time
would be around 5ms with an upstream_response_time of 4ms. During these
transient periods of high load (approximately 1200 -1400 rps), the reported
nginx response_time and upstream_response_time spike up to somewhere around
1 second, while the application logs on the upstream servers are still
reporting the same 4ms response time.

The upstream definitions are very simple and look like:
upstream rest-api-xyz {
least_conn;
server 10.1.1.33:8080 max_fails=3 fail_timeout=30; #
production-rest-api-xyz01
server 10.1.1.34:8080 max_fails=3 fail_timeout=30; #
production-rest-api-xyz02
}

One avenue that I've considered but does not seem to be the case from the
instrumentation on the app servers is that they're accepting the requests
and queueing them in a TCP socket locally. However, running a packet
capture on both the nginx server and the app server actually shows the http
request leaving nginx at the end of the time window. I have not looked at
this down to the TCP handshake to see if the actual negotiation is taking
an excessive amount of time. I can produce this queueing scenario
artificially, but it does not appear to be what's happening in my
production environment in the scenario described above.

Does anyone here have any experience sorting out something like this? The
upstream_connect_time is not part of the log currently, but if that number
was reporting high, I'm not entirely sure what would cause that. Similarly,
if the upstream_connect_time does not account for most of the delay, is
there anything else I should be looking at?
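
Since $upstream_connect_time is not in the log yet, one low-effort first step is to extend the access log with the per-phase upstream timings so the slow component can be isolated; a sketch (the format name is arbitrary):

log_format upstream_timing '$remote_addr [$time_local] "$request" $status '
                           'rt=$request_time uct=$upstream_connect_time '
                           'uht=$upstream_header_time urt=$upstream_response_time '
                           'upstream=$upstream_addr';

access_log /var/log/nginx/access.log upstream_timing;

Roughly: a high uct points at connection setup (listen backlog, SYN drops, ephemeral port exhaustion), while a low uct with a high uht points at time to the first byte of the upstream response.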

Thanks
Jordan
Jordan von Kluck Nginx Mailing List - English Thu, 05 Nov 2020 21:08:06 -0500
http://www.ldmicj.icu/read.php?2,289823,289823#msg-289823 Nginx proxy_bind failing (4 replies) http://www.ldmicj.icu/read.php?2,289823,289823#msg-289823
I'm attempting to configure nginx to reverse proxy requests from the same internal host address it listens on (192.168.0.2:443), using a separate source port (192.168.0.2:12345), via the listen and proxy_bind directives.

# /opt/sbin/nginx -v
nginx version: nginx/1.19.2 (x86_64-pc-linux-gnu)

# cat nginx.conf
user admin root;
#user nobody;
worker_processes 1;

events {
worker_connections 64;
}

http {
# HTTPS server

server {
listen 192.168.0.2:443 ssl;
server_name z1.fm;

ssl_certificate /etc/cert.pem;
ssl_certificate_key /etc/key.pem;

proxy_ssl_server_name on;

ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;

ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;

location / {
# root html;
# index index.html index.htm;
resolver 103.86.99.100;
# proxy_bind 192.168.0.2:12345;
proxy_bind $server_addr:12345;
# proxy_bind $remote_addr:12345 transparent;
proxy_pass $scheme://$host;
}
}
}

I've tried changing the "user admin root;" which is the root user for this router. I've tried using different combinations of "proxy_bind 192.168.0.2;", "proxy_bind 192.168.0.2 transparent;", "proxy_bind $server_addr;", and "proxy_bind $server_addr transparent;". None of them appear to work, when validating with tcpdump. nginx always uses the External WAN Address (100.64.8.236).

Ifconfig Output:

# ifconfig
br0 Link encap:Ethernet HWaddr C0:56:27:D1:B8:A4
inet addr:192.168.0.1 Bcast:192.168.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING ALLMULTI MULTICAST MTU:1500 Metric:1
RX packets:10243803 errors:0 dropped:0 overruns:0 frame:0
TX packets:5440860 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:14614392834 (13.6 GiB) TX bytes:860977246 (821.0 MiB)

br0:0 Link encap:Ethernet HWaddr C0:56:27:D1:B8:A4
inet addr:192.168.0.2 Bcast:192.168.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING ALLMULTI MULTICAST MTU:1500 Metric:1

vlan2 Link encap:Ethernet HWaddr C0:56:27:D1:B8:A4
inet addr:100.64.8.236 Bcast:100.64.15.255 Mask:255.255.248.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1757588 errors:0 dropped:0 overruns:0 frame:0
TX packets:613625 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2267961441 (2.1 GiB) TX bytes:139435610 (132.9 MiB)

Route Output:

# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
10.10.0.17 * 255.255.255.255 UH 0 0 0 tun12
89.38.98.142 100.64.8.1 255.255.255.255 UGH 0 0 0 vlan2
100.64.8.1 * 255.255.255.255 UH 0 0 0 vlan2
10.15.0.65 * 255.255.255.255 UH 0 0 0 tun11
192.168.2.1 * 255.255.255.255 UH 0 0 0 vlan3
51.68.180.4 100.64.8.1 255.255.255.255 UGH 0 0 0 vlan2
192.168.2.0 * 255.255.255.0 U 0 0 0 vlan3
192.168.0.0 * 255.255.255.0 U 0 0 0 br0
100.64.8.0 * 255.255.248.0 U 0 0 0 vlan2
127.0.0.0 * 255.0.0.0 U 0 0 0 lo
default 100.64.8.1 0.0.0.0 UG 0 0 0 vlan2

Tcpdump Output:

Client Remote_Addr (192.168.0.154:$port) == Request => Nginx Reverse Proxy Server - Listener (192.168.0.2:443)

07:19:06.840468 In c8:1f:66:13:a1:11 ethertype IPv4 (0x0800), length 62: 192.168.0.154.55138 > 192.168.0.2.443: Flags [.], ack 1582, win 8212, length 0
07:19:06.840468 In c8:1f:66:13:a1:11 ethertype IPv4 (0x0800), length 62: 192.168.0.154.55138 > 192.168.0.2.443: Flags [.], ack 1582, win 8212, length 0

Nginx Reverse Proxy Server - Listener (192.168.0.2:443) == Response => Client Remote_Addr (192.168.0.154:$port)

07:19:06.841377 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 56: 192.168.0.2.443 > 192.168.0.154.55138: Flags [.], ack 1475, win 541, length 0
07:19:06.841411 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 56: 192.168.0.2.443 > 192.168.0.154.55138: Flags [.], ack 1475, win 541, length 0

Nginx Reverse Proxy Server - Sender (100.64.8.236:12345) == Request => Upstream Desination Server - Listener (104.27.161.206:443)

07:19:11.885314 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 76: 100.64.8.236.12345 > 104.27.161.206.443: Flags [S], seq 3472185855, win 5840, options [mss 1460,sackOK,TS val 331214 ecr 0,nop,wscale 4], length 0

Upstream Desination Server - Listener (104.27.161.206:443) == Response => Nginx Reverse Proxy Server - Sender (100.64.8.236:12345)

07:19:11.887683 In 02:1f:a0:00:00:09 ethertype IPv4 (0x0800), length 68: 104.27.161.206.443 > 100.64.8.236.12345: Flags [S.], seq 2113436779, ack 3472185856, win 65535, options [mss 1400,nop,nop,sackOK,nop,wscale 10], length 0

Note: The Nginx Reverse Proxy Server (Listener) and Nginx Reverse Proxy Server (Sender) MAC addresses are the same piece of hardware

07:19:06.840468 In c8:1f:66:13:a1:11 ethertype IPv4 (0x0800), length 62: 192.168.0.154.55138 > 192.168.0.2.443: Flags [.], ack 1582, win 8212, length 0
07:19:06.840468 In c8:1f:66:13:a1:11 ethertype IPv4 (0x0800), length 62: 192.168.0.154.55138 > 192.168.0.2.443: Flags [.], ack 1582, win 8212, length 0
07:19:06.841377 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 56: 192.168.0.2.443 > 192.168.0.154.55138: Flags [.], ack 1475, win 541, length 0
07:19:06.841411 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 56: 192.168.0.2.443 > 192.168.0.154.55138: Flags [.], ack 1475, win 541, length 0
07:19:11.885314 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 76: 100.64.8.236.12345 > 104.27.161.206.443: Flags [S], seq 3472185855, win 5840, options [mss 1460,sackOK,TS val 331214 ecr 0,nop,wscale 4], length 0
07:19:11.887683 In 02:1f:a0:00:00:09 ethertype IPv4 (0x0800), length 68: 104.27.161.206.443 > 100.64.8.236.12345: Flags [S.], seq 2113436779, ack 3472185856, win 65535, options [mss 1400,nop,nop,sackOK,nop,wscale 10], length 0
07:19:11.887948 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 56: 100.64.8.236.12345 > 104.27.161.206.443: Flags [.], ack 1, win 365, length 0
07:19:11.888854 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 264: 100.64.8.236.12345 > 104.27.161.206.443: Flags [P.], seq 1:209, ack 1, win 365, length 208
07:19:11.890844 In 02:1f:a0:00:00:09 ethertype IPv4 (0x0800), length 62: 104.27.161.206.443 > 100.64.8.236.12345: Flags [.], ack 209, win 66, length 0
07:19:11.893154 In 02:1f:a0:00:00:09 ethertype IPv4 (0x0800), length 1516: 104.27.161.206.443 > 100.64.8.236.12345: Flags [.], seq 1:1461, ack 209, win 66, length 1460
07:19:11.893316 Out c0:56:27:d1:b8:a4 ethertype IPv4 (0x0800), length 56: 100.64.8.236.12345 > 104.27.161.206.443: Flags [.], ack 1461, win 548, length 0
07:19:11.893161 In 02:1f:a0:00:00:09 ethertype IPv4 (0x0800), length 1000: 104.27.161.206.443 > 100.64.8.236.12345: Flags [P.], seq 1461:2405, ack 209, win 66, length 944

Iptables Output:

# iptables -t mangle -I PREROUTING -i vlan2 -p tcp -m multiport --dport 12345 -j MARK --set-mark 0x2000/0x2000
# iptables -t mangle -I POSTROUTING -o vlan2 -p tcp -m multiport --sport 12345 -j MARK --set-mark 0x8000/0x8000

Note: Packets are matching and being marked, but not being routed to the appropriate interfaces. I'm thinking it may be too late in the pipe.

# iptables -t mangle -L -v -n
Chain PREROUTING (policy ACCEPT 5506K packets, 8051M bytes)
pkts bytes target prot opt in out source destination
33 15329 MARK tcp -- vlan2 * 0.0.0.0/0 0.0.0.0/0 multiport dports 12345 MARK or 0x2000

Chain POSTROUTING (policy ACCEPT 2832K packets, 171M bytes)
pkts bytes target prot opt in out source destination
30 4548 MARK tcp -- * vlan2 0.0.0.0/0 0.0.0.0/0 multiport sports 12345 MARK or 0x8000

The reverse proxied requests make it to the destination and back, but using the External WAN Address (100.64.8.236:12345) and not the Internal Host Address (192.168.0.2:12345).

The proxy_bind directive just seems to be failing.

Any ideas?
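
One note on the fwmark side: a mangle-table MARK by itself does not change routing; it only takes effect if a policy-routing rule references the mark. For the commented-out transparent variant of proxy_bind, the usual companion rules look roughly like this (the table number is arbitrary, and this does not address the non-transparent source-IP case):

ip rule add fwmark 0x2000 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100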

Thanks!


Gary
garycnew@yahoo.com Nginx Mailing List - English Sat, 31 Oct 2020 08:16:41 -0400
http://www.ldmicj.icu/read.php?2,289820,289820#msg-289820 Re: nginx Digest, Vol 132, Issue 24 (1 reply) http://www.ldmicj.icu/read.php?2,289820,289820#msg-289820 and where i can get license..

On Wed, 28 Oct 2020, 3:00 pm , <nginx-request@nginx.org> wrote:

> Message: 1
> Date: Tue, 27 Oct 2020 18:26:06 +0300
> From: Maxim Dounin <mdounin@mdounin.ru>
> To: nginx@nginx.org
> Subject: nginx-1.19.4
> Message-ID: <20201027152606.GD50919@mdounin.ru>
> Content-Type: text/plain; charset=us-ascii
>
> Changes with nginx 1.19.4 27 Oct
> 2020
>
> *) Feature: the "ssl_conf_command", "proxy_ssl_conf_command",
> "grpc_ssl_conf_command", and "uwsgi_ssl_conf_command" directives.
>
> *) Feature: the "ssl_reject_handshake" directive.
>
> *) Feature: the "proxy_smtp_auth" directive in mail proxy.
>
>
> --
> Maxim Dounin
> http://nginx.org/
>
Hiwa Nginx Mailing List - English Wed, 28 Oct 2020 21:30:01 -0400
http://www.ldmicj.icu/read.php?2,289816,289816#msg-289816 Nginx logging phase (3 replies) http://www.ldmicj.icu/read.php?2,289816,289816#msg-289816 counter when a request is received (in ACCESS phase) and decrements the
counter when request is processed (in LOG phase) in order to keep track of
in-flight requests.

I've seen some cases where the counter increments but does not decrement
and reaches a very high value, but can't reproduce. The core of my logic
depends on the accurate value of the in-flight requests counter.

I wanted to ask if there are any cases where, for a request, ACCESS phase
is called and LOG phase is not called.
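
For reference, a minimal sketch of that counter with a shared dict (dict and key names are placeholders; the three-argument incr assumes a reasonably recent lua-nginx-module). One commonly cited source of imbalance is internal redirects (error_page, try_files), which can re-run the ACCESS phase while the LOG phase still runs only once per request, so the increment may need to be guarded against running twice:

http {
    lua_shared_dict inflight 1m;

    server {
        location / {
            access_by_lua_block {
                -- third argument initialises the key if it does not exist yet
                ngx.shared.inflight:incr("count", 1, 0)
            }
            log_by_lua_block {
                ngx.shared.inflight:incr("count", -1, 0)
            }
            # ... proxy_pass / content handler ...
        }
    }
}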

I can paste the relevant code if required.

Thanks.
Vikas Kumar Nginx Mailing List - English Thu, 29 Oct 2020 11:52:13 -0400
http://www.ldmicj.icu/read.php?2,289813,289813#msg-289813 upstream SSL certificate does not match "x.x.x.x" (2 replies) http://www.ldmicj.icu/read.php?2,289813,289813#msg-289813
I have a configuration on an nginx proxy server "NGINX_SERVER" as follows:
listen 443 ssl default_server;

chunked_transfer_encoding on;

ssl_certificate server.crt;
ssl_certificate_key private_key_server.pem;
ssl_client_certificate trustedCA.crt;
#ssl_verify_depth 7;
ssl_verify_client optional_no_ca;

location / {
proxy_http_version 1.1;
resolver 127.0.0.11;
proxy_ssl_trusted_certificate trustedCA.crt;
proxy_ssl_verify_depth 7;
proxy_ssl_verify on;
proxy_pass https://13.78.229.75:443;
}

The server "13.78.229.75" has a server certificate generate for an IP. When I do
curl --cacert trustedCA.crt https://13.78.229.75:443 -v
from "NGINX_SERVER", everything works fine. So the server certificate from "13.78.229.75" should be good.
Additionnally openssl s_client -connect 13.78.229.75:443 -showcerts -verify 9 -CAfile trustedCA.crt is good too.

However when I try to curl my "NGINX_SERVER":
curl https://"NGINX_SERVER
I get:
*110 upstream SSL certificate does not match "13.78.229.75" while SSL handshaking to upstream, client: 13.78.128.54, server: , request:

Looking at the server certificate, everything looks ok:
Subject: CN = 13.78.229.75
X509v3 Subject Alternative Name:
IP Address:13.78.229.75, DNS:iotedgeapiproxy

I am at a loss. How can curl/openssl tell me my server cert is valid while nginx tells me it is wrong? What am I doing wrong?
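
One hedged observation: with proxy_pass pointing at the bare IP, nginx verifies the upstream certificate against that IP string, and its name check is geared toward DNS names rather than IP SANs; it may be worth testing verification against the DNS entry in the SAN instead (iotedgeapiproxy, taken from the certificate output above):

location / {
    proxy_http_version 1.1;
    resolver 127.0.0.11;
    proxy_ssl_trusted_certificate trustedCA.crt;
    proxy_ssl_verify on;
    proxy_ssl_verify_depth 7;

    # Verify (and, with proxy_ssl_server_name on, send as SNI) the DNS name
    # from the certificate instead of the literal IP in proxy_pass.
    proxy_ssl_name iotedgeapiproxy;
    proxy_ssl_server_name on;

    proxy_pass https://13.78.229.75:443;
}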
Thank you!
Hugues
bouvierh Nginx Mailing List - English Thu, 29 Oct 2020 15:03:23 -0400