Channel: Nginx Forum - Other discussion
Viewing all 972 articles

No shared cipher

Not sure whether this is more of an OpenSSL/TLS issue than an nginx question...
For some time I've been observing

SSL_do_handshake() failed (SSL: error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher) while SSL handshaking

in error.log while having

ssl_protocols SSLv2 SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ALL:!aNULL;

in configuration.

Examining Client Hello packet reveals client supported ciphers:
Cipher Suites (9 suites)
Cipher Suite: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (0xcca8)
Cipher Suite: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (0xcc13)
Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (0xc02f)
Cipher Suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (0xc014)
Cipher Suite: TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (0xc013)
Cipher Suite: TLS_RSA_WITH_AES_128_GCM_SHA256 (0x009c)
Cipher Suite: TLS_RSA_WITH_AES_256_CBC_SHA (0x0035)
Cipher Suite: TLS_RSA_WITH_AES_128_CBC_SHA (0x002f)
Cipher Suite: TLS_RSA_WITH_3DES_EDE_CBC_SHA (0x000a)

I'm using
nginx version: nginx/1.12.1
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC)
built with OpenSSL 1.0.2k-fips 26 Jan 2017
TLS SNI support enabled

According to 'openssl ciphers', the third cipher on the list is supported, and yet the server responds with:
TLSv1.2 Record Layer: Alert (Level: Fatal, Description: Handshake Failure)
Content Type: Alert (21)
Version: TLS 1.2 (0x0303)
Length: 2
Alert Message
Level: Fatal (2)
Description: Handshake Failure (40)

Either I've messed up my investigation or I'm completely misunderstanding something here.
Why, despite having a cipher in common with the client, does the server refuse to complete the handshake?
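
Every suite in the Client Hello above is RSA-authenticated, so "no shared cipher" despite a matching suite name usually points at the certificate key type rather than the cipher list. A sketch of the kind of server block that would satisfy this client (paths are hypothetical):

```nginx
server {
    listen 443 ssl;

    # All nine offered suites are *_RSA_*: the server certificate must
    # carry an RSA key. An ECDSA or DSA certificate can produce exactly
    # this "no shared cipher" alert even though the names seem to match.
    ssl_certificate     /etc/nginx/ssl/server-rsa.crt;  # hypothetical path
    ssl_certificate_key /etc/nginx/ssl/server-rsa.key;  # hypothetical path

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ALL:!aNULL;
}
```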

Does NGINX as an API gateway support websocket connections from API clients?

Hi,

I have some services exposed by a backend over WebSocket. If I put an API gateway in front of the backend, my clients would connect to the gateway over WebSocket, and the gateway would act as a proxy for the clients, carrying each request forward to the backend over WebSocket. Is this supported/possible with NGINX as the API gateway?
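
As a sketch of what that proxying involves (names and addresses are hypothetical): nginx can carry WebSocket traffic as long as the HTTP/1.1 `Upgrade` handshake is forwarded explicitly:

```nginx
# Sketch: WebSocket-capable reverse proxy (backend address is hypothetical).
upstream ws_backend {
    server 10.0.0.5:8080;
}

server {
    listen 80;
    server_name gateway.example.com;  # hypothetical

    location /ws/ {
        proxy_pass http://ws_backend;
        proxy_http_version 1.1;                  # Upgrade needs HTTP/1.1
        proxy_set_header Upgrade $http_upgrade;  # forward the handshake
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 1h;                   # keep idle sockets alive
    }
}
```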

NGINX: browser downloads the file when the .php extension is removed from URLs site-wide

Hello World,

I'm not a native English speaker, so sorry if my translation isn't perfect :D

I'm facing a small but quite annoying problem. I built a website hosted on a Debian server running nginx; I have URL rewriting in place and PHP works fine (php5-fpm).

My problem is that when I strip the .php suffix from the URL of any page on the site, the browser downloads the page as an extension-less file instead of rendering it. This is very annoying!!

Do you have a solution?
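
In case it helps, a minimal sketch of the kind of configuration that maps extension-less URLs back to their .php files (assuming php5-fpm on a Unix socket; the socket path is an assumption):

```nginx
# Sketch: rewrite /page to /page.php instead of serving the raw file.
location / {
    try_files $uri $uri/ @extensionless;
}

location @extensionless {
    rewrite ^(.*)$ $1.php last;
}

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php5-fpm.sock;  # assumed socket path
}
```

The download symptom typically means the request never matched the PHP location, so the file was served with a generic MIME type.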

Thanks!

HTTP 404 error for media links on Nginx

I need to serve media (image & video) links from an external CDN through my VPS, which runs the nginx web server. A summary of my nginx.conf:

upstream video_balancer {
    server res.cloudinary.com;
}

server {
    listen 80;
    server_name video.XXXXX.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    ssl on;
    server_name video.XXXXX.com;
    root /var/www/video/html/;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
    ssl_certificate /etc/nginx/ssl/XXXXX.crt;
    ssl_certificate_key /etc/nginx/ssl/XXXXX.key;
    ssl_session_cache shared:SSL:1m;
    ssl_prefer_server_ciphers on;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_verify_client off;

    location / {
        try_files $uri $uri/ 404;
        proxy_method GET;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port 443;
        proxy_set_header X-Forwarded-Ssl on;
        proxy_set_header X-Forwarded-HTTPS "on";
        proxy_ssl_session_reuse off;
        proxy_http_version 1.1;
        proxy_pass http://video_balancer$request_uri;
    }
}


When I try sample below link, I got 404 Not Found error:

https://video.XXXXX.com/mediaclub/video/upload/v1527154870/news.mp4

and related access logs:

X.X.X.X - - [24/May/2018:14:53:42 +0430] "GET /mediaclub/video/upload/v1527154870/news.mp4 HTTP/1.1" 404 200 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36" "-"


How can I resolve the issue?
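
One detail worth noting in the config above: `try_files $uri $uri/ 404;` consults the local `root` first, and when the file is absent on disk the request falls through to the `404` fallback instead of ever reaching `proxy_pass`. A sketch of the location without the local-disk check (and with the Host header the CDN vhost expects):

```nginx
# Sketch: always proxy to the upstream instead of checking local files.
location / {
    proxy_http_version 1.1;
    proxy_set_header Host res.cloudinary.com;  # the upstream's own vhost
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://video_balancer$request_uri;
}
```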

nginx https reverse proxy with client certificate

All,

I am trying to use NGINX as a reverse proxy for HTTPS backend servers:


Client <-------> NGINX <-------> backend

The NGINX proxy accepts only SSL connections on port 443.

Proxy's NGINX conf:

http {
    server {
        listen 443;
        ssl on;
        include snippets/self-signed.conf;
        include snippets/ssl-params.conf;

        # client certificate
        ssl_client_certificate /etc/nginx/client_certs/ca.crt;
        ssl_verify_client optional;

        location / {
            if ($ssl_client_verify != SUCCESS) {
                return 403;
            }
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            #proxy_set_header Host $http_host;
            add_header Front-End-Https on;

            if ($host = 'secure_backend') {
                proxy_pass https://https_backend:443;
            }

            if ($host = 'backend') {
                proxy_pass http://http_backend;
            }

            proxy_redirect off;
            proxy_ssl_verify off;
            add_header Front-End-Https on;
            proxy_cache off;

            proxy_http_version 1.1;
            proxy_read_timeout 90;
        } # /location /
    } # /server
} # /http

I can successfully access:

http://backend (client authenticated with the proxy and passed over http to backend)

http://secure_backend (client authenticated with the proxy and passed over https to https_backend)


However, I am unable to access:

https://secure_backend

access log:
CONNECThttps_backend:443 HTTP/1.1" 400 182 "-" "-"

error log:
2018/06/03 18:32:22 [warn] 754#754: "ssl_stapling" ignored, issuer certificate not found
2018/06/03 18:32:27 [warn] 920#920: "ssl_stapling" ignored, issuer certificate not found
2018/06/03 18:32:27 [debug] 923#923: epoll add event: fd:8 op:1 ev:00002001
2018/06/03 18:32:27 [debug] 923#923: epoll add event: fd:10 op:1 ev:00002001
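
For context, the `CONNECT https_backend:443` request in the access log is what a browser sends when it treats the server as a *forward* proxy (one configured in the browser's proxy settings), and nginx does not implement CONNECT, hence the 400. A reverse proxy is addressed directly by host name instead, typically with one server block per backend rather than `if ($host ...)` inside a location. A sketch reusing the names from the post:

```nginx
# Sketch: per-hostname server blocks instead of if($host) inside location.
server {
    listen 443 ssl;
    server_name secure_backend;
    # ssl_certificate / ssl_client_certificate directives as in the post

    location / {
        proxy_pass https://https_backend:443;
    }
}

server {
    listen 443 ssl;
    server_name backend;

    location / {
        proxy_pass http://http_backend;
    }
}
```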


Any ideas are appreciated.

BR
Itamar

nginx reverse proxy to access LXD containers?

So, I have a VPS of Ubuntu 16.04.4 with 4 LXD containers.

Each container has to be accessed over the internet. How can the reverse proxy help me?

So if my VPS is accessed as http://www.myserver.com, ideally I'd like to be able to use this format for accessing the internal LXD containers:

http://www.myserver.com/joe to access container 1
http://www.myserver.com/frank to access container 2
http://www.myserver.com/pete to access container 3
http://www.myserver.com/bill to access container 4

Is this possible with nginx as a reverse proxy?
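
A sketch of what that could look like (container addresses are hypothetical, assuming each container serves HTTP on port 80):

```nginx
server {
    listen 80;
    server_name www.myserver.com;

    # One location per container; the trailing slash on proxy_pass makes
    # nginx strip the /name prefix before forwarding.
    location /joe/   { proxy_pass http://10.0.3.11/; }  # container 1 (hypothetical IP)
    location /frank/ { proxy_pass http://10.0.3.12/; }  # container 2
    location /pete/  { proxy_pass http://10.0.3.13/; }  # container 3
    location /bill/  { proxy_pass http://10.0.3.14/; }  # container 4
}
```

One caveat: apps that emit absolute links (e.g. /css/style.css) won't know about the prefix, so per-container subdomains are often easier than per-container paths.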

Thanks,

Ray

Re: nginx reverse proxy to access LXD containers?

Here is a little more detail in the attached jpg.

4 LXD containers were created on the Ubuntu 16.04 VPS.

So I want to be able to access a container based on the incoming query to the VPS.

If incoming query is http://x.x.x.x/LPC1 nginx should route to container 1. If LPC2, container 2, etc.

The containers are all app servers, no websites.

And I am not sure I formatted the incoming query (http://x.x.x.x/LPC1) correctly.

Thanks,

Ray

RTMP Module - [alert] 6031#0 too big RTMP chunk size:134217728

First question: are these alert #'s (6031) defined anywhere? I could not find it.

I am using OBS to send a stream to the rtmp server. My OBS stream setting is: rtmp://192.168.0.47:1935/LPC1

I want the rtmp server to relay/redirect the stream to the real media server
at 10.57.215.214, but I am getting this error every 3 seconds in the nginx error.log:

2018/06/07 19:14:20 [alert] 6031#0: *159 too big RTMP chunk size:134217728, client: 10.57.215.214:1935/live, server: ngx-relay


rtmp {
    server {
        listen 1935;
        chunk_size 4096;

        application live {
            live on;
            record off;
        }

        application vod {
            play /var/www/html/media/vod;
        }

        application LPC1 {
            live on;
            push rtmp://10.57.215.214:1935/live;
            sync 300ms;
        }

        application vod_rpt {
            play_local_path /var/www/html/media/vod_rpt_cache;
            play /var/www/html/media/vod_rpt_cache http://192.168.0.4/media/vod;
            sync 300ms;
        }
    }
}

Any idea what could be wrong?

Thanks,

Ray

Are Multiple servers of the same type allowed in nginx.conf?

I am wondering if it is legal to have multiple servers of the same type. In my case, I need multiple rtmp servers
that serve specific users. Rather than having multiple physical machines, I am thinking LXD containers would be nice to use; each container has a media server.

I am using OBS on my PC (192.168.0.5) to stream rtmp. I am using a VirtualBox PC VM (192.168.0.47, Ubuntu 16.04) to
host the two containers, which are also Ubuntu 16.04.

So, initially I tried with only the push directive. That did not work, as I was getting chunk-size failures all over the place.
Apparently the rtmp module is no longer maintained? Arun does not respond to my emails. This is a bummer!

But anyway, I'm thinking I can configure the OBS stream settings as such: rtmp://192.168.0.47:1935/LPC1,
or rtmp://192.168.0.47:1936/LPC2 and the rtmp will be routed to the proper container using iptables.

So now I am using iptables to forward incoming rtmp to specific containers. Here are the iptables commands:

iptables -t nat -A PREROUTING -p tcp -i enp0s3 --dport 1935 -j DNAT --to-destination 10.57.215.214:1935
iptables -t nat -A PREROUTING -p tcp -i enp0s3 --dport 1936 -j DNAT --to-destination 10.57.215.247:1935

This works for the first route....input port 1935 gets routed to the proper container port 1935.
However, if I try to stream to port 1936, it also gets routed to the first container rather than the second one.

Here is my .conf:


#user nobody;
worker_processes 2;

error_log logs/error.log;
error_log logs/error.log notice;
error_log logs/error.log info;
error_log logs/debug.log debug;

#pid logs/nginx.pid;


events {
worker_connections 1024;
}

rtmp {
server {
listen 1935;
ping 30s;
ping_timeout 10s;
chunk_size 4096;

application LPC1 {
live on;
# push rtmp://10.57.215.214:1935;

}
}
}

rtmp {
server {
listen 1936;
ping 30s;
ping_timeout 10s;
chunk_size 4096;

application LPC2 {
live on;
# push rtmp://10.57.215.247:1935;

}
}
}

And here is the netstat output. As you can see, both ports are being listened on:

sudo netstat -tulpn
[sudo] password for ray:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1011/sshd
tcp 0 0 0.0.0.0:1935 0.0.0.0:* LISTEN 5752/nginx
tcp 0 0 0.0.0.0:1936 0.0.0.0:* LISTEN 5752/nginx
tcp 0 0 10.57.215.1:53 0.0.0.0:* LISTEN 1481/dnsmasq
tcp6 0 0 :::22 :::* LISTEN 1011/sshd
tcp6 0 0 fd42:2bf8:ea23:1a35::53 :::* LISTEN 1481/dnsmasq
tcp6 0 0 fe80::78af:32ff:fe9e:53 :::* LISTEN 1481/dnsmasq
udp 0 0 10.57.215.1:53 0.0.0.0:* 1481/dnsmasq
udp 0 0 0.0.0.0:67 0.0.0.0:* 1481/dnsmasq
udp 0 0 0.0.0.0:68 0.0.0.0:* 823/dhclient
udp6 0 0 :::547 :::* 1481/dnsmasq
udp6 0 0 fd42:2bf8:ea23:1a35::53 :::* 1481/dnsmasq
udp6 0 0 fe80::78af:32ff:fe9e:53 :::* 1481/dnsmasq

Any idea why this is not working? Why is port 1936 going to 10.57.215.214 rather than 10.57.215.247?
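
One thing to check: as far as I know, the rtmp module expects a single top-level `rtmp { }` block with one `server { }` per listening port inside it; a second `rtmp { }` block is a duplicate directive. A sketch merging the two (pushes left commented out, as in the config above):

```nginx
# Sketch: one rtmp block, two servers on different ports.
rtmp {
    server {
        listen 1935;
        chunk_size 4096;
        application LPC1 {
            live on;
            # push rtmp://10.57.215.214:1935;
        }
    }

    server {
        listen 1936;
        chunk_size 4096;
        application LPC2 {
            live on;
            # push rtmp://10.57.215.247:1935;
        }
    }
}
```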

Thanks, this is driving me crazy!

Ray

Nginx reverse proxy then Apache then Prestashop

Hello there,

I've run into trouble with nginx as a dedicated reverse proxy in front of a dedicated Apache server running Prestashop.

my confs are like this :

nginx sites-available :
server {
    listen 80;
    server_name naru.kii.net;
    return 301 https://naru.kii.net$request_uri;
}

server {
    listen 443;
    server_name naru.kii.net;

    error_log /var/log/nginx/naru.access.log;

    ssl on;
    ssl_certificate /etc/nginx/ssl/kii_net/ssl-bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/kii_net/_kii_net.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    set $upstream 192.168.1.5;

    location / {
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass_header Authorization;
        proxy_pass http://$upstream;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering off;
        client_max_body_size 0;
        proxy_read_timeout 36000s;
        proxy_redirect off;
    }
}

apache sites-available :
<VirtualHost *:80>
    DocumentRoot "/var/www/naru"
    ServerName naru.kii.net
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
    <Directory /var/www/naru/>
        Options +FollowSymlinks
        AllowOverride All
        <IfModule mod_dav.c>
            Dav off
        </IfModule>
        <IfModule mod_headers.c>
            Header always set Strict-Transport-Security "max-age=15555000; includeSubDomains"
        </IfModule>
        SetEnv HOME /var/www/naru
        SetEnv HTTP_HOME /var/www/naru
    </Directory>
</VirtualHost>

apache sites-available ssl :
<VirtualHost *:443>
    DocumentRoot "/var/www/naru"
    ServerName naru.kii.net
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
    <Directory /var/www/naru/>
        Options +FollowSymlinks
        AllowOverride All
        <IfModule mod_dav.c>
            Dav off
        </IfModule>
        <IfModule mod_headers.c>
            Header always set Strict-Transport-Security "max-age=15555000; includeSubDomains"
        </IfModule>
        SetEnv HOME /var/www/naru
        SetEnv HTTP_HOME /var/www/naru
    </Directory>
</VirtualHost>

Any idea that could help? I'm currently stuck.

Performance comparison between Apache and Nginx

Hi,

We took performance measurements for Nginx and Apache using the httperf tool and JMeter, and we saw no considerable difference between them.

In some cases Apache even did better than Nginx. We don't know whether we ran the tests in the right environment, so we have attached our test environment and report.

Kindly help us confirm whether Nginx is better than Apache.

Thanks.

Re: Nginx is very slow

I am new to Nginx.

I have a brand new installation of Nginx on my Windows Server 2012 R2.

This is a stable Windows version of Nginx from Kevin Worthington, 1.14.0 Release of Nginx for Windows.
Built by Kevin Worthington on Windows 7 Ultimate 64-bit using Cygwin.

At first Nginx was very fast without PHP and MySQL, I was very happy.

But after I installed PHP version 5.6.31 and MySQL version: 8.0.11 I noticed that Nginx is surprisingly slow with PHP scripts.

I have to wait for 1 minute or more to open my website http://tayskaya-kosmetika.ru which is an exact copy of my website https://superbank.ru

The original website https://superbank.ru is hosted on another server running Apache web server and is fully loaded in just 5 seconds.

What is wrong with my Nginx, PHP, MySQL setup?

The logs are attached.

If you need any other logs please let me know.

I am also concerned about the alert in the Nginx Error log:
[alert] 384#0: recvmsg() returned invalid ancillary data level 1 or type 0

What shall I do to fix this problem?

Re: Nginx is very slow

I want to update my previous post.

It appears that NGINX server is not responsible for this terrible slowness of my website.

I have just switched my website, http://tayskaya-kosmetika.ru , to the integrated IIS server on my VDS hosting Windows Server 2012 R2.

Again it takes about 1 or 2 minutes to open the website. Probably this is an issue with PHP (FastCGI) and system resources being limited by the VDS hosting provider. Anyway, the plan is advertised as 2 cores of an Intel Xeon E5 @ 2.2 GHz with 2 GB of RAM and a 40 GB SSD disk.
The MySQL database is 226 MiB, with a total of 2.2 million rows.

The exact copy of the same website, https://superbank.ru , hosted on a Shared Hosting with another provider is fully loaded in just 5 seconds!

But the hosting provider is complaining that my website is consuming too much MySQL time. That's why I am thinking of VDS /VPS hosting instead of Shared Hosting.

Re: Nginx is very slow

With any CGI application (which PHP is), you need to create a pool of workers to handle high loads, since CGI is considered blocking. For Windows I created a pool configuration of 10 workers (including installation steps) a long time ago; see my signature for details.
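
A sketch of such a pool (ports are hypothetical; each is a separately started php-cgi process, since each one handles a single request at a time):

```nginx
# Sketch: round-robin PHP requests over several FastCGI workers.
upstream php_pool {
    server 127.0.0.1:9001;  # hypothetical ports, one per php-cgi process
    server 127.0.0.1:9002;
    server 127.0.0.1:9003;
}

server {
    listen 80;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php_pool;
    }
}
```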

Re: Nginx is very slow

Thank you for your advice. I will study this subject.

Subdomains and certbot

Hello,

I learned the basics of nginx last year, and since then I've enjoyed learning more about it. Currently I'm working with SSL certs; I have them working well on my main domain.

I followed this tutorial at the beginning to do this: https://www.humankode.com/ssl/how-to-set-up-free-ssl-certificates-from-lets-encrypt-using-docker-and-nginx

My issue is this: I have subdomains alongside my main domain, reachable via the URL plus a port number. From research I've figured out what I have to do in the configuration, and I can manage that part. But I work in a Docker environment. Is there anyone here who works with Docker? I've found many things that could help me with the setup. Basically, I think I need to do the same steps as for the main domain, except with the subdomain's address.

The tutorial above is working fine, my main domain runs on ssl, before without that.. so it is reachable.
Inside the Docker container, I received an error:

Failed authorization procedure. www.subdomain.example.com (http-01): urn:ietf:params:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://www.subdomain.example.com

And so on. It says that the subdomain is not reachable and that I have to check the A/AAAA records.
How can I make this work?
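
For the http-01 challenge, each subdomain needs a DNS A/AAAA record pointing at this server, plus an nginx server block that serves the challenge path from the directory certbot writes to. A sketch (the webroot path is an assumption; adjust it to whatever volume the tutorial's certbot container uses):

```nginx
# Sketch: answer the ACME http-01 challenge for the subdomain on port 80.
server {
    listen 80;
    server_name subdomain.example.com www.subdomain.example.com;

    location /.well-known/acme-challenge/ {
        # Must match the --webroot-path given to certbot.
        root /var/www/letsencrypt;  # assumed path
    }
}
```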

You can find my attachments of example code in this topic.
Thanks in advance,

Colin

Re: Subdomains and certbot

Nevermind, I solved my own problem.

Simple web socket DDOS

When running:

[CODE]
websocket-bench -a 2500 -c 200 wss://s.example.it
[/CODE]
from: https://github.com/M6Web/websocket-bench

On my NGINX server running:

[CODE]
upstream sock {
    server 127.0.0.1:1203 fail_timeout=1s;
}

limit_req_zone $binary_remote_addr zone=mylimit:1m rate=1r/s;

server {
    listen 443 ssl;

    server_name s.example.it;

    ssl on;
    ssl_certificate /etc/nginx/certs/s.example.it/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/s.example.it/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/nginx/certs/dhparam.pem;

    location / {
        limit_req zone=mylimit;
        proxy_redirect off;
        proxy_next_upstream error timeout http_502;
        proxy_pass http://sock;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_read_timeout 7d;
    }
}
[/CODE]

My server completely crashes and an NGINX process reaches 100% CPU. I have no idea what is going on, and this is a really terrifying vulnerability!
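
One observation: `limit_req` only throttles the initial HTTP handshake of each socket; once upgraded, every connection stays open (here for up to `proxy_read_timeout 7d`), so 200 concurrent clients still pile up. A sketch that additionally caps open connections per source IP (the limit value is an assumption to tune):

```nginx
# Sketch: cap concurrent WebSocket connections per client address.
limit_req_zone  $binary_remote_addr zone=mylimit:1m rate=1r/s;
limit_conn_zone $binary_remote_addr zone=wsconn:10m;

server {
    # listen / ssl_* directives as in the config above

    location / {
        limit_req  zone=mylimit burst=5;  # absorb short handshake bursts
        limit_conn wsconn 20;             # at most 20 open sockets per IP
        # proxy_* directives as in the config above
    }
}
```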

High response times (cache hits) for static file caching

We have a freshly installed Nginx server, set up as a proxy cache.

Even though we are caching all files to SSD, we see relatively slow response times (we're talking about cache-hit response times).

In our logs we find a lot of response times of '0.000 seconds' (or close to it), which is what we would expect. However, we also find quite a few response times ranging from just below a second to well above it.

Does anybody have an idea how to figure out what causes the high cache-hit response times?

Attached with this post you will find our configuration and a screenshot from Amplify showing our performance stats.
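
One way to narrow this down is to log the cache status alongside nginx's timing variables, so slow entries can be correlated with HIT/MISS/UPDATING; a sketch:

```nginx
# Sketch: log cache status next to request and upstream timings.
log_format cache_timing '$remote_addr [$time_local] "$request" '
                        'status=$status cache=$upstream_cache_status '
                        'rt=$request_time urt=$upstream_response_time';

access_log /var/log/nginx/cache_timing.log cache_timing;
```

Lines with cache=HIT but a large rt= point at local disk or event-loop stalls rather than the upstream.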