Article from: https://www.cnblogs.com/rinack/p/9967482.html

What is Nginx?

Nginx ("engine x") is a lightweight web server, reverse proxy server, and e-mail (IMAP/POP3) proxy server.

What is reverse proxy?

A reverse proxy is a proxy server that accepts connection requests from the Internet, forwards them to servers on the internal network, and returns each server's response to the client that made the request. The proxy acting in this role is called a reverse proxy server.
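
In nginx terms, the heart of a reverse proxy is the proxy_pass directive. A minimal sketch of the idea (the backend address 127.0.0.1:8080 is only a placeholder, not part of this article's setup):

server {
    listen 80;                            # accept requests from the outside
    location / {
        proxy_pass http://127.0.0.1:8080; # forward them to an internal backend and relay its response
    }
}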

Installation and use

install

The official nginx download site is http://nginx.org; release packages are available for both Linux and Windows.

You can also download the source code and run it after compiling.

Compiling Nginx from source code

After decompressing the source code, run the following commands in the terminal:

$ ./configure
$ make
$ sudo make install

By default, Nginx is installed to /usr/local/nginx. You can change this by setting compile-time options.
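
For example, the install prefix can be changed with the --prefix option of the configure script (the path /opt/nginx below is only an illustration):

$ ./configure --prefix=/opt/nginx
$ make
$ sudo make install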

Windows install

To install Nginx/Win32, download it, unzip it, and run it. The following example uses the root of the C drive:

cd C:\
cd C:\nginx-0.8.54
start nginx

Nginx/Win32 runs as a console program, not as a Windows service. Service mode is still under development.

Usage

nginx is easy to use; only a few commands are needed.

Commonly used commands are as follows:

  • nginx -s stop : Shut down Nginx quickly; relevant information may not be saved and the web service is terminated abruptly.

  • nginx -s quit : Shut down Nginx gracefully; relevant information is saved and the web service ends in an orderly way.

  • nginx -s reload : Reload the configuration after the Nginx configuration has changed, so the new settings take effect without a restart (a convenient test-then-reload pattern is shown after this list).

  • nginx -s reopen : Reopen the log files.

  • nginx -c filename : Start Nginx with the specified configuration file instead of the default one.

  • nginx -t : Do not run Nginx, just test the configuration file. Nginx checks the syntax of the configuration file and tries to open the files it references.

  • nginx -v : Display the nginx version.

  • nginx -V : Display the nginx version, the compiler version, and the configure parameters.
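
As mentioned above, a convenient habit (not from the original article) is to chain the configuration test and the reload so that a broken configuration is never loaded:

$ nginx -t && nginx -s reload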

If you don't want to type these commands every time, you can add a startup.bat file to the nginx installation directory and double-click it to run. Its contents are as follows:

@echo off
rem If nginx was started earlier and a PID file was recorded, stop that process first
nginx.exe -s stop

rem Test the syntax of the configuration file
nginx.exe -t -c conf/nginx.conf

rem Display version information
nginx.exe -v

rem Start nginx with the specified configuration
nginx.exe -c conf/nginx.conf

If you run under Linux, you can write a shell script that does much the same.
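
A minimal sketch of such a script, assuming nginx is on the PATH and conf/nginx.conf is the configuration you want to use (adjust the paths for your installation):

#!/bin/sh
# Stop a previously started instance, ignoring the error if none is running
nginx -s stop || true

# Test the configuration and start nginx only if the test passes
nginx -t -c conf/nginx.conf && nginx -c conf/nginx.conf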

nginx configuration in practice

I have always believed that explaining the configuration of development tools in the context of real scenarios makes it much easier to understand.

HTTP reverse proxy configuration

Let’s start with a small goal: to complete an HTTP reverse proxy without considering complex configuration.

The nginx.conf configuration file is as follows:

Note: conf/nginx.conf is the default configuration file for nginx. You can also specify your own configuration file with nginx -c.

#Running user
#user somebody;

#Number of worker processes, usually set equal to the number of CPUs
worker_processes  1;

#Global error log
error_log  D:/Tools/nginx-1.10.1/logs/error.log;
error_log  D:/Tools/nginx-1.10.1/logs/notice.log  notice;
error_log  D:/Tools/nginx-1.10.1/logs/info.log  info;

#PID file recording the process ID of the currently running nginx
pid        D:/Tools/nginx-1.10.1/logs/nginx.pid;

#Working mode and connection limit
events {
    worker_connections  1024;    #Maximum number of concurrent connections per worker process
}

#Set up the HTTP server and use its reverse proxy function to provide load balancing support
http {
    #Set the MIME types (mail support types), defined by the mime.types file
    include       D:/Tools/nginx-1.10.1/conf/mime.types;
   default_type  application/octet-stream;
   
    #Set up the log format
    log_format  main  '[$remote_addr] - [$remote_user] [$time_local] "$request" '
                     '$status $body_bytes_sent "$http_referer" '
                     '"$http_user_agent" "$http_x_forwarded_for"';
                     
   access_log    D:/Tools/nginx-1.10.1/logs/access.log main;
   rewrite_log     on;
   
    #The sendfile directive specifies whether nginx calls the sendfile function (zero-copy mode) to output files.
    #For ordinary applications it should be on; for applications with heavy disk I/O such as downloads it can be set to off,
    #to balance disk and network I/O processing speed and reduce the system load.
    sendfile           on;
    #tcp_nopush        on;

    #Keep-alive connection timeout
    keepalive_timeout  120;
   tcp_nodelay        on;
   
    #gzip compression switch
    #gzip  on;

    #Set the list of actual backend servers
    upstream zp_server1 {
        server 127.0.0.1:8089;
   }

    #HTTP server
    server {
        #Listen on port 80, the well-known port for the HTTP protocol
        listen       80;
       
        #Define access using www.xx.com
        server_name  www.cnblogs.cn;

        #Home page
        index index.html;

        #Root directory pointing to the webapp
        root  D:\_Workspace\Project\github\zp\SpringNotes\spring-security\spring-shiro\src\main\webapp;
       
        #Encoding format
        charset utf-8;
       
        #Proxy configuration parameters
        proxy_connect_timeout 180;
       proxy_send_timeout 180;
       proxy_read_timeout 180;
       proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;

        #Reverse proxy path (bound to the upstream above); the path after "location" is the mapped path
        location / {
           proxy_pass http://zp_server1;
       } 

        #Static files are handled by nginx itself
        location ~ ^/(images|javascript|js|css|flash|media|static)/ {
            root D:\_Workspace\Project\github\zp\SpringNotes\spring-security\spring-shiro\src\main\webapp\views;
            #Expire after 30 days. Static files are rarely updated; set this larger if they seldom change, or smaller if they change frequently.
            expires 30d;
        }

        #Address for viewing Nginx status
        location /NginxStatus {
           stub_status           on;
           access_log            on;
           auth_basic            "NginxStatus";
           auth_basic_user_file  conf/htpasswd;
       }
   
        #Disallow access to .htxxx files
        location ~ /\.ht {
           deny all;
       }
       
        #Error handling pages (optional configuration)
        #error_page   404              /404.html;
       #error_page   500 502 503 504  /50x.html;
       #location = /50x.html {
       #    root   html;
       #}
   }
}

Well, let’s try it.

Start the webapp, making sure the port it binds to matches the port set in the upstream block in nginx.

Modify the hosts file:

Add the entry 127.0.0.1 www.cnblogs.cn to the hosts file in the C:\Windows\System32\drivers\etc directory, then run the startup.bat from the previous section.

Visit www.cnblogs.cn in the browser; if nothing went wrong, it is already accessible.
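
You can also check from the command line with curl (assuming curl is installed; passing the Host header lets you test even without the hosts entry):

$ curl -I -H "Host: www.cnblogs.cn" http://127.0.0.1/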

Load balancing configuration

In the previous example, the proxy pointed to only one server.

In real-world operation, however, a site usually runs the same application on multiple servers and needs load balancing to distribute the traffic.

nginx can also provide simple load balancing.

Let's assume a scenario: the application is deployed on three Linux servers, 192.168.1.11:80, 192.168.1.12:80, and 192.168.1.13:80. The site's domain name is www.cnblogs.cn and its public IP is 192.168.1.11. nginx is deployed on the server holding the public IP and load-balances all requests.

The nginx.conf configuration is as follows:

http {
    #Set the MIME types, defined by the mime.types file
    include       /etc/nginx/mime.types;
   default_type  application/octet-stream;
    #Set the access log
    access_log    /var/log/nginx/access.log;

    #Set the list of servers for load balancing
    upstream load_balance_server {
        #The weight parameter is the weight: the higher the weight, the greater the probability of being selected
        server 192.168.1.11:80   weight=5;
       server 192.168.1.12:80   weight=1;
       server 192.168.1.13:80   weight=6;
   }

    #HTTP server
    server {
        #Listen on port 80
        listen       80;
       
        #Define access using www.xx.com
        server_name  www.cnblogs.cn;

        #Load-balance all requests
        location / {
            root        /root;                       #Default site root directory for the server
            index       index.html index.htm;        #Index file names for the home page
            proxy_pass  http://load_balance_server;  #Forward requests to the server list defined by load_balance_server

            #Some reverse proxy settings (optional configuration)
            #proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            #Back-end web servers can obtain the user's real IP through X-Forwarded-For
            proxy_set_header X-Forwarded-For $remote_addr;

            proxy_connect_timeout 90;          #Timeout for nginx to connect to the back-end server (proxy connect timeout)
            proxy_send_timeout 90;             #Time for the back-end server to return data (proxy send timeout)
            proxy_read_timeout 90;             #Response time of the back-end server after a successful connection (proxy read timeout)
            proxy_buffer_size 4k;              #Buffer size for the proxy server (nginx) to store user header information
            proxy_buffers 4 32k;               #proxy_buffers: pages average below 32k, hence this setting
            proxy_busy_buffers_size 64k;       #Buffer size under high load (proxy_buffers * 2)
            proxy_temp_file_write_size 64k;    #Temp file threshold; data larger than this is written to disk as it is passed from the upstream server
            client_max_body_size 10m;          #Maximum number of bytes per file allowed in a client request
            client_body_buffer_size 128k;      #Maximum number of bytes of a client request buffered by the proxy
        }
    }
}

Configuration for a site with multiple webapps

As a website gains more and more features, it often becomes necessary to split relatively independent modules out and maintain them separately. That usually results in multiple webapps.

For example, suppose www.cnblogs.cn has several webapps: finance, product, and admin. These applications are distinguished by their context paths:

  • www.cnblogs.cn/finance/

  • www.cnblogs.cn/product/

  • www.cnblogs.cn/admin/

We know that the default port number for HTTP is 80. If all three webapps are started on one server at the same time, they cannot all use port 80. Therefore, the three applications have to bind to different port numbers.

Here the problem arises: when users actually visit the www.cnblogs.cn site, they access the different webapps without specifying a port number. So once again, a reverse proxy is needed.

Configuration is not difficult, let’s see how to do it.

http {
    #Some basic configuration is omitted here

    upstream product_server {
        server www.cnblogs.cn:8081;
   }
   
   upstream admin_server{
       server www.cnblogs.cn:8082;
   }
   
   upstream finance_server{
       server www.cnblogs.cn:8083;
   }

   server {
        #Some basic configuration is omitted here

        #Point to the product server by default
        location / {
           proxy_pass http://product_server;
       }

       location /product/{
           proxy_pass http://product_server;
       }

       location /admin/ {
           proxy_pass http://admin_server;
       }
       
       location /finance/ {
           proxy_pass http://finance_server;
       }
   }
}

HTTPS reverse proxy configuration

Some sites with high security requirements may use HTTPS (a secure HTTP protocol that uses the SSL communication standard).

I won't explain the HTTP protocol or the SSL standard here. However, there are a few things you need to know to configure HTTPS with nginx:

  • The fixed port number for HTTPS is 443, unlike port 80 for HTTP.

  • The SSL standard requires a security certificate, so in nginx.conf you need to specify the certificate and its corresponding key (a self-signed test pair can be generated as sketched below).
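
For local testing, a self-signed certificate and key can be generated with openssl (this command is an assumption on my part, not part of the original article; the file names match the configuration below):

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout cert.key -out cert.pem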

Everything else is basically the same as for an HTTP reverse proxy; only the server section is configured differently.

#HTTPS server
server {
     #Listen on port 443. 443 is a well-known port number, used mainly for the HTTPS protocol.
     listen       443 ssl;

     #Define access using www.xx.com
     server_name  www.cnblogs.cn;

     #SSL certificate file location (common certificate formats: crt/pem)
     ssl_certificate      cert.pem;
     #SSL certificate key location
     ssl_certificate_key  cert.key;

     #SSL configuration parameters (optional configuration)
     ssl_session_cache    shared:SSL:1m;
     ssl_session_timeout  5m;
     #Cipher suites (note that !MD5 excludes MD5 here)
     ssl_ciphers          HIGH:!aNULL:!MD5;
     ssl_prefer_server_ciphers  on;

     location / {
         root   /root;
         index  index.html index.htm;
     }
 }

Static site configuration

Sometimes we need to configure static sites (that is, HTML files and a bunch of static resources).

For example, if all the static resources are in the /app/dist directory, we only need to specify the home page and the host of the site in nginx.conf.

The configuration is as follows:

worker_processes  1;

events {
   worker_connections  1024;
}

http {
   include       mime.types;
   default_type  application/octet-stream;
   sendfile        on;
   keepalive_timeout  65;

   gzip on;
   gzip_types text/plain application/x-javascript text/css application/xml text/javascript application/javascript image/jpeg image/gif image/png;
   gzip_vary on;

   server {
       listen       80;
       server_name  static.zp.cn;

       location / {
           root /app/dist;
           index index.html;
            #Forward any request to index.html
        }
    }
}
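
The last comment states the intent, but the directive that usually implements it is not shown above. For a single-page application the fallback is commonly written with try_files; the snippet below is a sketch of that, not part of the original configuration:

location / {
    root  /app/dist;
    index index.html;
    try_files $uri $uri/ /index.html;   # serve the file if it exists, otherwise fall back to index.html
}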

Then, add a hosts entry:

127.0.0.1 static.zp.cn. Now, by visiting static.zp.cn in a local browser, you can access the static site.

Cross-domain solutions

In web development, a front-end/back-end separation model is often used. In this model, the front end and the back end are independent web applications; for example, the back end is a Java program and the front end is a React or Vue application. For more information, see the article "What is cross-domain, and how to solve it?".

When independent web apps access each other, cross-domain problems are bound to occur. There are generally two ways to solve them:

CORS

Set an HTTP response header on the back-end server, adding the domain names that need access to Access-Control-Allow-Origin.

jsonp

The back end constructs JSON data according to the request and returns it; the front end uses JSONP to get around the cross-domain restriction.

These two approaches are not discussed in this article.

Note that, following the first approach, nginx also provides a solution to the cross-domain problem.

Example: the www.cnblogs.cn site consists of a front-end app and a back-end app. The front-end port is 9000 and the back-end port is 8080.

When the front end and back end interact over HTTP, requests are rejected because of cross-domain restrictions. Let's see how nginx solves it:

First, set CORS in the enable-cors.conf file:

# allow origin list
set $ACAO '*';

# set single origin
if ($http_origin ~* (www.cnblogs.cn)$) {
 set $ACAO $http_origin;
}

if ($cors = "trueget") {
   add_header 'Access-Control-Allow-Origin' "$http_origin";
   add_header 'Access-Control-Allow-Credentials' 'true';
   add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
   add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
}

if ($request_method = 'OPTIONS') {
 set $cors "${cors}options";
}

if ($request_method = 'GET') {
 set $cors "${cors}get";
}

if ($request_method = 'POST') {
 set $cors "${cors}post";
}

Next, include enable-cors.conf in your server block to bring in the cross-domain configuration:

# ----------------------------------------------------
# This configuration fragment belongs to the project's nginx setup
# It can be included directly in the nginx configuration (recommended),
# or copied into an existing nginx configuration and adapted by hand
# The www.cnblogs.cn domain name needs to be configured in DNS or the hosts file
# CORS is enabled for the API through the other configuration file in this directory
# ----------------------------------------------------
upstream front_server{
 server www.cnblogs.cn:9000;
}
upstream api_server{
 server www.cnblogs.cn:8080;
}

server {
 listen       80;
 server_name  www.cnblogs.cn;

 location ~ ^/api/ {
   include enable-cors.conf;
   proxy_pass http://api_server;
   rewrite "^/api/(.*)$" /$1 break;
 }

 location ~ ^/ {
   proxy_pass http://front_server;
 }
}
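
To check that the CORS headers are returned, you can send a request with an Origin header and inspect the response (curl is assumed; /api/some-endpoint is only an example path):

$ curl -i -H "Origin: http://www.cnblogs.cn" http://www.cnblogs.cn/api/some-endpoint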

So far, it’s done.
