Nginx is a free, open-source, high-performance HTTP server and reverse proxy; it is also an IMAP, POP3, and SMTP proxy server. Nginx can be used as an HTTP server to publish and serve a website, and it can also act as a reverse proxy to implement load balancing.

This article gives a brief introduction to nginx from three aspects:

  • Reverse proxy
  • Load balancing
  • Nginx features

1. Reverse proxy

About proxies

When we talk about proxies, we must first clarify one concept: a proxy is a representative, a channel.

Two roles are involved here: the proxy role and the target role. The process in which a client accesses the target role through the proxy to complete some task is the proxying process. Take a franchised store in daily life: a customer buys a pair of shoes at an Adidas store. The store is the proxy, the party being represented is the Adidas manufacturer, and the target role is the user.


Forward proxy

Before discussing the reverse proxy, let's look at the forward proxy, the most common proxy mode. We will explain how a forward proxy works from two angles: software and daily life.

In today's network environment, if we need to visit certain foreign websites for technical reasons, we may find that the browser cannot reach them directly. In that case we usually go through a proxy: we find a proxy server that can access the foreign site, send our request to that proxy, the proxy visits the foreign website on our behalf, and then forwards the data back to us.

This proxy mode is called a forward proxy. Its defining characteristic is that the client knows exactly which server it wants to access, while the server only knows which proxy server the request came from, not which actual client sent it; a forward proxy thus shields, or hides, the real client's information.
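As a rough illustration of this pattern (not from the original article): nginx itself can act as a simple forward proxy for plain HTTP traffic. The sketch below is an assumption-laden example; the resolver address is arbitrary, and stock nginx cannot forward-proxy HTTPS (the CONNECT method) without third-party modules.

```nginx
# Hypothetical minimal forward proxy for plain HTTP only.
server {
    listen 8080;
    resolver 8.8.8.8;                       # assumed public DNS resolver

    location / {
        # Rebuild the upstream URL from the client's own request line,
        # so nginx fetches whatever host the client asked for.
        proxy_pass http://$http_host$request_uri;
    }
}
```

A client configured to use this host on port 8080 as its HTTP proxy would have its requests fetched by nginx on its behalf, which is exactly the forward-proxy pattern described above.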


Reverse proxy

Now that we understand the forward proxy, let's look at how a reverse proxy works. Suppose a certain large e-commerce website sees its daily visit count explode; a single server is far from able to satisfy the people's growing desire to shop. The answer, in a word, is distributed deployment: deploy multiple servers to solve the problem of limited access capacity. Most of that site's functionality is implemented directly with nginx as a reverse proxy, and by wrapping nginx together with other components they even gave it a lofty name: Tengine. Interested readers can visit the Tengine official website for details.
So how does a reverse proxy implement distributed cluster operation? Let's look at a schematic diagram first.


From the diagram above you can see clearly that multiple clients send requests to the server; after the nginx server receives them, it distributes them to back-end business servers according to certain rules. Here the origin of each request (the client) is explicit, but which server ultimately handles it is not; nginx plays the role of the reverse proxy.

Reverse proxying is mainly used for the distributed deployment of server clusters; it conceals the servers' information.
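A minimal sketch of this setup in nginx configuration (the domain and the back-end address 127.0.0.1:8000 are illustrative assumptions, not from the article):

```nginx
# nginx listens on port 80 and forwards every request to one back-end
# business server; the client never sees the back-end's address.
server {
    listen 80;
    server_name example.com;                # hypothetical domain

    location / {
        proxy_pass http://127.0.0.1:8000;   # assumed back-end server
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```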

Project scenario

Normally, in a real project, the forward proxy and the reverse proxy are likely to coexist in one application scenario: the forward proxy forwards the clients' requests to the target server, and that target server is itself a reverse proxy server fronting multiple real business-processing servers. The concrete topology is as follows:

2. Load balancing

We have clarified the concept of a proxy server. Next: when nginx plays the role of a reverse proxy server, by what rules does it distribute the requests? And can those rules be controlled for different application scenarios?

The number of requests the nginx reverse proxy server receives from clients is what we call the load.

The rule by which those requests are distributed to different servers is the balancing rule.

So the process of distributing the requests received by the server according to some rule is called load balancing.

In real projects, load balancing comes in two forms: hardware load balancing and software load balancing. Hardware load balancing, also called hard load (F5, for example), is costly, but its stability and data security are excellent; companies such as China Mobile and China Unicom tend to choose hard load. For cost reasons, more companies choose software load balancing, a distribution mechanism implemented by combining existing software technology with the host's hardware.


nginx supports the following load-balancing scheduling algorithms:

  1. Weighted round-robin (default): incoming requests are assigned one by one, in order, to different back-end servers. If a back-end server goes down during use, nginx automatically removes it from the queue, and request handling is unaffected. In this mode you can set a weight value (weight) on each back-end server to adjust the share of requests each one receives: the larger the weight, the higher the probability of being assigned a request. The weight is mainly tuned to the differing hardware configurations of the back-end servers in a real working environment.

  2. ip_hash: each request is assigned according to a hash of the originating client's IP, so requests from a fixed client IP always reach the same back-end server. To some extent this solves the session-sharing problem in a clustered deployment environment.

  3. fair: an intelligent scheduling algorithm that assigns requests dynamically based on each back-end server's request-processing time: servers with short response times (high efficiency) get a higher probability of receiving requests, and servers with long response times (low efficiency) receive fewer. It combines the advantages of the previous two. Note, however, that nginx does not support the fair algorithm by default; to use it, install the upstream_fair module.

  4. url_hash: requests are assigned according to a hash of the accessed URL, so each URL is always directed to the same back-end server. This can improve cache efficiency when nginx is used as a static server. Also note that nginx does not support this scheduling algorithm by default; to use it, you need to install nginx's hash package.
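The four scheduling algorithms above can be sketched as upstream blocks roughly as follows. The host addresses are placeholders; `fair` requires the third-party upstream_fair module mentioned above, and the `hash` directive shown for url_hash is the built-in form available in nginx 1.7.2+ (older versions needed a separate hash module):

```nginx
# 1. Weighted round-robin (default): higher weight, more requests
upstream backend_weighted {
    server 192.168.0.1:8000 weight=3;
    server 192.168.0.2:8000 weight=1;
}

# 2. ip_hash: the same client IP always reaches the same back end
upstream backend_iphash {
    ip_hash;
    server 192.168.0.1:8000;
    server 192.168.0.2:8000;
}

# 3. fair: schedule by response time (third-party upstream_fair module)
upstream backend_fair {
    fair;
    server 192.168.0.1:8000;
    server 192.168.0.2:8000;
}

# 4. url_hash: the same URL always reaches the same back end
upstream backend_urlhash {
    hash $request_uri consistent;
    server 192.168.0.1:8000;
    server 192.168.0.2:8000;
}
```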


1. Windows installation

Official website download address:



As shown below, download the appropriate version of the nginx zip package and extract it to the folder on your computer where you keep software.


After unzipping, the directory structure is as follows:



Start nginx

1) Double-click nginx.exe in the directory to start the nginx server.

2) Open a command line in this folder and run the nginx command to start the server directly.

D:/resp_application/nginx-1.13.5> nginx


Access nginx

Open the browser and enter the address http://localhost; if the following page appears, the visit succeeded.


Stop nginx

Enter the nginx root directory on the command line and execute one of the following commands to stop the server:

# Force-stop the nginx server; any unprocessed data is discarded
D:/resp_application/nginx-1.13.5> nginx -s stop

# Gracefully stop the nginx server; pending data is processed first, then the server stops
D:/resp_application/nginx-1.13.5> nginx -s quit


2. Ubuntu installation

As with ordinary software, install it directly with the following command:

$ sudo apt-get install nginx

After installation completes, the nginx command lives under /usr/sbin/, and all nginx configuration files are under /etc/nginx/; they are used to configure the nginx server, load balancing, and so on.

Check whether the nginx process has started:
$ ps -ef|grep nginx

nginx automatically creates a matching number of processes based on the number of CPU cores of the current host (the Ubuntu host here has a 2-core, 4-thread configuration).

Note: four service processes are actually started here, because when nginx starts it also launches a daemon process that protects the formal processes from being terminated; if a formal nginx process is terminated, the daemon restarts it automatically.

The daemon process is usually called the master process, and the business processes are called worker processes.

Start the nginx server command

Executing nginx directly starts the server according to the default configuration file.

$ nginx
Stop the nginx service command

As with Windows, there are two ways to stop it:

$ nginx -s stop
$ nginx -s quit
Restart and reload

You can also use the reopen and reload commands to restart nginx or reload the configuration file.


3. Mac OS installation

You can install nginx directly through brew, or by downloading the tar.gz package.

Install directly through brew:

brew install nginx

After the installation completes, the follow-up commands (server start, process view, server stop, server restart, and configuration reload) are all the same as above.

nginx configuration

nginx is a very powerful web server plus reverse proxy server, and can also serve as a mail server, among other things.

The three core functions used in projects are reverse proxying, load balancing, and static file serving.

Using these three functions is closely tied to nginx's configuration. The server's configuration information is mainly concentrated in the nginx.conf configuration file, and all the configurable options fall roughly into the following parts:

main                                # Global configuration

events {                            # Working-mode configuration
    ...
}

http {                              # HTTP settings
    ...

    server {                        # Server host configuration
        ...
        location {                  # Routing configuration
            ...
        }
        location path {
            ...
        }
        location otherpath {
            ...
        }
    }

    server {
        ...
        location {
            ...
        }
    }

    upstream name {                 # Load-balancing configuration
        ...
    }
}


As shown in the configuration file above, it consists of 6 parts:

  1. main: global nginx configuration
  2. events: nginx working-mode configuration
  3. http: HTTP protocol settings
  4. server: server access configuration
  5. location: access-routing configuration
  6. upstream: load-balancing configuration


Look at the following configuration code

# user nobody nobody;
worker_processes 2;
# error_log logs/error.log
# error_log logs/error.log notice
# error_log logs/error.log info
# pid logs/
worker_rlimit_nofile 1024;

The above configuration is a configuration item stored in the main global configuration module.

  • user: specifies the user and group the nginx worker processes run as; by default they run as the nobody account.
  • worker_processes: specifies the number of worker processes nginx spawns; each process consumes some memory while running (usually a few MB to tens of MB) and should be tuned to the actual situation, normally an integer multiple of the number of CPU cores.
  • error_log: defines the location and output level of the error log file [debug / info / notice / warn / error / crit].
  • pid: specifies the location of the file storing the process ID.
  • worker_rlimit_nofile: specifies the maximum number of file descriptors a process can open.

events module

events {
    worker_connections 1024;
    multi_accept on;
    use epoll;
}

The above configures how the nginx server's working mode operates.

  • worker_connections: specifies the maximum number of connections each worker can receive simultaneously; note that the overall maximum is bounded together with worker_processes.
  • multi_accept: tells nginx to accept as many connections as possible after receiving a new-connection notification.
  • use epoll: specifies the event polling method; on Linux 2.6+ use epoll, and on BSD systems such as Mac use kqueue.


As a web server, the http module is nginx's core module and has the most configuration items. In a project, many of them should be set appropriately for the actual business scenario and the hardware; in ordinary cases, the default configuration is sufficient.

http {
    # Basic configuration
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    server_tokens off;
    server_names_hash_bucket_size 64;
    server_name_in_redirect off;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # SSL certificate configuration
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;  # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    # Log configuration
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # Gzip compression configuration
    gzip on;
    gzip_disable "msie6";
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types text/plain text/css application/json application/javascript
               text/xml application/xml application/xml+rss text/javascript;

    # Virtual host configuration
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}


1) Basic configuration

  • sendfile on: enables sendfile, so file delivery is completed in the kernel's data buffers rather than in the application, which is good for performance.
  • tcp_nopush on: lets nginx send all header files in one packet rather than one at a time.
  • tcp_nodelay on: tells nginx not to buffer data but to send it segment by segment; useful when data transmission has real-time requirements, so a small piece of data gets its return value immediately, but do not abuse it.
  • keepalive_timeout 10: the connection timeout assigned to the client; the server closes the connection after this time. A shorter setting generally lets nginx work better.
  • client_header_timeout 10: sets the request-header timeout.
  • client_body_timeout 10: sets the request-body timeout.
  • send_timeout 10: specifies the client response timeout; if the client is idle longer than this between two interactions, the server closes the link.
  • limit_conn_zone $binary_remote_addr zone=addr:5m: sets the shared-memory zone used to track connections per key (here the client address).
  • limit_conn addr 100: sets the maximum number of connections for a given key.
  • server_tokens: turning this off will not make nginx faster, but it removes the nginx version number from error pages, which is good for site security.
  • include /etc/nginx/mime.types: an instruction to include another file in the current file.
  • default_type application/octet-stream: specifies that files are treated as binary by default.
  • types_hash_max_size 2048: affects the hash table's collision rate. A larger value uses more memory but lowers the key collision rate and speeds up lookups; a smaller value uses less memory, raises the collision rate, and slows lookups.

2) Log configuration

  • access_log logs/access.log: sets the log storing access records.
  • error_log logs/error.log: sets the log storing error records.

3) SSLCertificate encryption

  • ssl_protocols: enables the specified encryption protocols. Since versions 1.1.13 and 1.0.12, nginx defaults to ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; TLSv1.1 and TLSv1.2 require OpenSSL >= 1.0.1. SSLv3 is still used in many places but has many vulnerabilities that have been exploited.
  • ssl_prefer_server_ciphers: when negotiating the encryption algorithm, prefer the server's cipher suites rather than the client browser's.

4) Compression configuration

  • gzip: tells nginx to send data compressed with gzip, reducing the amount of data we send.
  • gzip_disable: disables gzip for the specified clients; we set it to IE6 and below so our solution stays widely compatible.
  • gzip_static: tells nginx to look for pre-gzipped resources before compressing them; this requires you to pre-compress your files (commented out in this example) and allows the maximum compression ratio, so nginx does not have to recompress the files.
  • gzip_proxied: allows or forbids compressing responses based on the request and response; we set it to any, meaning all requests are compressed.
  • gzip_min_length: sets the minimum number of bytes before compression is enabled; if a response is smaller than 1000 bytes, it is better not to compress it, because compressing such small data slows down every process handling the request.
  • gzip_comp_level: sets the compression level, any value from 1 to 9; 9 is the slowest but compresses the most. We set it to 4, a reasonable compromise.
  • gzip_types: sets the data formats to compress; the example above lists some, and you can add more.

5) File cache configuration

  • open_file_cache: when enabled, specifies the maximum number of cached entries and how long to cache them; we can set a relatively long maximum so that entries untouched for more than 20 seconds are cleared.
  • open_file_cache_valid: specifies the interval at which the information in open_file_cache is revalidated.
  • open_file_cache_min_uses: defines the minimum number of uses of a file within the inactive period of open_file_cache for it to stay cached.
  • open_file_cache_errors: specifies whether to cache the error when a file lookup fails. We also include server modules defined in different files; if your server modules are not in those locations, you have to modify this line to point at the correct location.
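Put together, a file-cache configuration matching the descriptions above might look like this (the concrete numbers are illustrative, not prescriptive):

```nginx
open_file_cache max=1000 inactive=20s;   # cache up to 1000 entries, drop after 20s idle
open_file_cache_valid 30s;               # re-check cached entries every 30s
open_file_cache_min_uses 2;              # keep only files used at least twice while cached
open_file_cache_errors on;               # cache failed lookups too
```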


The server module is a submodule of the http module, used to define a virtual host, that is, the configuration information of one virtual server.

server {
    listen        80;
    server_name   localhost;
    root          /nginx/www;
    index         index.php index.html index.htm;
    charset       utf-8;
    access_log    logs/access.log;
    error_log     logs/error.log;
}

The core configuration information is as follows:

  • server: one virtual host configuration; one http block can contain multiple server blocks.

  • server_name: specifies the IP address or domain name; multiple entries are separated by spaces.

  • root: the root directory of the entire server virtual host, i.e. the root directory of all web projects on the current host.

  • index: the global home page shown when a user visits the website.

  • charset: sets the default encoding format of the web pages configured under the www/ path.

  • access_log: specifies the access log and its storage path in the virtual host server.

  • error_log: specifies the storage path of the error log in the virtual host server.


The location module is the most frequently configured part of nginx; it is mainly used to configure routing information for access.

In the routing access information configuration, it is associated with various functions such as reverse proxy and load balancing, so the location module is also a very important configuration module.

Basic configuration

location / {
    root    /nginx/www;
    index   index.php index.html index.htm;
}

location /: matches access to the root directory

root: the root directory used for this location, i.e. the virtual host's web directory

index: the default resource files listed when the user does not request a specific resource
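Beyond the single root location, a typical sketch separates static assets from the default route. The /static/ prefix and the expires value here are illustrative assumptions, not from the article:

```nginx
location / {
    root  /nginx/www;
    index index.php index.html index.htm;
}

# Longest-prefix match wins: /static/ requests bypass the default route
location /static/ {
    root    /nginx/www;
    expires 30d;          # let browsers cache static assets for 30 days
}
```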

Reverse proxy configuration

With reverse proxying, the client accesses the target server transparently through the proxy server; this transparency is achieved through the proxy_set_header configuration.

location / {
    proxy_pass http://localhost:8888;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $http_host;
}

uwsgi configuration

Server access configuration in wsgi mode:

location / {
    include uwsgi_params;
    uwsgi_pass localhost:8888;
}


The upstream module is mainly responsible for the load-balancing configuration, distributing requests to the back-end servers through the default round-robin scheduling.

A simple configuration is as follows (each host:port stands for a real back-end address):

upstream name {
    server host:port;
    server host:port down;
    server host:port max_fails=3;
    server host:port fail_timeout=20s;
    server host:port max_fails=3 fail_timeout=20s;
}

The core configuration information is as follows:

  • ip_hash: specifies the request scheduling algorithm; the default is weighted round-robin, and ip_hash can be specified instead.

  • server host:port: the list of servers to distribute to.

  • — down: indicates that this host is pausing service.

  • — max_fails: the maximum number of failures; when it is exceeded, the server's service is suspended.

  • — fail_timeout: if a request fails, how long to pause before the server is tried again.
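Tying upstream and location together, a complete load-balancing configuration might look like the following sketch. The upstream name and the back-end addresses are illustrative assumptions:

```nginx
upstream my_backend {                       # hypothetical upstream name
    server 192.168.0.1:8000 weight=2;
    server 192.168.0.2:8000 max_fails=3 fail_timeout=20s;
    server 192.168.0.3:8000 down;           # temporarily out of rotation
}

server {
    listen 80;

    location / {
        proxy_pass http://my_backend;       # requests are balanced across the upstream
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```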


Installing and using nginx on Windows

One of nginx's functions is to start a local server and serve the target files by configuring server_name and a root directory.

1. Download

Unzip after downloading.

2. Modify the configuration file

The nginx configuration file is at nginx-1.8.0\conf\nginx.conf

http {
    gzip  on;

    # Static file server
    server {
        listen 80;
        server_name;
        location / {
            root G:/source/static_cnblog_com;
        }
    }

    # HTML file server
    server {
        listen 80;
        server_name localhost;
        location / {
            root  G:/source/html/mobile/dist;
            index index.html index.htm;
        }
    }
}


As configured above, you can define multiple server blocks; visiting http://localhost serves the G:/source/html/mobile/dist directory, and you can also enable gzip to compress the HTML.

3. Start up

Be careful not to start nginx by double-clicking nginx.exe: it can leave configuration changes unapplied after a restart and make stopping nginx fail, forcing you to manually close all nginx processes in Task Manager.

In the nginx.exe directory, open a command-line tool and use these commands to start / stop / restart nginx:

start nginx      : start nginx
nginx -s reload  : reload the configuration so changes take effect
nginx -s reopen  : reopen the log files
nginx -t -c /path/to/nginx.conf : test the correctness of an nginx configuration file

Close nginx:

nginx -s stop  : fast stop
nginx -s quit  : complete, orderly stop

If you get the error:

bash: nginx: command not found

you may be running the Windows command in a Linux command-line environment.

If nginx -s reload previously reported an error, try ./nginx -s reload, or run the command from the Windows command-line tool.

