
Background of the project

In back-end microservices it is common to expose a unified gateway to the outside world, so that the whole system has a single entry and exit point and its services converge there. On the front end, however, such a unified entrance is relatively rare: each application usually serves traffic independently. The industry currently also uses micro-front-ends for application scheduling and communication, and nginx forwarding is one of those solutions. Our project needs to converge the entries and exits of its front-end applications, and the available public network ports are limited, so to serve more applications over few ports we borrowed the idea of a back-end gateway and implemented a front-end gateway based on proxy forwarding. This article summarizes the thinking behind that front-end gateway and the pitfalls hit along the way, in the hope of offering some solutions to readers with similar scenarios.

Architecture design

| Name | Role | Remark |
| --- | --- | --- |
| Gateway layer | Carries front-end traffic as the unified entrance | Can be implemented with front-end or back-end routing and is mainly used for traffic segmentation; a single application can also sit here as a hybrid of routing and scheduling |
| Application layer | Hosts the various front-end applications, with no framework restrictions. Applications communicate with each other over http or through the gateway, provided the gateway layer supports receiving and scheduling | Framework- and version-agnostic; each application is deployed separately, and inter-application communication goes over http or over containerized mechanisms such as k8s |
| Interface layer | Obtains data from the back end. Depending on how the back end is deployed, there may be different microservice gateways, standalone third-party interfaces, or BFF-style interfaces such as node.js | A unified, shared interface form can be delegated to the gateway layer for proxy forwarding |

Plan selection

The business logic of the project's application systems is fairly complex and not easy to unify under a micro-front-end framework such as SingleSPA, so we chose a micro-front-end network built mainly on nginx. In addition, several third-party applications will later be embedded as iframes, which raises cross-network issues. Given the business form and the limited number of public network ports, we need a design in which one port virtually serves n applications (1:n). We therefore ultimately chose nginx as the main gateway for traffic and application segmentation.

| Level | Plan | Remark |
| --- | --- | --- |
| Gateway layer | A single nginx instance serves as the public network traffic entrance and segments sub-applications by path | The parent nginx application, as the front-end entry, needs load balancing; here k8s load balancing is used with three replicas, so if a pod dies the k8s mechanism pulls it back up |
| Application layer | Many different nginx applications; because of path segmentation, resource redirection must be handled (see the pitfall cases below) | We use a docker-mounted directory for this |
| Interface layer | Each sub-application's nginx reverse-proxies its own interfaces, but since the browser sends the requests through the parent, they cannot be forwarded here; the front-end code must be adjusted (see the pitfall cases below) | ci/cd scaffolding will follow, along with access plug-ins for common front-end scaffolds such as vue-cli, cra, and umi |

Pitfall cases

Static resource 404 error

[Case description] After proxying by path, html resources resolve normally, but js, css, and other resources return 404 not found errors.

[Case analysis] Most current applications are single-page applications, which rely mainly on js to manipulate the dom. An mv* framework usually performs routing and data interception on the front end, and its template engine looks up resources by relative path, so asset URLs emitted without the proxy prefix cannot be located.

[Solution] Our project is built and deployed with docker + k8s, so we place the build output under a directory whose name matches the parent nginx application's forwarding path. In other words, the child application registers its routing information with the parent application, and the location can then be adjusted through service registration or similar means.

Parent application nginx configuration

{
    "rj": {
        "name": "xxx application",
        "path": "/rj/"
    }
}
server {
    # forward /rj/ traffic to the sub-application registered under that path
    location /rj/ {
        proxy_pass http://ip:port/rj/;
    }
}
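Since the location blocks map one-to-one onto registry entries, the parent configuration can be generated from the registration JSON. A hedged sketch: the registry shape and the `renderLocations` helper are our own assumptions for illustration, not part of nginx or the project's actual tooling.

```javascript
// Generate nginx location blocks from the application registry.
// The registry shape follows the JSON above; 'upstream' is a hypothetical field.
const registry = {
    rj: { name: 'xxx application', path: '/rj/', upstream: 'http://ip:port' }
};

function renderLocations(reg) {
    return Object.values(reg)
        .map(({ path, upstream }) =>
            `    location ${path} {\n        proxy_pass ${upstream}${path};\n    }`)
        .join('\n');
}

console.log(`server {\n${renderLocations(registry)}\n}`);
```

Regenerating and reloading the parent nginx on each registration change keeps the gateway and the registry in sync.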

Sub-application Dockerfile

FROM xxx/nginx:1.20.1
# place the build output under a directory matching the registered path /rj/
COPY ./dist /usr/share/nginx/html/rj/

Interface proxy 404 error

[Case description] After handling the static resources, we requested an interface in the parent application and found that it also returned a 404.

[Case analysis] With the front end and back end separated, back-end interfaces are usually reached through the child application's nginx reverse proxy. The parent application's nginx has no proxy rule for the interface address, so when the browser sends the request through the parent, no matching resource is found.

[Solution] There are two options. One is for the parent application to proxy the back-end interface addresses. This becomes a problem when child applications use the same proxy names, and interfaces do not all come from a single microservice: there may also be different static proxies and BFF forms, so the parent's complexity becomes uncontrollable. The other is for the child application to change its front-end request path to an agreed path, for example by adding the path agreed during service registration for isolation. We use both. For our self-developed projects, the parent application configures the unified gateway and static resource forwarding proxy and agrees on the path name with the sub-application; for example, the back-end gateway uniformly forwards under /api/. For non-self-developed projects, the application currently has to modify its interface paths on access. Later we will provide a plug-in library with api-modification solutions for common scaffolds such as vue-cli/cra/umi; applications built on third-party custom scaffolds still need manual changes, though such teams usually configure the front-end request path in one place. Old applications, such as those built with jq, must be changed by hand.

Here is a demonstration using a project built with vue-cli 3:

// config.js
export const config = {
    data_url: '/rj/api'
};

// api module — axios interceptors (routing, auth, error handling, etc.) are usually set up in '@/xxx'
import request from '@/xxx';
// config.data_url is the single entry point for the base url; changing it here updates every request
import { config } from '@/config';

// a concrete interface
export const xxx = (params) =>
    request({
        url: config.data_url + '/xxx',
        params
    });
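With the agreed prefix in place, every request resolves under the registered path. A quick sketch of how the final URL is assembled (mirroring the concatenation in the config above):

```javascript
// How the final request URL is assembled from the agreed prefix.
const config = { data_url: '/rj/api' };

function buildUrl(endpoint) {
    // '/rj/' is the registered path, '/api' the agreed back-end gateway prefix
    return config.data_url + endpoint;
}

console.log(buildUrl('/xxx')); // '/rj/api/xxx' — matched by the parent's "location /rj/"
```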

Source code analysis

As a lightweight, high-performance web server, nginx has an architecture and design of great reference value, and it offers guiding ideas for the design of node.js and other web frameworks.

Nginx is written in C and assembles its overall architecture from modules, the common ones being the HTTP module, event module, configuration module, and core module. The core module schedules and loads the other modules and mediates the interaction between them.

Since our forwarding relies on proxy_pass inside a location block, let's look at how the proxy module handles proxy_pass.

ngx_http_proxy_module

static char *ngx_http_proxy_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf);

static ngx_command_t ngx_http_proxy_commands[] = {
    {
        ngx_string("proxy_pass"),
        NGX_HTTP_LOC_CONF | NGX_HTTP_LIF_CONF | NGX_HTTP_LMT_CONF | NGX_CONF_TAKE1,
        ngx_http_proxy_pass,
        NGX_HTTP_LOC_CONF_OFFSET,
        0,
        NULL
    }
};


static char *
ngx_http_proxy_pass(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
{
    ngx_http_proxy_loc_conf_t *plcf = conf;

    size_t                      add;
    u_short                     port;
    ngx_str_t                  *value, *url;
    ngx_url_t                   u;
    ngx_uint_t                  n;
    ngx_http_core_loc_conf_t   *clcf;
    ngx_http_script_compile_t   sc;

    /* reject a duplicate proxy_pass in the same location */
    if (plcf->upstream.upstream || plcf->proxy_lengths) {
        return "is duplicate";
    }

    clcf = ngx_http_conf_get_module_loc_conf(cf, ngx_http_core_module);

    /* register the proxy module's request handler for this location */
    clcf->handler = ngx_http_proxy_handler;

    if (clcf->name.len && clcf->name.data[clcf->name.len - 1] == '/') {
        clcf->auto_redirect = 1;
    }

    value = cf->args->elts;

    url = &value[1];

    /* if the URL contains nginx variables, it must be compiled into a
     * script evaluated per request instead of a static upstream */
    n = ngx_http_script_variables_count(url);

    if (n) {

        ngx_memzero(&sc, sizeof(ngx_http_script_compile_t));

        sc.cf = cf;
        sc.source = url;
        sc.lengths = &plcf->proxy_lengths;
        sc.values = &plcf->proxy_values;
        sc.variables = n;
        sc.complete_lengths = 1;
        sc.complete_values = 1;

        if (ngx_http_script_compile(&sc) != NGX_OK) {
            return NGX_CONF_ERROR;
        }

#if (NGX_HTTP_SSL)
        plcf->ssl = 1;
#endif

        return NGX_CONF_OK;
    }

    /* parse the scheme prefix to determine the default port */
    if (ngx_strncasecmp(url->data, (u_char *) "http://", 7) == 0) {
        add = 7;
        port = 80;

    } else if (ngx_strncasecmp(url->data, (u_char *) "https://", 8) == 0) {

#if (NGX_HTTP_SSL)
        plcf->ssl = 1;

        add = 8;
        port = 443;
#else
        ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                           "https protocol requires SSL support");
        return NGX_CONF_ERROR;
#endif

    } else {
        ngx_conf_log_error(NGX_LOG_EMERG, cf, 0, "invalid URL prefix");
        return NGX_CONF_ERROR;
    }

    ngx_memzero(&u, sizeof(ngx_url_t));

    u.url.len = url->len - add;
    u.url.data = url->data + add;
    u.default_port = port;
    u.uri_part = 1;
    u.no_resolve = 1;

    /* register the statically configured upstream */
    plcf->upstream.upstream = ngx_http_upstream_add(cf, &u, 0);
    if (plcf->upstream.upstream == NULL) {
        return NGX_CONF_ERROR;
    }

    plcf->vars.schema.len = add;
    plcf->vars.schema.data = url->data;
    plcf->vars.key_start = plcf->vars.schema;

    ngx_http_proxy_set_vars(&u, &plcf->vars);

    plcf->location = clcf->name;

    if (clcf->named
#if (NGX_PCRE)
        || clcf->regex
#endif
        || clcf->noname)
    {
        if (plcf->vars.uri.len) {
            ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                               "\"proxy_pass\" cannot have URI part in "
                               "location given by regular expression, "
                               "or inside named location, "
                               "or inside \"if\" statement, "
                               "or inside \"limit_except\" block");
            return NGX_CONF_ERROR;
        }

        plcf->location.len = 0;
    }

    plcf->url = *url;

    return NGX_CONF_OK;
}

ngx_http

These two functions from ngx_http.c show how nginx attaches multiple servers to a single listen address:port, which is the mechanism behind the 1:n "virtual port" multiplexing in our design.

static ngx_int_t
ngx_http_add_addresses(ngx_conf_t *cf, ngx_http_core_srv_conf_t *cscf,
    ngx_http_conf_port_t *port, ngx_http_listen_opt_t *lsopt)
{
    ngx_uint_t             i, default_server, proxy_protocol;
    ngx_http_conf_addr_t  *addr;
#if (NGX_HTTP_SSL)
    ngx_uint_t             ssl;
#endif
#if (NGX_HTTP_V2)
    ngx_uint_t             http2;
#endif

    /*
     * we cannot compare whole sockaddr struct's as kernel
     * may fill some fields in inherited sockaddr struct's
     */

    addr = port->addrs.elts;

    for (i = 0; i < port->addrs.nelts; i++) {

        if (ngx_cmp_sockaddr(lsopt->sockaddr, lsopt->socklen,
                             addr[i].opt.sockaddr,
                             addr[i].opt.socklen, 0)
            != NGX_OK)
        {
            continue;
        }

        /* the address is already in the address list */

        if (ngx_http_add_server(cf, cscf, &addr[i]) != NGX_OK) {
            return NGX_ERROR;
        }

        /* preserve default_server bit during listen options overwriting */
        default_server = addr[i].opt.default_server;

        proxy_protocol = lsopt->proxy_protocol || addr[i].opt.proxy_protocol;

#if (NGX_HTTP_SSL)
        ssl = lsopt->ssl || addr[i].opt.ssl;
#endif
#if (NGX_HTTP_V2)
        http2 = lsopt->http2 || addr[i].opt.http2;
#endif

        if (lsopt->set) {

            if (addr[i].opt.set) {
                ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                                   "duplicate listen options for %V",
                                   &addr[i].opt.addr_text);
                return NGX_ERROR;
            }

            addr[i].opt = *lsopt;
        }

        /* check the duplicate "default" server for this address:port */

        if (lsopt->default_server) {

            if (default_server) {
                ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                                   "a duplicate default server for %V",
                                   &addr[i].opt.addr_text);
                return NGX_ERROR;
            }

            default_server = 1;
            addr[i].default_server = cscf;
        }

        addr[i].opt.default_server = default_server;
        addr[i].opt.proxy_protocol = proxy_protocol;
#if (NGX_HTTP_SSL)
        addr[i].opt.ssl = ssl;
#endif
#if (NGX_HTTP_V2)
        addr[i].opt.http2 = http2;
#endif

        return NGX_OK;
    }

    /* add the address to the addresses list that bound to this port */

    return ngx_http_add_address(cf, cscf, port, lsopt);
}

static ngx_int_t
ngx_http_add_addrs(ngx_conf_t *cf, ngx_http_port_t *hport,
    ngx_http_conf_addr_t *addr)
{
    ngx_uint_t                 i;
    ngx_http_in_addr_t        *addrs;
    struct sockaddr_in        *sin;
    ngx_http_virtual_names_t  *vn;

    hport->addrs = ngx_pcalloc(cf->pool,
                               hport->naddrs * sizeof(ngx_http_in_addr_t));
    if (hport->addrs == NULL) {
        return NGX_ERROR;
    }

    addrs = hport->addrs;

    for (i = 0; i < hport->naddrs; i++) {

        sin = (struct sockaddr_in *) addr[i].opt.sockaddr;
        addrs[i].addr = sin->sin_addr.s_addr;
        addrs[i].conf.default_server = addr[i].default_server;
#if (NGX_HTTP_SSL)
        addrs[i].conf.ssl = addr[i].opt.ssl;
#endif
#if (NGX_HTTP_V2)
        addrs[i].conf.http2 = addr[i].opt.http2;
#endif
        addrs[i].conf.proxy_protocol = addr[i].opt.proxy_protocol;

        if (addr[i].hash.buckets == NULL
            && (addr[i].wc_head == NULL
                || addr[i].wc_head->hash.buckets == NULL)
            && (addr[i].wc_tail == NULL
                || addr[i].wc_tail->hash.buckets == NULL)
#if (NGX_PCRE)
            && addr[i].nregex == 0
#endif
            )
        {
            continue;
        }

        vn = ngx_palloc(cf->pool, sizeof(ngx_http_virtual_names_t));
        if (vn == NULL) {
            return NGX_ERROR;
        }

        addrs[i].conf.virtual_names = vn;

        vn->names.hash = addr[i].hash;
        vn->names.wc_head = addr[i].wc_head;
        vn->names.wc_tail = addr[i].wc_tail;
#if (NGX_PCRE)
        vn->nregex = addr[i].nregex;
        vn->regex = addr[i].regex;
#endif
    }

    return NGX_OK;
}

Summary

A front-end gateway does not have to be split out as its own layer: a SingleSPA-like solution can also play the gateway role, using front-end routing to schedule applications and thereby control a single-page application. Separating the sub-applications, however, brings its own advantage: each sub-application can communicate through the parent application or a bus, share public resources, and keep its private resources isolated. For this project's current business form, a separate gateway layer is the better fit, and with nginx each application can be accessed with minimal configuration while the front-end entrance converges. We will also provide scaffolding for the ci/cd process so that application developers can easily build and deploy, achieving an engineered workflow. For any operation that repeats at scale, we should reach for engineering means instead of blindly investing manual labor; after all, machines are better at steady, uniform batch work. Keep at it!
