An article on nameko usage and precautions contains this description of initiating remote calls:

The name immediately after rpc is the value of the name class attribute defined on the microservice, i.e. the service name, followed by the RPC method. Using call_async makes the call asynchronous, and calling result_async.result() waits for the asynchronous task to return its result. Note that running ClusterRpcProxy(config) creates the connection and the reply queue, which is relatively time-consuming. If you make a large number of microservice calls, do not create the connection repeatedly; complete all the calls within a single with block. The result of an asynchronous call can only be obtained inside that block, i.e. by calling .result() to wait for it; outside the block the connection is closed and the result can no longer be retrieved.
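To make that concrete, here is a minimal sketch of the pattern, assuming a local RabbitMQ broker and a hypothetical console_service with an upload method:

from nameko.standalone.rpc import ClusterRpcProxy

config = {'AMQP_URI': 'amqp://guest:guest@localhost:5672/'}  # assumed broker address

with ClusterRpcProxy(config) as cluster_rpc:
    # console_service / upload are placeholder names for a microservice and its rpc method
    result_async = cluster_rpc.console_service.upload.call_async(data='{}')
    # .result() blocks until the reply arrives; it must be called inside the with block,
    # because the connection and reply queue are torn down on exit
    value = result_async.result()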

In other words, the warning is about instantiating ClusterRpcProxy too frequently. Combined with the strange behaviour I had seen in the RabbitMQ management console, I felt this was worth investigating.

The following is a common way of calling nameko:

api.py

from fastapi import FastAPI
from loguru import logger
from schemas import (
    UploadRequestBody,
)
from rpc import (
    upload_service_rpc
)

app = FastAPI()

@app.get('/')
async def root():
    return {"message": "Hello World"}

@app.post('/upload/')
def upload(data: UploadRequestBody):
    logger.debug(data.json(ensure_ascii=False))

    success: bool = upload_service_rpc(data)  # initiate the rpc call here
    return {
        'status': success
    }

The upload_service_rpc method is a thin wrapper around the ClusterRpcProxy provided by nameko:

rpc.py

from nameko.standalone.rpc import ClusterRpcProxy
import settings
from schemas import (
    UploadRequestBody,
)
from loguru import logger

config = {
    'AMQP_URI': f'amqp://{settings.AMQP_URI.RABBIT_USER}:'
                f'{settings.AMQP_URI.RABBIT_PASSWORD}@{settings.AMQP_URI.RABBIT_HOST}:'
                f'{settings.AMQP_URI.RABBIT_PORT}/{settings.AMQP_URI.RABBIT_VHOST}'
}

def upload_service_rpc(data: UploadRequestBody) -> bool:
    """ 给 fatapi 暴露的 rpc 接口 """
    with ClusterRpcProxy(config) as cluster_rpc:   # 通过 ClusterRpcProxy 发起 RPC 请求
        success: bool = cluster_rpc.console_service.upload(
            data=data.json(ensure_ascii=False)
        )
        return success

The code above looks fine, but in nameko's implementation every instantiation of ClusterRpcProxy creates a new reply queue in RabbitMQ. If every RPC request instantiates ClusterRpcProxy as frequently as the code above does, a lot of time is wasted creating queues.

The figure is a screenshot of the RabbitMQ admin interface. You can see that when multiple requests are initiated, a large number of queues named in the rpc.reply-standalone_rpc_proxy_{routing_key} format appear.

These rpc.reply-standalone_rpc_proxy_{routing_key} queues are removed a few seconds after they stop receiving messages; they do not live forever.
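To get a feel for the overhead, a rough sketch like the one below (assuming the config dict from rpc.py above and a running console_service with an upload method) compares creating a proxy per call with reusing a single proxy:

import time

from nameko.standalone.rpc import ClusterRpcProxy

def compare(n: int = 20) -> None:
    start = time.perf_counter()
    for _ in range(n):
        # each entry declares a fresh reply queue and connection
        with ClusterRpcProxy(config) as cluster_rpc:
            cluster_rpc.console_service.upload(data='{}')
    print(f'{n} calls, one proxy per call: {time.perf_counter() - start:.2f}s')

    start = time.perf_counter()
    with ClusterRpcProxy(config) as cluster_rpc:
        # one reply queue, reused for every call
        for _ in range(n):
            cluster_rpc.console_service.upload(data='{}')
    print(f'{n} calls, shared proxy: {time.perf_counter() - start:.2f}s')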

Next, modify the code:

api.py

import settings
from loguru import logger
from fastapi import FastAPI
from schemas import (
    UploadRequestBody
)
from rpc import (
    init_rpc_proxy
)

app = FastAPI()


rpc_proxy = init_rpc_proxy()    # make rpc_proxy a global object whose lifetime spans the whole program


@app.post('/upload/')
def upload(data: UploadRequestBody):
    logger.debug(data.json(ensure_ascii=False))

    success: bool = rpc_proxy.console_service.upload(  # call console_service's upload method over rpc
        data=data.json(ensure_ascii=False)
    )

    return {
        'status': success
    }


rpc.py

# coding=utf-8

from nameko.standalone.rpc import ClusterRpcProxy
import settings
from schemas import (
    UploadRequestBody,
)
from loguru import logger

config = {
    'AMQP_URI': f'amqp://{settings.AMQP_URI.RABBIT_USER}:'
                f'{settings.AMQP_URI.RABBIT_PASSWORD}@{settings.AMQP_URI.RABBIT_HOST}:'
                f'{settings.AMQP_URI.RABBIT_PORT}/{settings.AMQP_URI.RABBIT_VHOST}'
}


def init_rpc_proxy():
    return ClusterRpcProxy(config)  # init_rpc_proxy only returns the object; no code is executed here

But when we run the new code above, we get an error:

AttributeError: 'ClusterRpcProxy' object has no attribute 'console_service'

Why? The answer lies in ClusterRpcProxy's __enter__ method: because we are no longer using it as a with-statement context manager, __enter__ never runs. So let's look at what __enter__ actually does.

nameko/standalone/rpc.py

class StandaloneProxyBase(object):   # StandaloneProxyBase is the parent class of ClusterRpcProxy
    class ServiceContainer(object):
        """ Implements a minimum interface of the
        :class:`~containers.ServiceContainer` to be used by the subclasses
        and rpc imports in this module.
        """
        service_name = "standalone_rpc_proxy"

        def __init__(self, config):
            self.config = config
            self.shared_extensions = {}

    class Dummy(Entrypoint):
        method_name = "call"

    _proxy = None

    def __init__(
        self, config, context_data=None, timeout=None,
        reply_listener_cls=SingleThreadedReplyListener
    ):
        container = self.ServiceContainer(config)

        self._worker_ctx = WorkerContext(
            container, service=None, entrypoint=self.Dummy,
            data=context_data)
        self._reply_listener = reply_listener_cls(
            timeout=timeout).bind(container)

    def __enter__(self):
        return self.start()

    def __exit__(self, tpe, value, traceback):
        self.stop()

    def start(self):
        self._reply_listener.setup()
        return self._proxy

    def stop(self):
        self._reply_listener.stop()

class ClusterRpcProxy(StandaloneProxyBase):
    def __init__(self, *args, **kwargs):
        super(ClusterRpcProxy, self).__init__(*args, **kwargs)
        self._proxy = ClusterProxy(self._worker_ctx, self._reply_listener)

StandaloneProxyBase is the parent class of ClusterRpcProxy. You can see that __enter__ executes return self.start(), and start() returns self._proxy rather than the more common self. That is what caused the error above: the ClusterRpcProxy instance itself has no service attributes.
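In other words, what you need to keep hold of is the ClusterProxy returned by start(), not the ClusterRpcProxy wrapper itself. A minimal sketch, reusing the config from rpc.py:

from nameko.standalone.rpc import ClusterRpcProxy

proxy_wrapper = ClusterRpcProxy(config)      # has start()/stop(), but no service attributes
cluster_proxy = proxy_wrapper.start()        # the same object __enter__ would have returned

cluster_proxy.console_service.upload(data='{}')  # service names resolve on the ClusterProxy

proxy_wrapper.stop()                         # close the broker connection when finished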

Once you know the cause of the problem, you can fix it quickly!

api.py

import settings
from loguru import logger
from fastapi import FastAPI
from schemas import (
    UploadRequestBody
)
from rpc import (
    init_rpc_proxy
)

app = FastAPI()


_rpc_proxy = init_rpc_proxy()  # note the distinction between _rpc_proxy and rpc_proxy
rpc_proxy = _rpc_proxy.start()


@app.post('/upload/')
def upload(data: UploadRequestBody):
    logger.debug(data.json(ensure_ascii=False))

    success: bool = rpc_proxy.console_service.upload( # call the rpc method through rpc_proxy
        data=data.json(ensure_ascii=False)
    )

    return {
        'status': success
    }

rpc.py

# coding=utf-8

from nameko.standalone.rpc import ClusterRpcProxy
import settings
from schemas import (
    UploadRequestBody,
)
from loguru import logger

config = {
    'AMQP_URI': f'amqp://{settings.AMQP_URI.RABBIT_USER}:'
                f'{settings.AMQP_URI.RABBIT_PASSWORD}@{settings.AMQP_URI.RABBIT_HOST}:'
                f'{settings.AMQP_URI.RABBIT_PORT}/{settings.AMQP_URI.RABBIT_VHOST}'
}


def init_rpc_proxy():
    return ClusterRpcProxy(config)
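One detail worth adding: because start() is now called manually rather than through the with block, nothing ever calls stop() to close the broker connection (the nameko docstring quoted further below makes the same point). A minimal sketch of closing it on application shutdown, using FastAPI's shutdown event and the _rpc_proxy global from api.py above:

# api.py (addition) -- assumes the app and _rpc_proxy globals defined above

@app.on_event("shutdown")
def close_rpc_proxy():
    # pair the manual start() with a stop(), mirroring what __exit__ would do
    _rpc_proxy.stop()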

Well, let's look at the speed difference before and after:

Test code:

import requests

data = {
    # contents omitted here
}

for i in range(20):
    response = requests.post('http://localhost:63000/upload/', json=data)
    print(response.status_code, response.text)

Run the loop 20 times:

Before the fix:

─➤  time python test_api.py
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
python test_api.py  0.14s user 0.05s system 1% cpu 14.696 total

After the fix:

─➤  time python test_api.py
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
200 {"status":true}
python test_api.py  0.14s user 0.05s system 2% cpu 7.271 total

Because it avoids creating a new reply queue (and connection) for every RPC request, the speed improves significantly.

About 14.7 seconds versus about 7.3 seconds for 20 requests: twice as fast, roughly 0.37 seconds saved per call.

Thread safety

Note, however, that ClusterProxy is not safe for concurrent use, as the docstring in the source code says: "A single-threaded RPC proxy to a cluster of services."

nameko/standalone/rpc.py

class ClusterProxy(object):
    """
    A single-threaded RPC proxy to a cluster of services. Individual services
    are accessed via attributes, which return service proxies. Method calls on
    the proxies are converted into RPC calls to the service, with responses
    returned directly.

    Enables services not hosted by nameko to make RPC requests to a nameko
    cluster. It is commonly used as a context manager but may also be manually
    started and stopped.

    This is similar to the service proxy, but may be uses a single reply queue
    for calls to all services, where a collection of service proxies would have
    one reply queue per proxy.

    *Usage*

    As a context manager::

        with ClusterRpcProxy(config) as proxy:
            proxy.service.method()
            proxy.other_service.method()

    The equivalent call, manually starting and stopping::

        proxy = ClusterRpcProxy(config)
        proxy = proxy.start()
        proxy.targetservice.method()
        proxy.other_service.method()
        proxy.stop()

    If you call ``start()`` you must eventually call ``stop()`` to close the
    connection to the broker.

    You may also supply ``context_data``, a dictionary of data to be
    serialised into the AMQP message headers, and specify custom worker
    context class to serialise them.

    When the name of the service is not legal in Python, you can also
    use a dict-like syntax::

        with ClusterRpcProxy(config) as proxy:
            proxy['service-name'].method()
            proxy['other-service'].method()

    """

Therefore, sharing a single global ClusterProxy concurrently across threads or coroutines is not viable. Instead, borrowing the idea of a database connection pool, we can build a thread-safe connection pool of ClusterProxy objects.

The reference code is as follows:

rpc.py

# coding=utf-8

import threading
import queue

from nameko.standalone.rpc import (ClusterProxy, ClusterRpcProxy)
import settings

config = {
    'AMQP_URI': f'amqp://{settings.AMQP_URI.RABBIT_USER}:'
                f'{settings.AMQP_URI.RABBIT_PASSWORD}@{settings.AMQP_URI.RABBIT_HOST}:'
                f'{settings.AMQP_URI.RABBIT_PORT}/{settings.AMQP_URI.RABBIT_VHOST}'
}


def synchronized(func):

    func.__lock__ = threading.Lock()

    def lock_func(*args, **kwargs):
        with func.__lock__:
            return func(*args, **kwargs)
    return lock_func


class RpcProxyPool:
    queue = queue.Queue()  # unbounded pool of started ClusterProxy objects

    @synchronized
    def get_connection(self) -> ClusterProxy:
        # take a proxy from the pool, creating one first if the pool is empty
        if self.queue.empty():
            conn = self.create_connection()
            self.queue.put(conn)
        return self.queue.get()

    def init_rpc_proxy(self):
        return ClusterRpcProxy(config)

    @synchronized
    def create_connection(self) -> ClusterProxy:
        # start() returns the ClusterProxy that actually resolves service names
        _rpc_proxy: ClusterRpcProxy = self.init_rpc_proxy()
        rpc_proxy: ClusterProxy = _rpc_proxy.start()

        return rpc_proxy

    @synchronized
    def put_connection(self, conn: ClusterProxy) -> bool:
        # hand a proxy back to the pool so other threads can reuse it
        if isinstance(conn, ClusterProxy):
            self.queue.put(conn)
            return True
        return False

api.py

from loguru import logger
from fastapi import FastAPI
from schemas import (
    AddStruct
)
from rpc import (
    RpcProxyPool
)

app = FastAPI()

pool = RpcProxyPool()


@app.get('/')
async def root():
    return {"message": "Hello World"}


@app.post('/upload/')
def upload(data: AddStruct):
    logger.debug(data.dict())

    rpc_proxy = pool.get_connection()

    c: int = rpc_proxy.add_service.add(
        data.a, data.b
    )
    pool.put_connection(rpc_proxy)

    return {
        'r': c
    }

That gives us RpcProxyPool, an unbounded connection pool.
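A quick sketch of exercising the pool from multiple threads; add_service.add is the same hypothetical service used in api.py above, and returning the proxy in a finally block keeps the pool intact even if a call raises:

from concurrent.futures import ThreadPoolExecutor

from rpc import RpcProxyPool

pool = RpcProxyPool()

def call_add(a: int, b: int) -> int:
    conn = pool.get_connection()     # each thread borrows its own ClusterProxy
    try:
        return conn.add_service.add(a, b)
    finally:
        pool.put_connection(conn)    # always hand the proxy back to the pool

with ThreadPoolExecutor(max_workers=8) as executor:
    print(list(executor.map(lambda ab: call_add(*ab), [(i, i) for i in range(20)])))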

The flow chart is roughly as follows (the drawing is rough): the API takes a ClusterProxy from the pool, makes the RPC call, and returns the proxy to the pool.

[flow chart]

