An Analysis of Servlet 3.1 Async IO



Servlet async processing (see my other article on Servlet 3.0 async processing) gives you a way to handle requests asynchronously: it releases the HTTP thread from slow processing so it can serve other requests, improving the responsiveness of the system.

But async processing alone is not enough, because the speed of the whole request-response cycle also depends on the client's network conditions. If the client's network is poor and its upload and download speeds are slow, the HTTP thread will still be tied up for a long time and cannot be released.

So Servlet 3.1 introduced the Async IO mechanism, which turns reading from the Request and writing to the Response into asynchronous operations.
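Concretely, the new API consists of two callback interfaces plus non-blocking probes (isReady(), isFinished()) on the servlet streams. The sketch below declares local mirrors of javax.servlet.ReadListener and WriteListener so it compiles without servlet-api on the classpath; the signatures follow the Servlet 3.1 spec.

```java
import java.io.IOException;

// Local mirrors of javax.servlet.ReadListener / WriteListener, declared here
// only so this file compiles without servlet-api; signatures per Servlet 3.1.
interface ReadListener {
  void onDataAvailable() throws IOException; // some request-body bytes are readable
  void onAllDataRead() throws IOException;   // the whole body has been read
  void onError(Throwable t);                 // the read failed
}

interface WriteListener {
  void onWritePossible() throws IOException; // the response can accept writes again
  void onError(Throwable t);                 // the write failed
}

public class AsyncIoApi {
  public static void main(String[] args) {
    // In real code you register these with ServletInputStream.setReadListener()
    // and ServletOutputStream.setWriteListener(), and check isReady() (and, for
    // reads, isFinished()) before every read or write.
    System.out.println("ReadListener methods: " + ReadListener.class.getMethods().length);
    System.out.println("WriteListener methods: " + WriteListener.class.getMethods().length);
  }
}
```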

Async Read

Let's start with an example where the client uploads slowly, AsyncReadServlet.java:

@WebServlet(value = "/async-read", asyncSupported = true)
public class AsyncReadServlet extends HttpServlet {

  @Override
  protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {

    System.out.println("Servlet thread: " + Thread.currentThread().getName());
    AsyncContext asyncCtx = req.startAsync();
    ServletInputStream is = req.getInputStream();
    is.setReadListener(new ReadListener() {
      private int totalReadBytes = 0;

      @Override
      public void onDataAvailable() {
        System.out.println("ReadListener thread: " + Thread.currentThread().getName());

        try {
          byte[] buffer = new byte[1024];
          int readBytes = 0;
          while (is.isReady() && !is.isFinished()) {
            int length = is.read(buffer);
            if (length == -1 && is.isFinished()) {
              asyncCtx.complete();
              System.out.println("Read: " + readBytes + " bytes");
              System.out.println("Total Read: " + totalReadBytes + " bytes");
              return;
            }
            readBytes += length;
            totalReadBytes += length;

          }
          System.out.println("Read: " + readBytes + " bytes");

        } catch (IOException ex) {
          ex.printStackTrace();
          asyncCtx.complete();
        }
      }

      @Override
      public void onAllDataRead() {
        try {
          System.out.println("Total Read: " + totalReadBytes + " bytes");
          asyncCtx.getResponse().getWriter().println("Finished");
        } catch (IOException ex) {
          ex.printStackTrace();
        }
        asyncCtx.complete();
      }

      @Override
      public void onError(Throwable t) {
        System.out.println(ExceptionUtils.getStackTrace(t));
        asyncCtx.complete();
      }
    });

  }

}

We use curl's --limit-rate option to simulate a slow upload:

curl -X POST -F "bigfile=@src/main/resources/bigfile" --limit-rate 5k http://localhost:8080/async-read

Then watch the server's output:

Servlet thread: http-nio-8080-exec-3
ReadListener thread: http-nio-8080-exec-3
Read: 16538 bytes
ReadListener thread: http-nio-8080-exec-4
Read: 16384 bytes
ReadListener thread: http-nio-8080-exec-5
Read: 16384 bytes
ReadListener thread: http-nio-8080-exec-7
Read: 16384 bytes
ReadListener thread: http-nio-8080-exec-6
Read: 16384 bytes
ReadListener thread: http-nio-8080-exec-8
Read: 16384 bytes
ReadListener thread: http-nio-8080-exec-9
Read: 16384 bytes
ReadListener thread: http-nio-8080-exec-10
Read: 2312 bytes
ReadListener thread: http-nio-8080-exec-1
Read: 48 bytes
Total Read: 117202 bytes

From the output you can see that, apart from doPost and the first call to onDataAvailable running on the same HTTP thread, all subsequent reads happen on other HTTP threads.
This is because the client pushes data too slowly: the container takes the HTTP thread back, and when it detects that new data is readable it assigns another HTTP thread to read from the InputStream, repeating until everything has been read.
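This contract can be sketched in plain Java, with no servlet container involved. The class and method names below are ours, not part of the Servlet API: the loop in main plays the role of the container re-invoking the listener each time new data arrives (in Tomcat, possibly on a different HTTP thread each time).

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class ReadContractDemo {

  // Chunks that have "arrived" from the slow client so far.
  static final Queue<byte[]> pending = new ArrayDeque<>();
  static int totalReadBytes = 0;

  // Mirrors ServletInputStream.isReady(): data can be read without blocking.
  static boolean isReady() {
    return !pending.isEmpty();
  }

  // Mirrors ReadListener.onDataAvailable(): drain everything that is ready,
  // then return so the HTTP thread goes back to the pool instead of blocking.
  static void onDataAvailable() {
    while (isReady()) {
      totalReadBytes += pending.poll().length;
    }
  }

  public static void main(String[] args) {
    // The "container": each arriving chunk triggers one onDataAvailable()
    // callback, just like the repeated ReadListener lines in the log above.
    for (int i = 0; i < 3; i++) {
      pending.add(new byte[1024]);
      onDataAvailable();
    }
    System.out.println("Total Read: " + totalReadBytes + " bytes"); // 3072 bytes
  }
}
```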

Note: HttpServletRequest.getInputStream() and getParameter*() cannot be used on the same request, since both consume the request body.

Async Write

Now an example where the client downloads slowly, AsyncWriteServlet.java:

@WebServlet(value = "/async-write", asyncSupported = true)
public class AsyncWriteServlet extends HttpServlet {

  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {

    System.out.println("Servlet thread: " + Thread.currentThread().getName());
    AsyncContext asyncCtx = req.startAsync();
    ServletOutputStream os = resp.getOutputStream();
    InputStream bigfileInputStream = ClassLoader.getSystemClassLoader().getResourceAsStream("bigfile");

    os.setWriteListener(new WriteListener() {

      @Override
      public void onWritePossible() throws IOException {

        System.out.println("WriteListener thread: " + Thread.currentThread().getName());
        while (os.isReady()) {
          byte[] bytes = readContent();
          if (bytes != null) {
            os.write(bytes);
            System.out.println("Write bytes: " + bytes.length);
          } else {
            closeInputStream();
            asyncCtx.complete();
            break;
          }
        }
      }
      }

      @Override
      public void onError(Throwable t) {

        try {
          os.print("Error happened");
          os.print(ExceptionUtils.getStackTrace(t));
        } catch (IOException e) {
          e.printStackTrace();
        } finally {
          closeInputStream();
          asyncCtx.complete();
        }

      }

      private byte[] readContent() throws IOException {
        byte[] bytes = new byte[8 * 1024];
        int readLength = IOUtils.read(bigfileInputStream, bytes);
        if (readLength <= 0) {
          return null;
        }
        // The last chunk may be shorter than the buffer; trim it so we don't
        // write trailing garbage to the response.
        return readLength == bytes.length ? bytes : java.util.Arrays.copyOf(bytes, readLength);
      }

      private void closeInputStream() {
        IOUtils.closeQuietly(bigfileInputStream);
      }
    });

  }

}

Again we use curl to simulate a slow download: curl --limit-rate 5k http://localhost:8080/async-write

Then look at the server's output:

Servlet thread: http-nio-8080-exec-1
WriteListener thread: http-nio-8080-exec-1
Write bytes: 8192
WriteListener thread: http-nio-8080-exec-2
Write bytes: 8192
WriteListener thread: http-nio-8080-exec-3
Write bytes: 8192
WriteListener thread: http-nio-8080-exec-4
Write bytes: 8192
WriteListener thread: http-nio-8080-exec-5
Write bytes: 8192
WriteListener thread: http-nio-8080-exec-6
Write bytes: 8192
WriteListener thread: http-nio-8080-exec-7
Write bytes: 8192
WriteListener thread: http-nio-8080-exec-8
Write bytes: 8192
WriteListener thread: http-nio-8080-exec-9
Write bytes: 8192
WriteListener thread: http-nio-8080-exec-10
Write bytes: 8192
WriteListener thread: http-nio-8080-exec-1
Write bytes: 8192
WriteListener thread: http-nio-8080-exec-2
Write bytes: 8192
WriteListener thread: http-nio-8080-exec-3
Write bytes: 8192
WriteListener thread: http-nio-8080-exec-4
Write bytes: 8192
WriteListener thread: http-nio-8080-exec-5
Write bytes: 2312

PS. We later found that even without the --limit-rate option, the output looks much the same.

JMeter

The two examples above used curl for the simulation; we also provide a JMeter benchmark.

Note that JMeter must be started from the directory containing the user.properties file, because that file sets httpclient.socket.http.cps=5120 (characters per second, roughly 5 KB/s) to simulate a slow connection. Then open benchmark.xml in JMeter.
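A minimal sketch of that setup (the property value is from the article; the jmeter invocation and file locations are illustrative):

```shell
# user.properties, placed in the directory you launch JMeter from.
# 5120 characters/second ≈ 5 KB/s, matching curl --limit-rate 5k.
echo "httpclient.socket.http.cps=5120" > user.properties

# Start JMeter from this same directory so the property is picked up,
# then load the benchmark plan:
jmeter -t benchmark.xml
```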



尘心 · March 14, 2018

1. For the write side, it would be good to measure server-side processing time versus IO wait time under traditional blocking IO and under async IO, and compare the two.
2. The write uses os.write(bytes), which is still a blocking call rather than an async one; won't the thread still spend part of its time waiting on IO? Assuming the IO wait time can be measured, try varying the byte[] length; performance might differ a lot.
3. When exactly is WriteListener.onWritePossible() triggered? As soon as the TCP connection's send buffer has any free bytes, or only once the buffer has been completely drained?
And if onWritePossible() fires while the business logic has nothing to write yet and returns immediately, will it be invoked again right away, so that most of the time is spent in useless onWritePossible() calls?


尘心 · March 14, 2018

If you call ServletOutputStream.write(bytes) with more bytes than the underlying buffer currently has room for, the write will block on IO.
