
Foreword

When refactoring code, we often struggle with questions like:

  • Does this code need further abstraction? Will that lead to over-engineering?
  • If further abstraction is needed, how should it be done? Are there general steps or rules to follow?

Unit testing is a common tool for verifying the correctness of code, but using it only for that is like using a cannon to swat a mosquito: overkill. It can also help us judge the degree of abstraction and the design quality of the code. This article proposes taking "testability" as the goal and refactoring code iteratively toward it. With this approach, faced with code of any complexity, the refactoring path can be deduced step by step.

To keep things concrete, this article runs a single "producer-consumer" refactoring example throughout.


Refactoring & Unit Test

Before starting the producer-consumer refactoring example, let's first talk about refactoring itself.

What motivates programmers to refactor a piece of code? There may be different opinions:

  • The code is not concise enough?
  • It is hard to maintain?
  • It does not fit personal habits?
  • It is over-engineered and hard to understand?

In a nutshell, the point is to slow the decay of the code and architecture, and to reduce the cost of maintenance and upgrades. In my opinion, continuous refactoring is the only way to keep software quality up and complexity under control.

Here is another question: what kind of code/architecture is easy to maintain? The industry has many design principles for this, such as the open-closed principle, the single responsibility principle, and the dependency inversion principle. But today I want to look at it from another angle: can testability also serve as a standard for measuring code quality? In general, testable code is clean and maintainable at the same time, but clean, maintainable code is not necessarily testable.

From this point of view, then, refactoring is about enhancing the testability of the code: refactoring for unit tests. Conversely, would you dare refactor a piece of code that has no unit tests? Unit testing and refactoring are thus inseparable.

Next, let's look at a simple example to get a feel for refactoring for unit tests.

public void producerConsumer() {
    BlockingQueue<Integer> blockingQueue = new LinkedBlockingQueue<>();
    Thread producerThread = new Thread(() -> {
        for (int i = 0; i < 10; i++) {
            blockingQueue.add(i + ThreadLocalRandom.current().nextInt(100));
        }
    });
    Thread consumerThread = new Thread(() -> {
        try {
            while (true) {
                Integer result = blockingQueue.take();
                System.out.println(result);
            }
        } catch (InterruptedException ignore) {
        }
    });
    producerThread.start();
    consumerThread.start();
}

The above code can be divided into 3 parts:

  • Producer: adds 10 items to the blocking queue; concretely, for each number i from 0 to 9 it enqueues i plus a random number from [0, 100)
  • Consumer: takes numbers from the blocking queue and prints them
  • Main thread: starts the two threads, producer and consumer

This code looks fairly concise, but is it good code? Try adding a unit test to it. Merely running the code is certainly not enough, because that cannot confirm the production and consumption logic executed correctly. We hardly know where to start, and not because our unit-test writing skills are lacking; the problems lie in the code itself:

  1. It violates the single responsibility principle: the function does three things at once (data transfer, data processing, and thread startup). A unit test that covers all three is hard to write.
  2. The code itself is not repeatable, which undermines unit testing. The non-repeatability shows up as:

    • The logic to be tested runs in asynchronous threads; when it executes and when it finishes are uncontrollable
    • The logic contains random numbers
    • The consumer writes directly to standard output, whose behavior differs across environments: it may go to the screen or be redirected to a file

Having said that, let's pause for a moment and discuss one point: what does "testable" actually mean? Since, as mentioned earlier, the purpose of refactoring is to make code testable, the concept deserves a closer look here.

What does testable mean?

First of all, we need to define what testable means. If a piece of code is testable, it must satisfy two conditions:

  1. A complete set of test cases can be designed for it locally, called a fully covering unit test;
  2. If all of the fully covering test cases pass, the logic is guaranteed to be correct.

Further, if a function's return value depends only on its parameters, so that once the parameters are fixed the return value is uniquely determined, then such a function can always be fully covered. This nice property is called referential transparency.

But most real-world code lacks such a nice property; instead it has many "bad properties", commonly called side effects:

  1. The code makes a remote call, and it is impossible to know in advance whether the call will succeed;
  2. It contains random number generation, so its behavior is nondeterministic;
  3. Its result depends on the current date; for example, an alarm clock that rings only on workday mornings.

Fortunately, we can use some tricks to extract these side effects from the core logic.
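As a sketch of what such extraction looks like (the `AlarmDemo` class, the `shouldRing` method, and its exact "workday before noon" rule are hypothetical illustrations, not from the original), the date-dependent alarm from item 3 can take its clock as a parameter instead of reading the system time:

```java
import java.time.DayOfWeek;
import java.time.LocalDateTime;
import java.util.function.Supplier;

public class AlarmDemo {

    // Side-effect version: reads the ambient system clock, so a test
    // cannot pin down its result.
    public static boolean shouldRing() {
        return shouldRing(LocalDateTime::now);
    }

    // Extracted version: the clock is a parameter, so the result is
    // uniquely determined by the input and can be fully covered.
    public static boolean shouldRing(Supplier<LocalDateTime> clock) {
        LocalDateTime now = clock.get();
        boolean workday = now.getDayOfWeek() != DayOfWeek.SATURDAY
                && now.getDayOfWeek() != DayOfWeek.SUNDAY;
        return workday && now.getHour() < 12;
    }

    public static void main(String[] args) {
        // 2024-01-01 is a Monday: 08:00 rings, 13:00 does not.
        if (!shouldRing(() -> LocalDateTime.of(2024, 1, 1, 8, 0))
                || shouldRing(() -> LocalDateTime.of(2024, 1, 1, 13, 0))
                // 2024-01-06 is a Saturday: the alarm stays silent.
                || shouldRing(() -> LocalDateTime.of(2024, 1, 6, 8, 0))) {
            throw new AssertionError("unexpected alarm behavior");
        }
        System.out.println("ok");
    }
}
```

The side effect (reading the real clock) now lives only in the zero-argument overload, while all the decision logic is deterministic and testable.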

"Referential transparency" requires that a function's outputs be uniquely determined by its inputs. The earlier example easily creates a misunderstanding: that inputs and outputs must be data. In fact, inputs and outputs can themselves be functions, and such a function can also be referentially transparent.

Ordinary functions may be called first-order functions; functions that take a function as a parameter or return a function are called higher-order functions, and higher-order functions can also be referentially transparent.

For a higher-order function f(g) (where g is a function), as long as its return logic is fixed for any specific g, it is referentially transparent, regardless of whether the parameter g or the returned function has side effects. Using this property, we can easily convert a function with side effects into a referentially transparent higher-order function.

A typical function with side effects is as follows:

public int f() {
    return ThreadLocalRandom.current().nextInt(100) + 1;
}

It generates a random number and adds 1 to it, and that random number is what makes it untestable. Let's turn it into a testable higher-order function by passing the random number generation logic in as a parameter:

public int g(Supplier<Integer> integerSupplier) {
    return integerSupplier.get() + 1;
}

The g above is referentially transparent: once a number generator is passed to g, its behavior is fixed, namely "get a number from the generator and add 1". There are no branches or boundary conditions, so a single test case covers it:

public void testG() {
    // g returns an int directly, so assert on the value itself
    assert g(() -> 1) == 2;
}

Here I use lambda expressions to keep the code short, but "function" does not refer only to lambdas. Objects, interfaces, and other constructs of the rich domain model in OOP also carry logic, and passing or returning them can equally be regarded as passing or returning "functions".
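As a small sketch of this point (the class and names below are mine, not from the original), the generator passed to the g above can just as well be a plain object implementing an interface instead of a lambda:

```java
import java.util.function.Supplier;

public class FunctionAsObjectDemo {

    // The same g as above: its "function" parameter is just an interface.
    public static int g(Supplier<Integer> integerSupplier) {
        return integerSupplier.get() + 1;
    }

    public static void main(String[] args) {
        // An anonymous class carrying logic plays exactly the same
        // role as the lambda () -> 1 did.
        Supplier<Integer> fixedGenerator = new Supplier<Integer>() {
            @Override
            public Integer get() {
                return 1;
            }
        };
        if (g(fixedGenerator) != 2) {
            throw new AssertionError("expected 2");
        }
        System.out.println("ok");
    }
}
```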

First round of refactoring

Let's go back to the producer-consumer example from the beginning of this chapter and refactor it using what we learned in the previous chapter.

The first problem making the code untestable is unclear responsibility: it does both data transfer and data processing. So we start by extracting the producer-consumer data transfer code on its own:

public <T> void producerConsumerInner(Consumer<Consumer<T>> producer,
                                      Consumer<Supplier<T>> consumer) {
    BlockingQueue<T> blockingQueue = new LinkedBlockingQueue<>();
    new Thread(() -> producer.accept(blockingQueue::add)).start();
    new Thread(() -> consumer.accept(() -> {
        try {
            return blockingQueue.take();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    })).start();
}

The responsibility of this code is now very clear, and so is the goal of its unit test: verify that data is passed correctly from the producer to the consumer. But we quickly run into the second problem mentioned earlier: the asynchronous threads are uncontrollable, which makes the unit test unstable. Using the technique from the previous chapter, we extract the executor as a parameter:

public <T> void producerConsumerInner(Executor executor,
                                      Consumer<Consumer<T>> producer,
                                      Consumer<Supplier<T>> consumer) {
    BlockingQueue<T> blockingQueue = new LinkedBlockingQueue<>();
    executor.execute(() -> producer.accept(blockingQueue::add));
    executor.execute(() -> consumer.accept(() -> {
        try {
            return blockingQueue.take();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }));
}

At this point we write a stable unit test for it:

private void testProducerConsumerInner() {
    producerConsumerInner(Runnable::run,
            (Consumer<Consumer<Integer>>) producer -> {
                producer.accept(1);
                producer.accept(2);
            },
            consumer -> {
                assert consumer.get() == 1;
                assert consumer.get() == 2;
            });
}

As long as this test passes, the production and consumption logic is shown to be correct. Isn't it surprising that a piece of logic which looks far more complex than the earlier function has essentially only one behavior over its whole input domain, so a single test case covers every case?
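A side note on why this test is deterministic: `Executor` has a single abstract method, `execute(Runnable)`, so the method reference `Runnable::run` is a valid `Executor` that simply runs every submitted task inline on the calling thread. A minimal sketch (the class name is mine, for illustration):

```java
import java.util.concurrent.Executor;

public class SyncExecutorDemo {

    public static void main(String[] args) {
        StringBuilder order = new StringBuilder();

        // Runnable::run satisfies Executor's execute(Runnable) method
        // and invokes each submitted task synchronously, right here.
        Executor sync = Runnable::run;
        sync.execute(() -> order.append("produce;"));
        sync.execute(() -> order.append("consume;"));

        // Both tasks have already run, in submission order, on this
        // thread, which is what makes the unit test above stable.
        if (!order.toString().equals("produce;consume;")) {
            throw new AssertionError(order.toString());
        }
        System.out.println(order);
    }
}
```

In production, the same code receives a real thread pool instead, so only the test needs this trick.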

If you don't like the functional style above, it is easy to transform it into an OOP-style abstract class:

 public abstract class ProducerConsumer<T> {

    private final Executor executor;

    private final BlockingQueue<T> blockingQueue;

    public ProducerConsumer(Executor executor) {
        this.executor = executor;
        this.blockingQueue = new LinkedBlockingQueue<>();
    }
    
    public void start() {
        executor.execute(this::produce);
        executor.execute(this::consume);
    }

    abstract void produce();

    abstract void consume();

    protected void produceInner(T item) {
        blockingQueue.add(item);
    }

    protected T consumeInner() {
        try {
            return blockingQueue.take();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}

At this point the unit test will look like this:

private void testProducerConsumerAbCls() {
    new ProducerConsumer<Integer>(Runnable::run) {
        @Override
        void produce() {
            produceInner(1);
            produceInner(2);
        }

        @Override
        void consume() {
            assert consumeInner() == 1;
            assert consumeInner() == 2;
        }
    }.start();
}

Main function

public void producerConsumer() {
    new ProducerConsumer<Integer>(Executors.newFixedThreadPool(2)) {
        @Override
        void produce() {
            for (int i = 0; i < 10; i++) {
                produceInner(i + ThreadLocalRandom.current().nextInt(100));
            }
        }

        @Override
        void consume() {
            while (true) {
                Integer result = consumeInner();
                System.out.println(result);
            }
        }
    }.start();
}

Second round of refactoring

In the first round of refactoring we only ensured that the data transfer logic is correct; in the second round we will further expand the testable scope.

There are two remaining factors in the code that keep us from expanding the test scope further:

  • the random number generation logic
  • the print logic

Extract these two pieces of logic just as before:

 public class NumberProducerConsumer extends ProducerConsumer<Integer> {

    private final Supplier<Integer> numberGenerator;

    private final Consumer<Integer> numberConsumer;

    public NumberProducerConsumer(Executor executor,
                                  Supplier<Integer> numberGenerator,
                                  Consumer<Integer> numberConsumer) {
        super(executor);
        this.numberGenerator = numberGenerator;
        this.numberConsumer = numberConsumer;
    }

    @Override
    void produce() {
        for (int i = 0; i < 10; i++) {
            produceInner(i + numberGenerator.get());
        }
    }

    @Override
    void consume() {
        // Consume exactly the 10 items the producer emits, so that a
        // synchronous executor (such as Runnable::run in a test) can finish
        for (int i = 0; i < 10; i++) {
            numberConsumer.accept(consumeInner());
        }
    }
}

This time a mixed OOP and functional style is adopted. You could also turn the two constructor parameters numberGenerator and numberConsumer into abstract methods, which would be purer OOP.

It also requires only one test case for full coverage:

private void testNumberProducerConsumer() {
    AtomicInteger expectI = new AtomicInteger();
    // Runnable::run runs tasks inline: produce completes first, then
    // consume drains the ten queued items on the same thread
    new NumberProducerConsumer(Runnable::run, () -> 0,
            i -> {
                assert i == expectI.getAndIncrement();
            }).start();
    assert expectI.get() == 10;
}

At this point the main function becomes:

public void producerConsumer() {
    new NumberProducerConsumer(Executors.newFixedThreadPool(2),
            () -> ThreadLocalRandom.current().nextInt(100),
            System.out::println).start();
}

After two rounds of refactoring, we have turned a piece of haphazard spaghetti code into an elegant structure. Besides being more testable, the code is also more concise, more abstract, and more reusable: all extra benefits brought by refactoring for unit tests.

You may notice that even after two rounds of refactoring we still never test the main function producerConsumer directly; we only get ever closer to covering all the logic inside it. That is because I don't consider it to be within the "test boundary": I would rather cover it with integration tests, which are beyond the scope of this article. The next chapter focuses on test boundaries.

Unit Testing Boundaries

Code within the boundary can be effectively covered by unit tests, while code outside the boundary is not protected by them.

The refactoring process described in the previous chapter is essentially an exploratory process of expanding the test boundary. However, the boundary of unit testing cannot be expanded indefinitely, because a real project inevitably contains untestable parts, such as RPC calls, message sending, or computations based on the current time. These must be pushed somewhere outside the test boundary, and that part remains untestable.

The ideal test boundary looks like this: all of the system's core, complex logic lies within the boundary, and outside it there is only trivially simple code, such as one-line interface calls. That way, any change to the system can be quickly and fully verified by unit tests, and integration tests only need a simple pass. If a problem does surface there, it must be a misunderstanding of an external interface, not an error introduced by the system's internal changes.

Clearly drawn unit test boundaries help build more stable core code, because as we push the test boundary outward we keep stripping side effects out of the core, eventually obtaining a complete, testable core, as the comparison below shows:

[Image: ut1.png]

Good code is never achieved overnight: it is written first, then iterated and refactored gradually. From this perspective, there is little difference between refactoring other people's code and writing new code.

From the above, we can distill a simple refactoring workflow:

[Image: ut2.png]

Following this method, an elegant and testable codebase can be iterated into shape step by step. Even if time constraints prevent reaching the ideal test boundary, you still end up with the most testable code so far, and later maintainers can continue to expand the test boundary on the basis of the existing test cases.

Over-engineering

Let's come back to over-design.

Following the method in this article, over-design can hardly arise. Over-design generally occurs when people design for design's sake or imitate design patterns. Here, every design decision has a clear purpose, namely improving the "testability" of the code, and every technique emerges naturally along the way; nothing is forced.

Moreover, over-design leads to poor "testability". Over-designed code often abstracts away its own core logic, leaving unit tests nothing concrete to verify. If you find a piece of code "very concise and abstract, yet hard to write unit tests for", it is most likely over-designed.

Difference from TDD

TDD has not been mentioned in this article so far, but the content above has surely reminded many readers of the term. TDD is short for "test-driven development", which emphasizes writing test cases before code and consists of three steps:

  • Red: write a test case, run it, and watch it fail
  • Green: make the test pass with the quickest and dirtiest code possible
  • Refactor: refactor the code to make it more elegant

Repeat these three steps continuously during development.
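As a toy illustration of one such cycle (the `price` function and its numbers are invented for this sketch, not from the original), the three steps might leave behind code like this:

```java
public class TddCycleDemo {

    // After the refactor step: the quick-and-dirty "return 100;" that
    // first turned the light green has been replaced by the real logic.
    public static int price(int quantity, int unitPrice) {
        return quantity * unitPrice;
    }

    public static void main(String[] args) {
        // Red: this assertion was written before price() existed,
        // so the very first run could not even compile.
        if (price(2, 50) != 100) {
            throw new AssertionError("2 * 50 should be 100");
        }
        // Green/refactor: a second case forces the real implementation
        // to replace any hardcoded return value.
        if (price(3, 10) != 30) {
            throw new AssertionError("3 * 10 should be 30");
        }
        System.out.println("green");
    }
}
```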

In practice, however, it turns out to be difficult to write test cases first during busy business development, possibly for these reasons:

  • The code structure is not yet settled and the entry and exit points are undefined, so unit tests written in advance will most likely have to be modified later
  • One-sentence requirements from the product side, combined with unfamiliarity with the system, make it hard to write test cases before development

Therefore, the workflow in this article adjusts the order: write the code first, then keep refactoring it to suit unit testing, thereby expanding the system's test boundary.

Viewed from a broader perspective on TDD, though, the general idea of this article is quite similar to it, and the title could just as well have been "TDD in Practice".

