Reasoning about Complexity – Part 2

By Lucian Radu Teodorescu

Overload, 31(176):12-17, August 2023


Understanding code could increase our productivity by an order of magnitude. Lucian Radu Teodorescu introduces a complexity measure to help us reason about code and tackle complexity.

This is the second part of the article on reasoning and complexity. In the first part, we argued the importance of reasoning in software engineering, and started exploring some dimensions of reasoning that we might apply in our field.

In this part, we use this reasoning to tackle complexity. Starting from Brooks’s ‘No Silver Bullet’ article [Brooks95], we draw a sharp distinction between what’s essential and what’s accidental in software engineering, so that we can reason properly about each. We then define a framework for analysing essential complexity in a more formal manner. For accidental complexity, we cannot find such a formal system, but the discussion about reasoning from the first part gives us a hint on how to approach the accidental complexity found in software.

If the first part of the article looked like an essay, this part is like a play in 14 acts.

Essential and accidental complexity

Act 1: The actors introduce themselves

Brooks describes software as having two types of difficulties: essential and accidental [Brooks95]. In the first category we find the difficulties inherent to the nature of software; in the second, those difficulties that today attend its production but are not inherent to it. In the essential category, Brooks lists complexity (no two parts are alike), conformity (there is no more fundamental level to which all software can be reduced), changeability (software is constantly changing), and invisibility (software cannot be drawn in space; it is many-dimensional).

Brooks argues that by now we have solved a major part of the accidental difficulties, and what’s left are essential difficulties; we cannot get a ten-fold increase in productivity because we cannot improve much on these essential difficulties. He lists a series of promising technologies and paradigms and argues that none of them can bring that ten-fold increase in productivity.

If we strictly follow Brooks’s wording and categorisation, it makes no sense for us to discuss essential complexity: complexity is always inherent, always essential. However, nowadays we often recast Brooks’s distinction in terms of essential complexity and accidental complexity. For example, Kevlin Henney makes use of these newer terms [Henney22b].

But neither of the two approaches properly defines a clean boundary between what’s essential and what’s accidental. This is always left to interpretation.

Act 2: The dilemma

Let’s take an example. Let’s say we have a project in which we need to create an echo server: it accepts TCP/TLS connections, and whenever it receives a message, it replies with the content of the received message.

By both accounts, there is an inherent complexity to the problem itself (accepting connections, communication protocols, reading and writing messages, etc.). This is most probably essential.
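
To make the discussion concrete, here is a minimal sketch of such an echo server, using plain POSIX sockets; this is only an illustration, so TLS, concurrency and error handling are deliberately left out, and the port number and buffer size are arbitrary choices.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
  int listener = socket(AF_INET, SOCK_STREAM, 0);   // create a TCP socket
  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_ANY);
  addr.sin_port = htons(7777);                      // arbitrary port for the sketch
  bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
  listen(listener, SOMAXCONN);
  while (true) {
    int conn = accept(listener, nullptr, nullptr);  // wait for a client
    if (conn < 0) continue;
    char buf[4096];
    ssize_t n;
    while ((n = read(conn, buf, sizeof(buf))) > 0)  // reply with the received bytes
      write(conn, buf, n);
    close(conn);
  }
}

Even this stripped-down sketch already contains details (address structures, byte counts, loops) that appear nowhere in the problem statement.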

Now, if one chooses to solve this problem imperatively (perhaps with object-oriented programming) or in a functional manner, is that choice essential or accidental? A naive reading of Brooks may suggest that this is essential, not accidental. After all, the paradigm we are using has a great influence on the data structures, algorithms, and function invocations we need to solve the problem. On the other hand, Kevlin argues that this is part of the accidental.

Moving on, we make a choice of programming language, and then probably multiple choices of which technologies to use in this project. Are these essential or accidental? When we implement this, we may end up with a clean implementation, or with a lot of technical debt. Does technical debt count as essential or as accidental?

Furthermore, during the implementation of this project, we choose some tooling to facilitate our development. We are probably using Git, we may want to use CI/CD, we may want to create some architecture documentation, we may want to write some detailed design documentation, etc. We may also have processes to follow that dictate which individuals should work on the project, which need to be informed, which need to review, and which must approve various deliverables during the lifetime of the project. Most of these seem to belong to the accidental part; still, there are plenty of such difficulties that we have to overcome in order to get the project complete.

Act 3: An unexpected event; war preparations

To make progress on the previous dilemma, let’s set a convention: the essential complexity of a problem consists of the complexities inherently associated with the problem, not with what different solutions may look like. If there are two solutions to the same problem, one less complex and one more complex, we say that the problem is no more complex than the less complex solution.

This is consistent with Kevlin’s perspective [Henney22b].

Furthermore, let’s assume that we can associate a value with the complexity of a problem and the corresponding solutions. If P is a problem and S1, S2, …, Sn are solutions to P, and C(Si) is the complexity associated with solution Si, then we can say that the complexity of the problem P is defined by:

C(P) = min C(Si)

That is, the complexity of the problem is the minimum possible complexity of all the solutions.

In this setup, we assume that the complexity function C(Si) is only a function of the code of solution Si, and does not relate to any difficulties in producing solution Si (i.e., processes, build system performance, etc.).

Furthermore, we want to distinguish between the difficulties associated with the code of solution Si, and the difficulties related to the tools and processes that were used to produce Si. We will mainly focus on the actual complexity of the code and ignore any complexity that is not directly visible in the code. Based on the data we have (mostly informal) the main difficulty is dealing with the code itself (for example, we spend more time reasoning about the code than using git to submit a patch).

We also discard any non-functional requirements and constraints that were not explicitly specified in the problem domain. For example, if the problem is just sort an array of elements, then sleep-sort and bogo-sort [Henney22a] are suitable solutions for the problem, and we consider them while evaluating the complexity of the problem.

Let’s say that each solution can be represented as a graph; for example, let’s say that the nodes are instructions, and the links are relationships between instructions. We count the complexity of the solution as being the sum of complexities associated with each node and each link. This is a relatively simplistic model, and not very precise, but it gives us a good approximation of what we need. It turns out we don’t need anything else to have basic reasoning about the complexity.

We can complicate this model by allowing the nodes of a graph to be formed by other graphs, not just by instructions. This way, we build a hierarchy of graphs for representing solutions.
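
As a rough illustration of this model (the types and names below are purely illustrative), a hierarchical graph and its complexity measure might be sketched as follows:

#include <memory>
#include <vector>

struct Graph;                         // a solution (or part of one): nodes plus links

struct Node {
  int own_complexity = 1;             // complexity of a plain instruction
  std::shared_ptr<Graph> sub_graph;   // set when the node is itself a nested graph
};

struct Link { int complexity = 1; };  // a relationship between two nodes

struct Graph {
  std::vector<Node> nodes;
  std::vector<Link> links;
};

int complexity(const Graph& g);       // forward declaration for the mutual recursion

int complexity(const Node& n) {
  return n.sub_graph ? complexity(*n.sub_graph) : n.own_complexity;
}

// The complexity of a solution: the sum of the complexities of all its nodes and
// links, descending recursively into nested graphs.
int complexity(const Graph& g) {
  int total = 0;
  for (const Node& n : g.nodes) total += complexity(n);
  for (const Link& l : g.links) total += l.complexity;
  return total;
}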

All the complexities of a solution that do not appear as complexities of the problem can be labeled as accidental. We don’t have (yet) a good measure for accidental complexity.

Reasoning on essential complexity

Act 4: An old ally

The reader may be familiar with an old ally of ours from an older episode named ‘Performance Considered Essential’ [Teodorescu22]. While arguing why performance is important for all (practical) software problems, we were helped by our friend: the_one_algorithm. This algorithm solves most practical problems by trying out all possible combinations of outputs (i.e., using backtracking) and selecting the one that matches the expected requirements. That is, we need the requirements of the problem to be encoded as tests. The algorithm tries every possible combination of output values and checks whether the output is a solution for the given input.

For a problem that doesn’t have explicit performance constraints, if we can find a set of tests that properly captures the requirements of the problem, the use of the_one_algorithm is a solution to our problem. Thus, we can use this powerful ally to launch an attack on the complexity value of a problem.

For most problems, the easiest way to derive the set of tests is to start analysing the requirements of the problem. We have functional requirements, non-functional requirements (quality attributes) and constraints. Usually, the functional requirements are the only ones that are explicit and directly associated with the problem; however, our construction works even if we make non-functional requirements explicit. And, as we said, we only care about explicit requirements when assessing a problem.

Thus, in most cases, to satisfy our ally, we would iterate over the list of explicit requirements and provide one or more tests for each. These tests can then be combined by conjunction into a single global test for a candidate solution to the problem.
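
As a sketch of this construction (the signatures and the finite candidate space below are simplifying assumptions made for illustration), the_one_algorithm and the conjunction of requirement tests could look like this:

#include <functional>
#include <optional>
#include <vector>

// A requirement test: does this candidate output satisfy requirement Ri for this input?
template <typename In, typename Out>
using Test = std::function<bool(const In&, const Out&)>;

template <typename In, typename Out>
std::optional<Out> the_one_algorithm(
    const In& input,
    const std::vector<Out>& candidates,         // the space of possible outputs to try
    const std::vector<Test<In, Out>>& tests) {  // T1, T2, ..., Tn
  for (const Out& candidate : candidates) {
    bool ok = true;
    for (const auto& test : tests)              // the conjunction of all the tests
      ok = ok && test(input, candidate);
    if (ok) return candidate;                   // first output that passes every test
  }
  return std::nullopt;                          // no output satisfies the requirements
}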

Act 5: The first complexity wars

Let us prepare our attempt at conquering essential complexity.

We have a problem P with a set of requirements R1, R2, … Rn. For this set of requirements, we come up with a set of tests T1, T2, … Tn. A test can be simple or more complex. For each test, we can define an underlying problem, which means that we can associate a complexity value with it. Let’s say the complexity values for the tests are C1, C2, … Cn.

Our the_one_algorithm ally also has a complexity. Let’s denote it by C0.

It is worth mentioning that the actual complexity values we associate don’t matter that much to our approach. For simplicity, we can define a basis, a fixed set of instructions/algorithms that all have complexity equal to 1. For complex operations that are composed of basis operations, we can calculate the complexity appropriately.

For example, it makes sense to associate a complexity value of 1 with our the_one_algorithm ally. We understand it well enough to reason about it; we don’t have to analyse its constituent parts each time we are interested in the complexity of a problem or solution.

Furthermore, to simplify things, we can always choose the tests T1, T2, … Tn to be independent of each other. That is, the complexity of the overall test will just be the sum of the complexities of the individual requirement tests.

Moreover, we assume that the sequence of tests T1, T2, … Tn is the simplest that we can find.

With this, for a problem P for which we find the solution of using the_one_algorithm with associated tests T1, T2, … Tn, we find that the complexity of the solution is:

C(solution) = C0 + ∑ C(Ti)

Thus, the complexity of the problem is

C(P) ≤ C0 + ∑ C(Ti)

In plain English: for every problem, the complexity of the problem is at most the complexity of our backtracking ally (which we set to one) plus the complexity of all the tests we need to fully specify the requirements.

Act 6: The peace treaty

For most problems, the complexity obtained in this way is probably smaller than the complexity obtained by analysing the algorithm itself. (Note to future self: this is not properly argued; the reader didn’t receive proper reasoning to support this statement.) Thus, we can approximately define the complexity of the problem as the complexity of the solution involving the_one_algorithm.

Even if the above statement is not true in all cases, we can still use it to compare two problems, even if the comparison is approximate. For example, we can compare the essential complexity of a problem defined as sort N elements with the problem defined as stable sort N elements. For the second problem, we have more requirements, thus more tests to be performed, so the complexity is greater:

C(P2) > C(P1)

Thus, for practical purposes, we will use the complexity of our solution involving the backtracking algorithm as an approximation of the essential complexity of the problem:

C(P) ∼ C0 + ∑ C(Ti)

And thus, we have a definition for the essential complexity of a problem, even if this is just an approximation.

Act 7: Aftermath; an example

Let’s say that we assign complexities of 1 to the following operations:

  • running our backtracking algorithm (the_one_algorithm)
  • accessing elements in an array (either input or output)
  • comparing two elements (of the same type) for equivalence or for ordering
  • comparing indices
  • using the existential or universal quantifier on one variable, with a predicate (predicate complexity is added separately)
  • implication operator →

Let’s say that we have P1 as sort N elements of an array, in place. This problem can be specified by the following tests (in the interest of space, these are sketches rather than fully formal statements):

  • T1: ∀ i ∈ [0,N), ∃ j ∈ [0,N), array_orig[i] == array_final[j] — in English: all the elements that were initially in the array are still present in the output array
  • T2: ∀ i ∈ [0,N), ∀ j ∈ [i+1,N), array_final[i] ≤ array_final[j] — in English: the elements in the final array are sorted

With the above rules, the complexities associated with the tests are C(T1) = 5 (one forall, one exists, two array accesses and one equality comparison) and C(T2) = 5 (two foralls, two array accesses and one ordering comparison).

That is, C(P1) = 1 + 5 + 5 = 11.
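
For illustration, the two tests could also be written as C++ predicates (a sketch; note that std::is_permutation is slightly stronger than T1 as stated above, since it also accounts for element multiplicity):

#include <algorithm>
#include <vector>

// T1: all the elements of the original array are still present in the result
bool t1_same_elements(const std::vector<int>& orig, const std::vector<int>& result) {
  return std::is_permutation(orig.begin(), orig.end(), result.begin(), result.end());
}

// T2: the elements of the result are sorted
bool t2_sorted(const std::vector<int>& result) {
  return std::is_sorted(result.begin(), result.end());
}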

Let’s now take P2 to mean search an element X in an array of N elements; if found, return its index (R), otherwise return NULL. We can have the following tests for this problem:

  • T1: R ≠ NULL → R ∈ [0,N) && array[R] == X
  • T2: R == NULL → ¬∃ i ∈ [0,N), array[i] == X

With the tests written this way, we can have C(P2) = 1 + 4 + 5 = 10.
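
Again purely for illustration, the two tests could be sketched as predicates, modelling the NULL result with std::optional:

#include <cstddef>
#include <optional>
#include <vector>

// T1: if an index is returned, it is in range and points at an element equal to X
bool t1_found_is_correct(const std::vector<int>& arr, int x, std::optional<std::size_t> r) {
  return !r.has_value() || (*r < arr.size() && arr[*r] == x);
}

// T2: if no index is returned, then X does not appear anywhere in the array
bool t2_not_found_means_absent(const std::vector<int>& arr, int x, std::optional<std::size_t> r) {
  if (r.has_value()) return true;
  for (int v : arr)
    if (v == x) return false;   // X is present, so returning NULL would be wrong
  return true;
}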

Act 8: Enjoying the victory

After a relatively long journey, we have managed to define a metric that can approximate essential complexity. Considering the fact that we started from not knowing what essential complexity is, I hope the reader agrees with me that this is a pretty good result.

This allows us to compare problems in terms of essential complexity, allowing us to say that one problem is more complex than another.

But there is another interesting change that we’ve achieved here. We managed to simplify our reasoning about the problem by transforming it from something that is inherently complex into a linear sequence of predicates. Instead of reasoning about a quadratic number of interactions, we can now apply a linear process for reasoning about the problem’s complexity.

Why quadratic? Well, on an inherently complex problem, one can assume that every part of the problem is connected to every other part of the problem. If the problem has N parts, then there may be N(N-1)/2 connections inside the problem.

It turns out that any transformation that enables a linear representation of the problem can simplify how we reason about that problem.¹

Reasoning on accidental complexity

Act 9: Dark clouds gather again in our minds

We’ve won the first battle, we’ve found a way to reason about essential complexity, but we haven’t won the war.

By construction, we’ve moved into essential complexity only what is inherently related to the problem, but all the practical things about various solutions have been left in the accidental complexity part.

If, for example, we have a concrete implementation of a sorting algorithm, we don’t have a good way of reasoning about it. Does the choice of programming paradigm, of programming language, or of the algorithm being used matter? Of course it does.

Unfortunately, accidental complexity radiates from essential complexity. Similar to how a black hole radiates light, essential complexity continuously generates accidental complexity. Particle pairs split at the edge of a black hole, one particle being pulled in and the other escaping as light coming from the direction of the black hole. Similarly, writing code to solve the essential parts of the problem always creates additional accidental difficulties.

For example, creating a function to solve a particular aspect of the problem always comes with naming it, with dividing the logic into two parts (what’s inside the function and what’s outside it), and with different types of coupling between those two parts. The name and the division are not inherent to the problem, so they are accidental complexity. Even the mere fact that we created a function introduces a new element into our program that we have to reason about (it provides benefits, but it always has costs too).
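
A toy illustration (the function and its name are invented for this example): extracting even a trivial helper solves nothing new about the problem, yet it introduces a name, a signature that couples the two halves of the logic, and one more element to reason about.

// The name itself is a new decision that is not part of the problem.
int clamp_to_range(int value, int low, int high) {
  if (value < low) return low;
  if (value > high) return high;
  return value;
}

// Call site: the logic is now split in two places, coupled through the signature.
int adjust_volume(int requested) { return clamp_to_range(requested, 0, 100); }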

In all software projects, there is always a dark force in the accidental complexity that we have to constantly face. And, as the main bottleneck is our brain, the only weapon we seem to have against it is by improving our ways of reasoning about the problem.

Act 10: Sorting it out

Let’s consider the problem of sorting, in place, an array of numbers. We’ve already shown a system in which the essential complexity of this problem equals 11. Let’s now consider the complexity of a sorting solution, namely insertion sort. Listing 1 shows a C++ implementation.

void insertion_sort(int arr[], int n) {
  for (int i = 1; i < n; i++) {
    int key = arr[i];
    int j = i - 1;
    // Move elements in arr[0..i-1] that are greater
    // than the key one step right
    while (j >= 0 && arr[j] > key) {
      arr[j + 1] = arr[j];
      j--;
    }
    // Put the element at the right position
    arr[j + 1] = key;
    // Postcondition: arr[0..i] is sorted
  }
}
Listing 1

To analyse the complexity of this algorithm, we use the metric introduced in ‘How We (Don’t) Reason About Code’ [Teodorescu21]: we count all the postconditions that we can infer by reading the code. To make things simpler, we don’t count the syntactical aspects of the code, and we don’t bother with the types and semantic information present in the code; we only reason about the possible values. Even here, we take some small shortcuts to keep things simple. The result is the reasoning complexity of the code: the number of postconditions we can infer from it.
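
Before tackling Listing 1, it may help to see the counting on a tiny, purely illustrative fragment (not part of the original analysis), assuming arr is an int array with at least one element:

int x = arr[0];     // postcondition 1: x holds the value of the first array element
if (x < 0) x = -x;  // postcondition 2: after this line, x >= 0
                    // postcondition 3: x is the absolute value of arr[0]

Three postconditions can be inferred, so the reasoning complexity of this fragment is 3. Applying the same process to the insertion sort in Listing 1 yields a much longer list.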

Here it is:

  1. i is always greater or equal to 1;
  2. i is always less than n in the body of the loop;
  3. key always has a value of an array element (arr[i]);
  4. j starts as i-1;
  5. j is never incremented;
  6. j is decremented each time the body of the while loop is run;
  7. j is always less than i;
  8. j is always greater or equal to 0 in the while body;
  9. in the while body, arr[j] is always a valid value in the range arr[0..i-1];
  10. in the while body, arr[j+1] is always a value in the range arr[1..i];
  11. if arr[j]>key then we move the element arr[j] one position right, overwriting the value we have there;
  12. while shifting the elements right in the body of the while loop, we are not losing the value of any element (considering that the value of arr[i] is stored in key);
  13. in the while body arr[j+1] is always a value in the range arr[0..i-1];
  14. at the end of the while loop, if j>=0 then arr[j] <= key;
  15. at the end of the while loop, arr[j+1] > key;
  16. at the end of the while loop, all the elements arr[j+1..i-1] (assuming j+1<=i-1) are moved one position to the right (keeping their order);
  17. if all the elements in range arr[0..i-1] are sorted at the start of the for loop, then at the end of the while loop, all elements in range arr[0..j] (assuming j>=0) are smaller than key;
  18. if all the elements in range arr[0..i-1] are sorted at the start of the for loop, then at the end of the while loop, all elements in range arr[j+1..i-1] (assuming j+1<=i-1) are greater than key;
  19. storing the value of key at arr[j+1] does not lose any value from the original array;
  20. if all the elements in range arr[0..i-1] are sorted at the start of the for loop, then at the end of the for loop, all the elements in range arr[0..i] would be sorted;
  21. at the end of the for loop, all the elements originally present in the input array will still be present in the array;
  22. at the end of the for loop, all the elements will be sorted.

In the end, the reasoning complexity of this sorting algorithm is 22, as we have 22 postconditions to complete our reasoning. The astute reader may remark that we went quickly over the items that required induction. A more in-depth analysis would probably yield a bigger complexity for our algorithm.

In the case of this reasoning complexity, we counted the number of postconditions that we can deduce from the code. Previously, when measuring the essential complexity, we counted the number of elements in the predicates that describe the problem. These are two slightly different approaches, but their core is the same: counting the number of reasoning units involved. Generalising, we can say that the two metrics are compatible.

While it’s not quite correct, we can compare the essential complexity value of 11 for the problem of sorting in place with the reasoning complexity of 22 for the insertion sort algorithm. This gives us an indication that the complexity of a solution is, in general, higher than the complexity of the problem. The difference is accidental complexity.

Please note that, for this example, the complexity of the solution is twice the essential complexity.

Act 11: Self-inflicted pain

It is unclear to me why this is the case, but it feels to me that software engineers are the most masochistic of all the engineers I know of. The amount of self-inflicted pain that software engineers cause is staggering. It sometimes feels greater than the number of problems they solve.

Bugs, technical debt, optimistic estimates, late projects, you name it. They all come with accidental complexity.

Starting from the assumption that all engineers want to avoid such pain, the problem lies somewhere between the actions that engineers take and the consequences of those actions. I’m trying to avoid going down the rabbit hole of a discussion on moral logic.

This disconnect is most probably generated by incomplete reasoning. If I take action A now, I must not fully realise that it leads to consequences C that are harmful to me.

This comes back to the ideas I touched on in the first part of this article. We need to get better at reasoning about different aspects of software engineering. If we do, then maybe we can figure out better strategies to reduce the amount of pain and accidental complexity we are left with.

Act 12: Linearising the problem

If we have a problem (or a sub-problem for that matter) that is complex to understand, then perhaps linearising the problem will make it easier for us to reason about it.

To make this clearer, let’s repeat the reasoning we used above when analysing essential complexity. Let’s assume that the problem has N parts (each of these parts potentially hiding more complexity). In a system with N parts, there can be up to N(N-1)/2 communication channels. That is a quadratic order of magnitude.

The more connections there are between the parts of the problem, the more complex the problem is for us, as it gets harder and harder to reason about it.

For example, if we have a 10-part problem, we have 45 connections between these parts. Thus, to fully reason about this problem, we need to keep track of 45 connections and 10 parts: 55 things in total. If, however, we can arrange the parts in a sequence, with each part related only to its adjacent parts, then there are only 9 communication channels: 19 things to keep track of in total. The difference between 55 and 19 is significant.

But even this isn’t our biggest difficulty. We may face a bigger challenge when trying to reason about the non-linearised problem. Studies show that we can only keep track of 7 things at once (plus/minus 2) [Miller56]. Thus, it becomes harder for our minds to reason when the number of elements grows over this threshold.

If the 10 parts of the problem are linearised, then to fully reason about one part, we need to consider that part and the two relations it has with its adjacent parts. That is, we need to keep track of 3 things.

But, if all the parts are connected to all the other parts, one can’t easily reason about any single part, because one must keep track of 10 different things at the same time (the part itself plus its 9 connections).

Linearisation is not always possible, but maybe we can group the parts and reduce the cognitive load of reasoning about them. For that, though, we need better reasoning strategies for breaking systems made of many parts into smaller systems. While there have been great advancements in this area, I feel that we still need more thought put into how to organise the parts of our systems.

Act 13: In search of the silver bullet

After exploring essential complexity and accidental complexity, let’s quickly try to address Brooks’ question: can we find a silver bullet that would increase our productivity by an order of magnitude?

Brooks put this problem in terms of difficulties. But, let’s reduce this question to complexity. That would be: can we find a way to reduce the complexity of our code by a factor of 10?

The reader should note that the difficulty of working with the code may not be directly proportional to our complexity measure. But, in the absence of a better measure, we can assume that the difficulty is linear in the complexity number. That is, under our assumption, code with twice the complexity value will be twice as difficult to work with.

We start from the idea that the complexity of some code cannot be smaller than the complexity of the problem we are trying to solve. We always have some accidental complexity. Mathematically, we have C(solution) = C(problem) + C(accidental).

To allow a ten-fold decrease, we would need C(solution) ≥ 10·C(problem), that is, C(accidental) ≥ 9·C(problem).

Moreover, we would need to be able to actually remove that much accidental complexity.

In our sorting example, the ratio between the complexity of the solution and the complexity of the problem was only 2. That is, for this problem, we cannot get a 10-fold improvement even if we could magically remove all the accidental complexity.

I would argue that, for most problems, we would get similar ratios. That is, the complexity of the solution isn’t 10 times bigger than the complexity of the problem.

On the other hand, there are many code bases weighed down by piles of technical debt. In those cases, one may be able to remove a lot of accidental complexity and get a 10× improvement in working with that codebase. But maybe those cases are just exceptions.

As mentioned above, besides the complexity of the code we are writing for the solution, there are also difficulties related to tools and processes that are not captured in the code. Taking a stance similar to Brooks, we assume that these difficulties are not significant in the grand scheme of things.²

So, with our set of assumptions, we can only confirm Brooks’ postulate:

There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.

Act 14: Epilogue

The war is not over, and it probably will never be. It just leaves deep scars on countless people, who willingly or unwillingly take part in the software engineering wars.

Problems will become more and more complex, and thus we need to be prepared to have more and more tools at our disposal to fight complexity, whether it’s essential or accidental.

By now, we know what the main challenge is. It’s not about the tools, the libraries and frameworks, or following the steps of a specific process, although all of these can help. It’s about utilising our brain in a more efficient manner. And, because we cannot rewire our brain, we need to change how we structure all the activities in software engineering to better fit the way our brain works.

In our long discussion we covered three things: reasoning in software engineering (the topic of part one, in the last issue), reasoning on essential complexity, and reasoning on accidental complexity. In part one, we argued that a certain type of philosophical reasoning is fundamental for software engineering, and we set ourselves on a track to explore different reasoning strategies, with the hope that we will have a better grasp on software engineering. In the second part of this article (this issue), we approached essential and accidental complexity. While we were able to provide a framework for reasoning about essential complexity, we soon realised that this framework doesn’t directly help that much in practice. We started our exploration of accidental complexity; however, things are far muddier here, as we are dealing with almost all the aspects of software engineering. Instead of providing a semiformal description of accidental complexity, we started to reason on some aspects that seem to be of great importance. While by no means complete, we believe that the discussion covers some important aspects of accidental complexity.

After our analysis, we tried to answer whether there can be a silver bullet that would reduce the difficulties of programming by a factor of 10. With a series of assumptions, we concluded that this is probably not the case. Even if accidental complexity appears to constitute a large part of what we need to solve in software engineering, it does not account for 9/10 of the complexity of our projects. And, even if it did, the cost of reasoning about the different parts of the solution remains high, so we cannot hope to improve our productivity dramatically. Especially since we can’t rewire our brains.

Thus, once again, if we cannot rewire our brains, the only hope we have is to get better at reasoning on different aspects of our solution, and on different aspects of software engineering, and maybe trick our brain into being more productive.

References

[Brooks95] Frederick P. Brooks Jr., The Mythical Man-Month (anniversary edition), Addison-Wesley Longman Publishing, 1995.

[Henney22a] Kevlin Henney, ‘The Most Bogus Sort’, 2022, https://kevlinhenney.medium.com/the-most-bogus-sort-3879e2e98e67

[Henney22b] Kevlin Henney, ‘Why It Can Be Hard to Test’, 2022, https://www.youtube.com/watch?v=aqkvapSSrKg

[Miller56] George A. Miller, ‘The magical number seven, plus or minus two: Some limits on our capacity for processing information’, Psychological Review, 63(2), 1956, http://psychclassics.yorku.ca/Miller/

[Seemann19] Mark Seemann, ‘Yes silver bullet’, 2019, https://blog.ploeh.dk/2019/07/01/yes-silver-bullet/

[Teodorescu21] Lucian Radu Teodorescu, ‘How We (Don’t) Reason About Code’, Overload 163, June 2021, https://accu.org/journals/overload/29/163/overload163.pdf#page=13

[Teodorescu22] Lucian Radu Teodorescu, ‘Performance Considered Essential’, Overload 169, June 2022, https://accu.org/journals/overload/30/169/overload169.pdf#page=6

Footnotes

  1. Please note that the problem is still as complex as before. In our case, the difficulty of the problem moved into the process of generating good tests for the problem. The process of generating a linear sequence of tests is not necessarily linear. But, once we have that transformation done, it’s much easier to reason about the problem.
  2. This is different to the point that Mark Seemann argues in his ‘Yes silver bullet’ article [Seemann19].

Lucian Radu Teodorescu has a PhD in programming languages and is a Staff Engineer at Garmin. He likes challenges; and understanding the essence of things (if there is one) constitutes the biggest challenge of all. You can contact him at lucteo@lucteo.ro





