The Collaboration Commission – Part 2

I began this journey focused on the monkey management problem because it had the most direct impact on my burnout. Taking on too many monkeys and becoming overwhelmed with the work caused my breaking point. However, mismanagement of monkeys was not the root cause of the problems.

One potential root could be the responsibility virus and the fact that I would regularly retreat into the comfort zone of expert engineer while failing to learn how to coach and mentor a team or manage team dynamics.

Another potential root was my failure to properly hold individuals accountable. We work in two-week sprints. In several months, there has not been a single sprint where all work was completed on time. This is despite going through more than a couple of different methodologies for estimating our work and loading our sprints.

A quick tangent: I found that under-loading a sprint actually resulted in the team delivering fewer units of work than they did when much more was assigned to them.

But I digress…

As I continue this journey I intend to analyze and evaluate each of those potential causes.

Challenges with consensus

Before I found myself in charge of a multi-million dollar project, I was in charge of a small number of data architects. In those early days of my role, I noticed one major problem: the team was producing designs that followed completely different schools of thought, yet they were all getting wedged into a single system. This made the system inconsistent and challenging to work with.

The team had put a lot of effort into coming together and making decisions collaboratively but had consistently struggled. The external perspective was that the mindsets on the team were too far apart, and each member had enough strength and stubbornness to teach a statue some lessons.

As a facilitator of many of these discussions, I quickly became displeased with even attempting to find consensus. I began learning that having a single lead architect on any product or project was important so that they could instill a certain degree of consistency, even when other architects were contributing to the design.

As mentioned in previous posts, I have read The Responsibility Virus by Roger Martin. In that book, I found the words for what I felt but had great difficulty expressing. According to Martin, this is the darker side of consensus.

  1. No consensus: you simply waste time because the parties involved stubbornly refuse to come to consensus, and you take the penalty of whatever delay was caused.
  2. Bad consensus: you make a bad decision because the group makes ineffective use of all of its data and perspectives. The responsibility virus has caused groupthink, where social pressures lead to selective collusion.
  3. False consensus: consensus appears to have been reached, but the under-responsible members of the group simply have not expressed their opposing views. They never bought into the solution, and their laziness or under-commitment to the plan ultimately undermines it.
  4. Weak consensus: a consensus is reached but without much commitment or ownership. These solutions tend to get revised or thrown away at the first sign of a challenge. This can occur when a decision is time-boxed or demanded by a certain date.

My early team, and my current project team, have suffered from all four of these. One of my biggest frustrations is #4, and it is the most common problem I have encountered. I cannot count how many times I wanted to scream into a pillow: “DO IT ONCE BEFORE YOU COMPLAIN ABOUT HOW HARD IT IS TO DO! JUST ONCE! GIVE IT ONE WEEK!”

I do not have many solutions to this yet. However, just understanding these problems lets me recognize that consensus can be good; I simply need to build or find strategies for avoiding these four pitfalls.

Conflicting ladders of inference

One of the challenges when seeking consensus is when decisions are not or cannot be purely fact based. This is even harder when the decision comes down to a weighing of pros and cons because everyone brings their own scale to the table and none of them are calibrated the same.

This turns what should be decisions based on fact and empirical evidence into judgement calls. Maybe eight of the ten options were ruled out by fact, but the finalists end up becoming a personal choice and therefore an argument.

This can occur because of conflicting ladders of inference: two people layer inferences on top of the same data but come to conflicting conclusions. The biggest problems here are a lack of transparency and a failure to reassess the big picture.


When I look at a set of facts and begin layering inferences, I walk down a path which will lead to my opinion on a particular choice. You will do the same and we will debate our conclusions. To each of us our conclusion is rational and possibly even indisputable.

This is where enumerating assumptions becomes particularly valuable. By enumerating our assumptions, we can begin to reverse engineer our own ladders of inference and expose the logical steps we took at each rung.

The goal is for one of us, or both, to revise our assumptions and/or conclusions as the “facts” we declared are proven false. This also has the added benefit of helping us calibrate our scales for weighing pain points so that they are closer to each other’s.


Often I find that a good, but not perfect, solution is presented early in a collaborative discussion. Then the group calls out suggestions or pain points, and the solution molds like clay that has not yet met the kiln.

Incremental evolution of a solution is at serious risk of cascading criteria which become stale before anyone even leaves the meeting.

  1. You shift the plan to 1.0.1 because of pain point A.
  2. You enhance the plan to 1.0.2 because of benefit Z.
  3. Version 1.0.2 makes pain point A non-existent.
  4. Repeat steps 1-3 a few times.
  5. You end with version 1.2.13.

Now that you have designed solution 1.2.13, you walk out of the meeting with consensus and a design to move forward with. However, the solution shifted several times, and no one ever reassessed version 1.2.13’s validity against the original requirements and assumptions.

I have seen this failure to reassess the acceptance criteria and requirements cause many smart groups of people to implement processes that cannot stand up to scrutiny at time of delivery. Suddenly they realize that a mandatory requirement was missed or that the entire solution became far more complicated than it needed to be.
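The missing reassessment step is easy to sketch in code. The following is a minimal, hypothetical Python sketch of the versioning loop above; the requirement names, the `revise` helper, and the numbers are all illustrative assumptions, not details from any real project.

```python
# Hypothetical sketch: re-validate an incrementally evolved solution
# against the ORIGINAL requirements before leaving the meeting.
# All names and numbers here are illustrative.

# Requirements everyone agreed on at the start (name -> pass/fail check).
original_requirements = {
    "handles pain point A": lambda s: "A" not in s["open_pain_points"],
    "stays within budget": lambda s: s["cost"] <= 100,
}

# Version 1.0.0 of the plan.
solution = {"open_pain_points": {"A"}, "cost": 80}

def revise(s, pain_point=None, cost_delta=0):
    """One increment of the 'mold the clay' loop: address a pain
    point and/or absorb the cost of an enhancement."""
    s = dict(s)
    if pain_point:
        s["open_pain_points"] = s["open_pain_points"] - {pain_point}
    s["cost"] = s["cost"] + cost_delta
    return s

# A few rounds of incremental evolution during the meeting...
solution = revise(solution, pain_point="A", cost_delta=30)  # -> 1.0.1
solution = revise(solution, cost_delta=10)  # enhancement for benefit Z

# The step groups usually skip: reassess the final version against
# the requirements agreed on at the start.
failed = [name for name, check in original_requirements.items()
          if not check(solution)]
print(failed)  # the original requirements the evolved design now violates
```

In this toy run, each revision was locally reasonable, yet the final design quietly blew the budget requirement, which is exactly the kind of drift a final reassessment catches.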

Choice structuring framework

The choice structuring framework is another direct pull from the helpful Responsibility Virus text. It is designed to help us have a collaborative conversation that complies with the governing values of any social interaction:

  • To win and not lose in any interaction.
  • To always maintain control of the situation at hand.
  • To avoid embarrassment of any kind.
  • To stay rational throughout.

A brief explanation of the framework is that you need to:

  1. Frame the choice
  2. Brainstorm options
  3. Express conditions which would need to be true in order for the option to remain valid
  4. Identify barriers
  5. Perform tests of the conditions and barriers which are open to public scrutiny
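The five steps above can be sketched as a small data structure. This is my own illustrative Python sketch, not Martin's formulation; the option names, conditions, and barriers are hypothetical, and the pass/fail lambdas stand in for the real-world public tests.

```python
# Hypothetical sketch of the choice structuring framework as data.
# Every name below is illustrative; real tests would be experiments
# or research, not hard-coded booleans.
from dataclasses import dataclass, field

@dataclass
class Option:
    name: str
    conditions: dict = field(default_factory=dict)  # condition -> public test
    barriers: dict = field(default_factory=dict)    # barrier -> public test

    def survives(self):
        # An option stays valid only if every condition holds and every
        # barrier can be overcome; each test is open to group scrutiny.
        return (all(test() for test in self.conditions.values())
                and all(test() for test in self.barriers.values()))

# 1. Frame the choice.
choice = "Which database should back the new service?"

# 2. Brainstorm options; 3. express conditions; 4. identify barriers.
options = [
    Option("managed SQL",
           conditions={"team has SQL skills": lambda: True},
           barriers={"budget approval obtainable": lambda: True}),
    Option("self-hosted NoSQL",
           conditions={"ops capacity exists": lambda: False}),
]

# 5. Run the tests publicly; the option, not its author, is eliminated.
surviving = [o.name for o in options if o.survives()]
print(surviving)
```

The point of the structure is that elimination is mechanical: "self-hosted NoSQL" drops out because a stated condition failed a public test, not because anyone lost an argument.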

The intent here is to disconnect each contributor from the solutions or options that they present and to make fact-based decisions with full transparency regarding how we assess each positive or negative factor.

  • By disconnecting the choice from the individual, the individual cannot lose. Any option removed from consideration was never their option; it was simply an option.
  • By disconnecting the choice from the individual, no one will be embarrassed, because any negativity is directed at a non-sentient entity, not at them.
  • By identifying conditions, barriers, and scientific tests, everyone can stay rational.
  • By making sure that all tests are public and open to group scrutiny, everyone is able to maintain control and never feel as though some piece of the choice is out of their influence.

In the wild

Since learning this process, I have had two meetings with defined (framed) choices where I followed this structured process. Overall it was an extremely positive experience.

The first note is that this takes a fair bit of time. In one case, relatively simple choices took an hour and a half to work through with a cooperative group of six people. These choices were small enough that I believe the hour and a half was about three times longer than they should have taken.

I find that this process’s benefit grows with the size of the decision; large strategy decisions would see much greater gains than smaller-scale ones.

With that being said, I have never seen a greater degree of buy-in and ownership come out of any of our previous meetings. One peer manager, who did not know I was conducting this experiment, immediately came to my desk and complimented me on the success of the meeting.

I feel that we achieved true consensus and I look forward to seeing the team embrace our decisions in a very meaningful way.

Series index

  1. The Collaboration Commission – Part 1
  2. The Collaboration Commission – Part 2
  3. The Collaboration Commission – Part 3
  4. The Collaboration Commission – Part 4
  5. The Collaboration Commission – Part 5





