Who knows more about risk?

The Wisdom of Crowds

In important matters of uncertainty involving predictions, we need to hear from a diverse group of people who have different perspectives, and perhaps a stake in the decision that causes them to think before offering their views.

Collectively, the diverse group is smarter than any of its individual members. This is the core message of The Wisdom of Crowds by James Surowiecki.

Surowiecki provides a framework, with examples, for when and how the knowledge of the crowd can improve decisions.

This idea applies well to regulation. Why? Because regulators are generally a step or two removed from the uncertainties they need to know about. First-hand knowledge of these risks resides with the regulated parties.

Moreover, the sources of uncertainty in most jurisdictions are dispersed and mostly invisible until they are brought to the regulator's attention. The collective knowledge of constituents (regulatees and the individuals affected by what regulatees do) is far greater than the regulator's. Effective new methods for capturing this knowledge will improve regulatory oversight.
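The statistical core of Surowiecki's argument can be shown in a minimal simulation: when many diverse, independent estimates are averaged, their individual errors tend to cancel, so the crowd's estimate beats the typical individual's. The scenario below (a jellybean-jar guessing game, with invented numbers) is illustrative only.

```python
import random

random.seed(42)

TRUE_VALUE = 1000  # e.g., the actual number of jellybeans in a jar

# Each individual's guess is noisy in a different direction,
# modeling diverse, independent perspectives.
guesses = [TRUE_VALUE + random.gauss(0, 300) for _ in range(500)]

# The crowd's estimate is the simple average of all guesses.
crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_VALUE)

# The typical individual's error, for comparison.
avg_individual_error = sum(abs(g - TRUE_VALUE) for g in guesses) / len(guesses)

print(f"crowd error: {crowd_error:.1f}")
print(f"average individual error: {avg_individual_error:.1f}")
```

The averaging step is the whole trick: independent errors shrink roughly with the square root of the group size, which is why the crowd's error comes out far smaller than the average individual's.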



Faster, more accurate decisions

DNA model
Many years ago, we would never have thought that code determines who we are. Now we understand that DNA is the ultimate example of structured information.
Thin slices of related information paint a composite picture

The way the human brain serves up the right information in a split second is truly inspiring. How do you explain the now-retired Magic Johnson’s court sense and ability to float through a seemingly impossible line of guards and then sink a basket, making it look like poetry in motion? How do you account for Wayne Gretzky’s renowned ability to anticipate where the puck will be at a critical moment?

Malcolm Gladwell, in Blink, uses examples like these to illustrate the brain’s capability to process thin slices of related information and make fast, accurate decisions. “Blink” is Gladwell’s metaphor for split-second decision making.

This capability to thin slice accurately is based on an acute understanding of what’s important and what’s not. The composite picture that the thin slice represents may include seemingly trivial information and exclude other information that just doesn’t fit. It’s the right bits of information, and the relationships between them, that matter.

Applied to regulatory organizations, and in particular to the development of risk profiles, this concept can be very effective.

Do we need to authenticate the source, or what the source reports?

I spoke today at a conference in Ottawa focused on stakeholder consultation. A participant in one of the sessions raised the question of how to authenticate and validate comments received via an online forum, wiki, or blog. If the comments are to be used to inform policy, knowing the author of a comment is clearly important: if one individual or group could represent themselves as many, the consultation process would be suspect.
This raises the question of the purpose of the consultation. In risk management, we consult to understand matters that would otherwise be invisible: to see more clearly what unexpected events could happen, how they could happen, and whether they would advance or retard our goals.

The authenticity of the individual who helps you see or understand more clearly may not be important. However, the truth of what they tell you is. So we need to validate the information; it’s less likely that we need to validate the reporter. Indeed, knowing less about the reporter makes it less likely that we will be influenced by things that shouldn’t influence us, but easily can. For more on this point about risk and perception, read this blog about Joshua Bell.

Conversations create knowledge and help manage risks

Regulations capture knowledge at a point in time

Intensive research and analysis contribute to the substantial body of knowledge developed when creating or updating regulation. It involves ideas, trade-offs, drafting and revising documents, for months or, more likely, years. The resulting legislation, regulations, policies, standards and programs capture all of this distilled knowledge at a point in time. But this is just the beginning …

Risks evolve faster than rules

Issues shift and risks change. Looking forward, there are endless uncertainties about the future. It doesn’t help that rules are static.

Nanotech, the new industrial revolution: how will regulators respond?

Nanotech is shaping up to be a major, fast-moving regulatory challenge that will have an impact not only on manufacturers and developers of products but on those who regulate professions and occupations that will use these nano-enhanced products.

Ever since the discovery of atoms, scientists have wanted to manipulate them. Nanotechnology takes that ability to a new plane, using techniques that manipulate substances at the atomic and molecular level to make structures in the nanometer (nm) range (a billionth of a metre, or about 1/80,000 the width of a human hair).

Working at this scale allows scientists to “tune” material properties and make materials behave differently from normal, large-scale solids. For example, the carbon in pencils is soft and malleable, but at the nano scale carbon can be as hard as steel.

Why do we use random audits?

Dart board

Random audits and inspections have a long history in regulation. However, given the pressure on regulators today to stop bad things from happening, is it time to question this practice? Why are random audits used? The apparent reasons include:

  • Efficiency: auditing every situation is not possible
  • The threat of an audit keeps everyone honest
  • They’re fair; no prejudice determines who gets audited

Are these reasons really valid? Let’s consider them.
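The efficiency and fairness rationales both come down to uniform random selection: only a manageable number of audits are performed, yet every regulatee faces the same probability of being picked, so no one can predict or influence who gets audited. A minimal sketch (firm names, population size, and sample size are invented for illustration):

```python
import random

def select_audits(population, k, seed=None):
    """Select k audit targets uniformly at random, without replacement.

    Every member of the population has the same chance of selection,
    so no prejudice determines who gets audited, and only k audits
    need to be performed rather than one per member.
    """
    rng = random.Random(seed)  # seeded here only for reproducibility
    return rng.sample(population, k)

regulatees = [f"firm-{i:03d}" for i in range(200)]
audited = select_audits(regulatees, k=10, seed=7)
print(audited)
```

Note that uniform sampling is what makes the deterrence argument work: because selection is unpredictable, every regulatee must behave as though it could be next.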

Risk tools help assess terrorist threat


New Jersey Port
New York harbor has become a scarier place since 9/11, with the emergent threat that Al Qaeda terrorists could use the port to smuggle in a nuclear weapon in a container, or sink a ship in the harbor and cut off fuel supplies to the eastern seaboard.

The Harbor Commission’s response is a case study in the use of risk assessment tools to identify and analyze risks in logical, systematic ways. These tools complement rather than replace the experience of seasoned inspectors and investigators, helping regulatory staff differentiate between shipments that seem to deserve equal attention but do not.


William Finnegan, in a recent article in The New Yorker, outlines the Harbor Commission’s response to the terrorist threat and illustrates some of the benefits of using risk assessment tools, which include:

  • Faster decision making
  • More clarity and logic to support decisions
  • Better communication about decisions
  • Improved consistency and fairness
  • Improved use of investigator resources (costly investigations can be targeted at the most serious threats)
  • Improved use of information management resources (determining what information needs to be collected and reported for effective decision making)


“New York harbor is vast, antiquated, Mob-ridden and notoriously hard to police. With 5,300 foreign boats docking there each year, how easily could Al Qaeda buy its way in?” Finnegan asks.

Prior to 9/11, threats originated with the Mob. In risk terms, these threats were frequent but of relatively low consequence. The Mob wanted to perpetuate its business, and to do so, most of its activities had to fly below the line of risk tolerance: if the negative consequences of its activities became too great, law enforcement would step in and “business as usual” would become more difficult.

After 9/11, with the threat of terrorism, the Commission recognized that it needed a risk strategy for a low-probability, high-consequence event: a threat that must be mitigated because the consequence is intolerable.
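The distinction the Commission drew can be made concrete with toy numbers (all invented for illustration). Two threats can carry the same expected loss per year, yet demand different strategies: a catastrophic consequence is intolerable regardless of how improbable it is, so it must be mitigated rather than merely managed by expected loss.

```python
# Two illustrative threats, deliberately chosen to have equal expected loss:
mob_threat = {"probability": 0.5, "consequence": 2}             # frequent, low consequence
terror_threat = {"probability": 0.00001, "consequence": 100000}  # rare, catastrophic

def expected_loss(threat):
    """Classic expected value: probability times consequence."""
    return threat["probability"] * threat["consequence"]

def mitigation_strategy(threat, tolerance=1000):
    """A consequence above tolerance is intolerable: mitigate it
    no matter how unlikely it is. Otherwise manage by expected loss."""
    if threat["consequence"] > tolerance:
        return "mitigate regardless of probability"
    return "manage by expected loss"

for name, threat in [("mob", mob_threat), ("terror", terror_threat)]:
    print(name, expected_loss(threat), mitigation_strategy(threat))
```

The point of the sketch is that expected loss alone cannot distinguish the two threats; the consequence-tolerance threshold is what forces the terrorism case into a different strategy.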


The Commission chose a risk assessment tool developed by the National Targeting Centre in Virginia. It’s a tool with 3,000 rules for evaluating data. Finnegan quotes a former port official, “If somebody wants to try something, they have to be very careful to avoid any historical anomaly. We have 20 years of entry data. We know how much a hundred bales of cotton should weigh. Our targeting system will tell us if this is the eight thousandth time we’ve seen this commodity from this importer from this port – but this time there is something different about it. Each rule in the targeting system has a derogatory weight to it. So if this is an unusual commodity to be coming from – paint from Syria, say – then that’s points. Points pile up to trigger an exam [before the ship gets to the harbor.]”
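The mechanism Finnegan’s source describes, rules with “derogatory weights” whose points pile up to a threshold that triggers an exam, can be sketched as a simple additive scoring engine. Everything below is hypothetical: the real system reportedly has some 3,000 rules, and these three rules, their weights, and the threshold are invented for illustration.

```python
# Hypothetical rules: (description, predicate, derogatory weight).
RULES = [
    ("unusual commodity for origin",
     lambda s: s["commodity"] not in s["typical_for_origin"], 40),
    ("weight anomaly vs. historical data",
     lambda s: abs(s["declared_weight"] - s["historical_weight"])
               / s["historical_weight"] > 0.10, 35),
    ("first shipment from this importer",
     lambda s: s["prior_shipments"] == 0, 25),
]

EXAM_THRESHOLD = 60  # accumulated points that trigger a physical exam

def score_shipment(shipment):
    """Apply each rule; matching rules add their derogatory weight.
    Returns (total points, whether an exam is triggered)."""
    points = sum(weight for _, test, weight in RULES if test(shipment))
    return points, points >= EXAM_THRESHOLD

# Example: paint from a port that historically ships cotton and olive oil.
shipment = {
    "commodity": "paint",
    "typical_for_origin": {"cotton", "olive oil"},
    "declared_weight": 9000,
    "historical_weight": 10000,
    "prior_shipments": 0,
}
points, exam = score_shipment(shipment)
print(points, exam)  # unusual commodity (40) + first shipment (25) = 65, exam triggered
```

The additive design matters: no single anomaly need be damning on its own, but several mildly suspicious signals accumulate, which is exactly the “points pile up to trigger an exam” behavior the former port official describes.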

Prior to the implementation of this risk assessment tool, when the Commission was dealing with frequent but low-consequence threats, local officers inspected ships and containers in port and made a call based on their experience and observation. Now everything goes through the targeting system, and the Harbor Commission is able to direct its inspection and enforcement efforts in a way that corresponds with the priorities of a changed world.

Finnegan’s article appears in the June 2006 issue of The New Yorker. If you are a regulator who manages risk, you may find it interesting reading.