Did you ever have to finally decide?

It is decision time! Our artificial intelligence system has gone halfway through the OODA loop. Whether it is a living system or a computer-based system, it has observed the environment, evaluated the situation, and oriented itself in the decision space. And now it is time to settle on a plan, a decision.

It is decision time for the AI. The facts have been gathered, the landscape has been explored. Now it is time to choose a plan or behavior.

The decision process is one of comparing the world as it is to the world as we want it to be and selecting actions or behaviors that will bring the system closer to the desired state.
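
To make that comparison concrete, here is a minimal sketch in Python. The single-number "world," the actions, and their effects are all hypothetical, chosen only to illustrate the idea of closing the gap between the current state and the desired state:

```python
# A minimal sketch of goal-directed action selection. The "world" is
# reduced to a single number here; a real system would use a far richer
# state, and these actions and values are purely hypothetical.

current_state = 10      # the world as it is
desired_state = 42      # the world as we want it to be

# Each candidate action is modeled by the change it makes to the state.
actions = {"wait": 0, "small_step": 5, "big_step": 20}

def gap_after(effect):
    """How far from the desired state would we be after this action?"""
    return abs(desired_state - (current_state + effect))

# Choose the action that brings the system closest to the desired state.
best_action = min(actions, key=lambda a: gap_after(actions[a]))
print(best_action)  # -> "big_step"
```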

Maybe you are one of the people who look at every aspect of a situation before you make up your mind. You like lists and spreadsheets. You lay out plans with contingencies – if I do this, then this might happen, so then I’ll do this, and… This is one way that people make decisions. But there is another. Maybe you are one of the people who look at the situation and let your subconscious take over. No detailed lists for you. You just get a handle on the whole situation, and the right course of action springs full-born from your brow.

These are the two main approaches to decision making used by researchers in artificial intelligence. One is a symbolic, deliberative approach; the other is a non-symbolic, pattern-based approach.

Let’s use playing chess as an example. For decades, chess was the defining test for artificial intelligence. The idea was that chess is hard, chess takes complex analysis, and only really intelligent people play chess well. When a computer could defeat a chess grandmaster – that would demonstrate artificial intelligence. Of course, you can now buy a computer chess program that will give the best chess players a run for their money, for under a hundred dollars.

Deliberative Approach

The deliberative approach to decision making is based on a model of looking at every possible outcome, one step at a time, and eliminating the ones that don’t work well. This is the “If I do this, then you’ll do that, then I can do…” approach. It requires that the goals and the current situation be represented in some symbolic manner, and that the system explore all the possible outcomes of all the possible actions. The big drawback is that even a simple situation may have millions or billions of possible action sequences. So it requires both large computers and lots of time to explore the possible outcomes. This has worked in chess – IBM’s Deep Blue played against Garry Kasparov and won a six-game match in 1997. But Deep Blue was an immense supercomputer with dedicated hardware designed to play chess, and only chess.
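
The classic textbook version of this step-by-step look-ahead is the minimax algorithm. Here is a minimal sketch, illustrated on a toy take-away game rather than chess (a generic illustration of the technique, not Deep Blue’s actual method):

```python
# A toy minimax search on a simple take-away game: counters on the
# table, players alternately remove 1-3, and whoever takes the last
# counter wins. Chess engines use the same idea with far deeper search,
# pruning, and chess-specific evaluation.

def minimax(counters, maximizing):
    if counters == 0:
        # The player who just moved took the last counter and won.
        return -1 if maximizing else +1
    scores = [minimax(counters - take, not maximizing)
              for take in (1, 2, 3) if take <= counters]
    # Our turn: pick the best outcome; opponent's turn: assume the worst.
    return max(scores) if maximizing else min(scores)

def best_move(counters):
    """Explore every possible future and pick the move with the best score."""
    options = [(take, minimax(counters - take, maximizing=False))
               for take in (1, 2, 3) if take <= counters]
    return max(options, key=lambda o: o[1])[0]

print(best_move(10))  # -> 2: leave 8, a losing position for the opponent
```

Even this toy version shows the drawback: the search visits every reachable position, so the work grows exponentially with the depth of the look-ahead.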

Can you imagine trying to do this in order to successfully drive a car in rush hour traffic? You can decide what route to take with this approach (there’s construction on Broadway, so I’ll cut over to …), but not how to avoid the guy veering into your lane. For that you need a different approach to AI.

Patterns and reactions

We talked about an alternate decision making process that some people use – the non-verbal, subconscious approach. There is an equivalent for artificial intelligence systems – pattern-based and reactive decision making. These are the neural nets and fuzzy logic systems. Rather than working through possibilities step by step, they look for patterns in the situation, and then apply the solution that has worked in similar situations in the past.

This is similar to what a driver does when they see brake lights in the traffic ahead. They don’t need to analyze every possible interaction of the cars on the road; they see a pattern (brake lights) and reuse the solution (I should slow down). This gives the driver the ability to react to rapidly changing traffic conditions and achieve their goal (arriving safely at home).
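
In its simplest form, a reactive system is just a lookup from recognized patterns to stored responses. In the sketch below, exact string keys stand in for pattern recognition purely to keep the example tiny; a real system would match the patterns with a trained neural net or fuzzy rules:

```python
# A minimal sketch of reactive, pattern-based decision making. The
# string keys are placeholders for real pattern recognition (a trained
# classifier or fuzzy rules); the reactions are hypothetical.

REACTIONS = {
    "brake_lights_ahead": "slow_down",
    "clear_road": "maintain_speed",
    "car_veering_into_lane": "brake_and_evade",
}

def react(observed_pattern):
    # No step-by-step search: just reuse the response that worked before.
    return REACTIONS.get(observed_pattern, "slow_down")  # default to caution

print(react("brake_lights_ahead"))  # -> slow_down
```

There is no look-ahead at all – recognition and response happen in a single step, which is what makes this approach fast enough for rush hour traffic.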

The same thing is true of many chess players (both computer and human) – they build up libraries of how to open a game or how to finish the endgame. And for many chess grandmasters, this is true in the middle of the game as well. Often, when asked “How many moves do you look ahead?” they say “None, I just get a sense of the right move, and decide to take that move.”

This is from a Time Magazine interview with top chess player Magnus Carlsen:

Your coach, former world champion Garry Kasparov, says your strength is not calculation, but rather your ability to intuit the right moves, even if their ultimate purpose is not clear. Is that right?
I’m good at sensing the nature of the position and where I should put my pieces. You have to choose the move that feels right sometimes; that’s what intuition is. It’s very hard to explain.

So, in the end, this step of the OODA loop really does come down to simply deciding what to do next.

Next up: Caught in the Act!


Artificial Intelligence on the Orient Express

The Orient step of the OODA Loop. Once you know what is out there (Observe) you can figure out what it means.

As we continue through the OODA loop, we arrive at the Orient step (for a quick review, see “What the heck is an OODA Loop?“). The previous step was “Observe,” where the intelligent agent (human or otherwise) learns what things in the world around it can influence its choice of appropriate behavior.

So, having completed the Observe step, our agent knows what it has around it, and it can start to explore what the impact of those things will be. As an example, think of an artificial intelligence that is trying to increase the ad revenue on a web site. In this example the context is not visual; it is the competitive landscape. This landscape might be made up of:

  • the nature of the queries that brought people to the site,
  • the type of referring site,
  • the demographics of those visitors,
  • the semantics of the page that they landed on,
  • current trends on Twitter and Facebook, and
  • other business drivers.

In addition, the A.I. has access to its own internal data – what ads have been successful, what the trends are.
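
One way to picture this landscape is as a simple bundle of signals gathered from outside and inside the system. Every field name and value in the sketch below is a hypothetical placeholder, not a real ad-serving API:

```python
# A hypothetical sketch of the landscape an ad-serving A.I. might
# assemble. Every field name and value is an illustrative placeholder.

context = {
    "query_terms": ["warehouse", "insurance", "downtown"],
    "referrer_type": "news_site",
    "visitor_demographics": {"age_band": "35-54", "region": "metro"},
    "page_topics": ["commercial real estate"],
    "social_trends": ["#warehousefire", "stocks rally"],
    "business_drivers": {"quarter_end": True},
    # The A.I.'s own internal data: past ad performance (click-through rates).
    "ad_history": {"insurance_review": 0.031, "real_estate": 0.012},
}
```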

Warehouse fire in downtown. Could this affect the response to insurance ads? What about real estate?

The Orient step can be thought of as building a mental model of all the salient factors that will influence the decision to be made. Is the rising stock market a trend? If so, maybe ads related to investments would be a good choice to display to the visitor.

Is a big fire downtown in the news? Maybe serve up those insurance review ads. Or will it discourage people from thinking about buying a warehouse, so the normal real estate ads should be dropped in favor of something else? Or is it possible that the combination of perceived real estate risk and the rising stock market will cause people to think about selling their warehouse and investing in stocks?
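
Continuing the sketch above, orienting could be rendered as a handful of rules that fold those signals into a model of which ad categories look promising. The rules and weights here are invented purely for illustration:

```python
# A hypothetical sketch of orienting: folding the context signals above
# into a model of which ad categories look promising. The rules and
# weights are invented for illustration only.

def orient(context):
    scores = {"insurance_review": 0.0, "real_estate": 0.0, "investments": 0.0}
    if "#warehousefire" in context["social_trends"]:
        scores["insurance_review"] += 1.0  # fire in the news: insurance is salient
        scores["real_estate"] -= 0.5       # ...but buying a warehouse looks risky
    if "stocks rally" in context["social_trends"]:
        scores["investments"] += 1.0       # rising market: investment ads fit
    # Fold in the system's own history of what has worked.
    for ad, ctr in context["ad_history"].items():
        if ad in scores:
            scores[ad] += 10 * ctr
    return scores  # a model of the landscape – the decision itself comes later

print(orient(context))
# e.g. {'insurance_review': 1.31, 'real_estate': -0.38, 'investments': 1.0}
```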

At this stage, the system is not making the decision; it is just getting the ‘lay of the land’ – it is building out a model of the landscape on which the decision will be made. It is orienting the system in the decision space. This is the key process of building up a representation of the context in which the system must act intelligently. See “Context is Key” and a great slide deck put together by Thei Geurts on Contextual Intelligence.

This step of orienting is perhaps the most critical – since if a decision maker, or an A.I. with a goal, builds a bad mental model, if an inaccurate context is created, it does not matter how good the brains are at thinking things through: garbage in, garbage out.

Next up “Did you ever have to finally decide?”

Stop to sense the roses

It is easy to forget to pay attention to the small things that make up our world. In the hectic daily routine, we can overlook the simple things.

Stop to sense the roses

And if that is true for people, it is also true for the systems and machines that surround us. After all, why would anyone program a robot to look at flowers? But that is exactly what an A.I. system needs to do to be able to function intelligently in the world. Okay, maybe not flowers specifically, but if the system is going to behave intelligently, it needs to be aware of the world around it and the effects of its behavior.

This need to act within a context is a key component of the OODA Loop (for an overview, see “What the heck is an OODA Loop?“). To understand that context, the system (whether a person or a machine) needs to observe the environment in which it is acting. This can be as simple as scanning a chess board for the current locations of the pieces, or as complex as scanning 360 degrees around a vehicle as it hurtles down a highway in heavy traffic. The first step is to Observe.

Step 1 of the OODA Loop: Observe the context to build a model of the current situation.

Observation is not as simple as grabbing an image from a camera. Collecting those pixels is part of the process, but knowing that there is something blue off to your left won’t help you make an intelligent decision all by itself. As an example, let’s think about the morning commute. Imagine that you are in your car on the highway. But all the windows have been blacked out – you can’t see a thing! Clearly this will not end well.

Now you look down at the dashboard and you see a screen. A circular green screen, like an old-style radar display, and scattered across the display are thousands of little dots dancing and moving. You know that each dot represents a reflection of a laser beam from an object, but you have no idea what the object might be.

Radar image – also the same type of data that robots get from LIDAR systems.

As your car hurtles down the highway, you start to see patterns in the dots – some of them seem to move as a group, falling behind or moving forward together. Other groups appear at the top of the screen, and rapidly move to the bottom and disappear. Got the picture? Now, ask yourself – “How long before I crash?”

This kind of information is the base level for the Observe step of the OODA loop, but it is clearly not sufficient by itself. As you drive down the road, you need more than a cloud of points(1). What you need to observe are things like cars, pedestrians, and road signs, not just green dots.

So the second level of the Observe step is to classify the raw sensory data into objects that you can reason about. As a human, you get this processing for free – you don’t even know that it happens. A big part of your brain is dedicated to taking the nerve impulses from your eyes and analyzing, processing, and categorizing them into things like cars and trucks, people and dogs, flowers and trees (here is a nice article about how that processing works). And another large part of your brain is dedicated to sound – screeching brakes, sirens, or just the sound of an engine revving up as you apply the brakes to stop at an intersection.
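
A toy sketch of this second level might simply group nearby sensor returns into object candidates. Real perception pipelines are far more sophisticated; the naive distance-based clustering below, with made-up points, is only meant to show the step from raw dots to things you can reason about:

```python
# A toy sketch of turning raw sensor returns into object candidates.
# Real perception stacks are far more sophisticated; this naive
# distance-based grouping only illustrates the classification step.

import math

def cluster_points(points, max_gap=1.5):
    """Group 2-D points; returns within max_gap meters join one object."""
    clusters = []
    for p in points:
        for cluster in clusters:
            if any(math.dist(p, q) <= max_gap for q in cluster):
                cluster.append(p)
                break
        else:
            clusters.append([p])  # nothing nearby: start a new object
    return clusters

# Made-up "dots on the screen" (hypothetical LIDAR returns, in meters).
dots = [(0.0, 5.0), (0.4, 5.2), (0.8, 5.1),  # one tight group: a car?
        (10.0, 2.0), (10.3, 2.2)]            # another group: a pedestrian?
print(len(cluster_points(dots)))  # -> 2 object candidates to reason about
```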

Once you have these symbolic representations – a model that says there is a blue Fiat 500 in the lane next to you, and a sign that tells you your turn is coming up on the right – then the Observe step is done for this pass through the OODA loop. You now have the beginnings of a model that you can use to make intelligent decisions.

Driver’s view of traffic. You see cars and pedestrians, not pixels and lines.

Next Up – Artificial Intelligence on the Orient Express


1) By the way, this ‘point cloud’ is what is gathered by the LIDAR laser scanners that self-driving cars use as a major sensor system.