
Subsection 7.6.2 Nonmonotonic Reasoning

So far, the reasoning system that we’ve been studying has the following key property:

  • As we add premises, it is possible (in fact, likely) that new statements will become theorems.

  • As we add premises, no old theorems get eliminated. In other words, if it was possible to prove P before adding the new premise(s), it is still possible to prove it.

Thus we’ll say that our reasoning system is monotonic. The set of provable claims can change in only one (thus the prefix “mono-”) direction as we add premises (in this case, it can only grow).
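To see the monotonic behavior concretely, here is a small sketch (our own, in Python; the function name theorems and the rule encoding are just illustrative) of forward chaining over simple if-then rules. Adding a premise can create new theorems, but it never removes an old one.

def theorems(facts, rules):
    # rules: list of (set_of_premises, conclusion) pairs; returns every derivable atom
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)
                changed = True
    return derived

rules = [({"P"}, "C")]                  # the rule P → C
before = theorems({"P"}, rules)         # {'P', 'C'}
after  = theorems({"P", "Q"}, rules)    # {'P', 'Q', 'C'}
assert before <= after                  # nothing provable before is lost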

But not all everyday reasoning has that property. In particular, whenever we make do with incomplete information by exploiting some kind of default fact (for example, assume it’s not raining in Austin unless you know otherwise), we must be prepared to undo some conclusions if new information (for example, that it is actually raining in Austin) comes in.

In default reasoning, we conclude some fact in the absence of some indication that we should do otherwise.

In the system we already have, we can write:

[3] ∀x ((P(x) ∧ ¬Q(x)) → C(x))

But, to use [3] to conclude C(x), we must actually be able to prove ¬Q(x). What if we simply don’t know anything about Q(x), one way or the other? We’re stuck.

But suppose we could write:

[4] ∀x (P(x) → C(x) UNLESS Q(x))

We want to interpret this to mean that we can conclude C(x) unless we have explicit information that Q(x) is true. Of course, if new information about Q(x) shows up, we’ll have to add it to our system. We’ll also have to eliminate any conclusions that we derived based on its absence.
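Here is a minimal sketch (our own, not part of any particular formal system) of how [4] might be checked against a knowledge base, represented as a Python set of known ground facts. The point is that C(x) is concluded when P(x) is known and Q(x) is merely absent; nothing has to be proved about ¬Q(x). Adding Q(a) later forces the conclusion to be withdrawn.

# A hedged sketch: kb is a set of known ground facts, e.g. ("P", "a").
def concludable_C(x, kb):
    # Conclude C(x) if P(x) is known and Q(x) is not known (absence is enough).
    return ("P", x) in kb and ("Q", x) not in kb

kb = {("P", "a")}
print(concludable_C("a", kb))   # True: Q(a) is absent, so we conclude C(a)

kb.add(("Q", "a"))              # new information about Q(a) arrives
print(concludable_C("a", kb))   # False: the earlier conclusion must be withdrawn

This absence-of-proof test is in the spirit of negation as failure in logic programming, rather than of classical negation.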

Consider the following claim that might be useful in trying to solve the problem of getting to the airport:

CanDriveToAirport UNLESS (¬CarWillStart ∨ FlatTire ∨ FireTrucksBlockingDriveway ∨ StreetsCoveredInIce ∨ CityLockedDownInEmergency)

Notice that, in everyday planning, we are rarely even conscious of all the terms in the UNLESS clause. If we wanted to build a problem-solving robot (perhaps even a self-driving car), we’d need it, too, to be able to plan an action without first stopping to verify that every one of the UNLESS terms is false.

Our inference system can’t do this. But there exist formal systems that can. And, of course, people do it all the time.
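To make the idea concrete, here is a sketch (our own encoding, not one of those formal systems) of how an agent might apply the airport rule: it checks only that no exception is currently known, rather than proving each one false, and it withdraws the conclusion if an exception later becomes known.

# Exceptions from the UNLESS clause above, encoded as strings (our own convention).
EXCEPTIONS = {"¬CarWillStart", "FlatTire", "FireTrucksBlockingDriveway",
              "StreetsCoveredInIce", "CityLockedDownInEmergency"}

def can_drive_to_airport(kb):
    # Default conclusion: drivable unless some exception is explicitly known.
    return not (EXCEPTIONS & kb)

kb = set()
print(can_drive_to_airport(kb))    # True: plan the drive without checking further

kb.add("FlatTire")                 # new information comes in
print(can_drive_to_airport(kb))    # False: the plan has to be reconsidered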

Big Idea

We use default reasoning to enable us to reason about typical situations without getting completely bogged down worrying about all the unlikely things that might occur. People do this all the time. Intelligent agents and robots will also have to be able to do it.

Exercises

1.

Consider the statement:

[1] ∀x ((Room(x) ∧ ∃y (Lamp(y) ∧ In(y, x) ∧ TurnedOn(y))) → Lit(x))

If there’s a lamp in the room and the lamp is turned on, then the room will be lit. This fact could be useful, for example, to a household robot. It suggests that, upon entering a dark room, a reasonable thing to do would be to turn on a lamp.

But what we really should have written here is:

[2] ∀x ((Room(x) ∧ ∃y (Lamp(y) ∧ In(y, x) ∧ TurnedOn(y))) → Lit(x) UNLESS ( …) )

In the real world, many things could go wrong and prevent the room from lighting up when the lamp is turned on.

List at least three such things.


Solution.

Explanation: Some things that could go wrong are:

  • Lamp bulb could be burned out.

  • Lamp could be unplugged.

  • Power to the house could be cut off.

  • Lamp itself could be broken.

But note that neither you nor our robot explicitly sets out to verify that each of these things is false before turning the lamp on. Only if, after we turn it on, the room fails to light up do we start looking for a reason why.

2.

Consider the statement:

[1] ∀x ((Warehouse(x) ∧ Unlocked(x)) → CanHideIn(x))

If you need to hide someplace and there’s an unlocked warehouse nearby, you can hide there. This fact could be useful, for example, to an agent in a first-person shooter game.

But what we really should have written here is:

[2] ∀x ((Warehouse(x) ∧ Unlocked(x)) → CanHideIn(x) UNLESS ( …) )

In the real world, many (generally very low probability) things could prevent it being possible to hide in a warehouse.

List at least three such things.


Solution.

Explanation: Some things that would prevent hiding are:

  • The warehouse is already packed to the gills.

  • The warehouse is flooded with water above your head.

  • The warehouse is on fire.

  • Someone has littered the warehouse floor with spikes.

  • There are a couple of vicious dogs inside the warehouse.

  • The warehouse is already full of enemies.

But note that neither you nor a game agent would explicitly set out to verify that each of these things is false before running inside.

Inheritance

Recall that we’ve already considered the issue of the flying capabilities of birds. We might be tempted to say:

[1] ∀x (Bird(x) → CanFly(x))

And, if we did that, we’d be right most of the time. But emus and penguins can’t fly. Neither can birds that have just been born or ones with crude oil on their wings or ones with broken wings. What we really want to say here is something like:

[2] ∀x (Bird(x) → CanFly(x) UNLESS (Emu(x) ∨ Penguin(x) ∨ Baby(x) ∨ WingBroken(x) ∨ … ) )

But we want [2] to function like [1] most of the time. If I know that x is a bird and I know nothing else, I want to assume that it can fly.

The flying birds situation is an example of a very common sort of default reasoning in which individuals are assumed to inherit (take on) the properties of a typical member of some class to which they belong. Birds typically fly. Dogs typically have tails. Houses typically have kitchens. People typically can talk.

As with any sort of nonmonotonic reasoning, we must be prepared to undo conclusions if new information (such as the fact that our bird has a broken wing) comes in.
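Here is a sketch (again our own encoding; the constant "tweety" is just an illustrative bird) of the default in [2]: x inherits the ability to fly from the typical bird unless one of the listed exceptions is recorded for it, and the conclusion is withdrawn as soon as such a fact is added.

EXCEPTIONS = ("Emu", "Penguin", "Baby", "WingBroken")

def can_fly(x, kb):
    # Default inheritance: a bird flies unless some exception about x is known.
    if ("Bird", x) not in kb:
        return False
    return not any((e, x) in kb for e in EXCEPTIONS)

kb = {("Bird", "tweety")}
print(can_fly("tweety", kb))          # True: nothing else is known, assume it flies

kb.add(("WingBroken", "tweety"))      # new information arrives
print(can_fly("tweety", kb))          # False: the earlier conclusion is undone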

Exercises

1.

We might be tempted to say that anyone who is a friend inherits the property of being trustworthy:

[1] ∀x (Friend(x) → CanBeTrusted(x))

If we did that, we’d be right most of the time. But there are exceptions. Suppose that we’re trying to program a game agent. We might want to say:

[2] ∀x (Friend(x) → CanBeTrusted(x) UNLESS ( … ) )

Think of at least three claims that could go inside the UNLESS clause.


Solution.

Explanation: Some things that would prevent a friend from being trustworthy are:

  • x is being pursued by an enemy and must do whatever is necessary not to get killed right now.

  • x has been shot and realizes that moving or doing anything would be fatal.

  • x believes that you have betrayed him/her/it.

  • x has come under a spell cast by a mutual enemy.