CS2910 Symbolic AI Complete Revision Guide (2024-25)
1. Search Algorithms
1.1 Uninformed Search
What is search?
Search algorithms help AI agents find solutions by exploring possible paths in a problem space. Imagine trying to navigate a maze - search algorithms systematically check different routes.
Key types:
Breadth-First Search (BFS): Explores all neighbors at current depth before moving deeper. Like checking every room on one floor before going upstairs.
- Properties: Complete (finds solution if exists), optimal for uniform costs, high memory usage
- Time Complexity: O(b^d) where b=branching factor, d=depth
Depth-First Search (DFS): Goes deep down one path before backtracking. Like following a corridor to its end before trying side doors.
- Properties: May not find shortest path, memory efficient
- Time Complexity: O(b^m) where m=max depth
Depth-Limited & Iterative Deepening: Iterative deepening repeatedly runs depth-limited DFS with gradually increasing depth limits, combining BFS's completeness with DFS's memory efficiency
Real-world analogy: Finding lost keys - BFS checks every nearby spot systematically, widening the search level by level; DFS searches one room completely (down to the last drawer) before trying the next.
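A minimal Python sketch of BFS on a graph stored as an adjacency dictionary (the toy maze and node names below are invented for illustration); swapping `popleft()` for `pop()` turns the same loop into a DFS-style search.

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: explore all nodes at one depth before going deeper.
    Returns a path from start to goal, or None if the goal is unreachable."""
    frontier = deque([[start]])          # FIFO queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()        # use frontier.pop() here for a DFS-style search
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

# Toy "maze" as an adjacency dictionary (names invented for illustration)
maze = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}
print(bfs(maze, "A", "E"))               # ['A', 'B', 'D', 'E']
```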
1.2 Informed Search
1.2.1 What is a Heuristic?
A heuristic is like a “smart guess” that helps search algorithms make better decisions. Imagine you’re lost in a city and want to reach a landmark. Instead of checking every possible street, you might use the landmark’s visible height as a clue to move closer. In AI search algorithms:
- Heuristic function h(n): estimates the cheapest path cost from node n to the goal.
- Example: In the Romania travel problem (from the slides), h(n) could be the straight-line distance from a city to Bucharest.
1.2.2 How Heuristics Improve Efficiency
1.2.2.1 Reduces Unnecessary Exploration
Without heuristics, algorithms like BFS/DFS blindly explore all paths. Heuristics act as a guide:
- Prioritizes nodes that seem closer to the goal.
- Avoids expanding obviously bad paths.
Analogy: If you’re searching for keys in your house, a heuristic would be “check places you last used them first” instead of searching every room randomly.
1.2.2.2 Key Algorithms Using Heuristics
Greedy Best-First Search
- Rule: Always expand the node with the smallest h(n) (the one estimated closest to the goal).
- Pros: Fast; often finds a solution quickly.
- Cons:
- Not optimal (might pick a longer path if the heuristic is misleading).
- Can get stuck in loops (e.g., going back-and-forth between two cities).
A* Search
- Rule: Uses f(n) = g(n) + h(n), where:
  - g(n) = actual cost from the start to node n
  - h(n) = estimated cost from n to the goal
- Pros:
  - Optimal if h(n) is admissible (never overestimates the true cost).
  - Complete (guaranteed to find a solution if one exists).
- Example: In the Romania problem, A* chooses paths by balancing actual road distance (g(n)) against straight-line distance to Bucharest (h(n)).
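Greedy best-first and A* can share one priority-queue implementation that differs only in how nodes are ranked (f = h versus f = g + h). The sketch below uses the Romania fragment discussed later in this section; the individual road distances and straight-line values are the standard textbook figures and are assumed here purely for illustration.

```python
import heapq

# Road distances (g costs) for a fragment of the Romania map -- assumed values
# from the standard textbook example, for illustration only.
roads = {
    "Arad": {"Sibiu": 140},
    "Sibiu": {"Fagaras": 99, "Rimnicu": 80},
    "Fagaras": {"Bucharest": 211},
    "Rimnicu": {"Pitesti": 97},
    "Pitesti": {"Bucharest": 101},
    "Bucharest": {},
}
# Straight-line distance to Bucharest, used as h(n) -- assumed values.
sld = {"Arad": 366, "Sibiu": 253, "Fagaras": 176,
       "Rimnicu": 193, "Pitesti": 100, "Bucharest": 0}

def best_first(start, goal, use_astar=True):
    """Greedy best-first (f = h) or A* (f = g + h), selected by use_astar."""
    frontier = [(sld[start], 0, [start])]        # (f, g, path)
    best_g = {start: 0}
    while frontier:
        f, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path, g
        for nxt, cost in roads[node].items():
            g2 = g + cost
            if nxt not in best_g or g2 < best_g[nxt]:
                best_g[nxt] = g2
                f2 = g2 + sld[nxt] if use_astar else sld[nxt]
                heapq.heappush(frontier, (f2, g2, path + [nxt]))
    return None, float("inf")

print(best_first("Arad", "Bucharest", use_astar=False))  # greedy: via Fagaras, cost 450
print(best_first("Arad", "Bucharest", use_astar=True))   # A*: via Rimnicu/Pitesti, cost 418
```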
1.2.3 Why Heuristics Work: Admissibility & Consistency
- Admissible Heuristic: h(n) ≤ actual cost to goal.
  - Example: straight-line distance ≤ actual road distance.
- Consistent Heuristic: h(n) ≤ cost(n, n') + h(n') for all successors n'.
  - Ensures A* never re-expands nodes (efficient).
1.2.4 Real-World Impact of Heuristics
| Algorithm | Time Complexity | Space Complexity | Optimal? | Complete? |
|---|---|---|---|---|
| Greedy Search | O(b^m) | O(b^m) | ❌ | ❌* |
| A* Search | O(b^d) | O(b^d) | ✔️ | ✔️ |
*Greedy search becomes complete with loop-checking in finite spaces.
Example from Slides:
In the Romania map:
- Greedy might choose Arad → Sibiu → Fagaras → Bucharest (total cost 450), ignoring a cheaper path like Arad → Sibiu → Rimnicu → Pitesti → Bucharest (total cost 418).
- A* would find the cheaper path because it balances g(n) (actual cost) and h(n).
1.2.5 Why This Matters for Problem-Solving
- Speed: Heuristics cut down search time dramatically. For example, A* with a good heuristic explores far fewer nodes than BFS.
- Optimality: A* guarantees the shortest path if the heuristic is admissible.
- Memory: Reduces the number of nodes stored in memory compared to uninformed searches.
Case Study (From Exam Paper):
When comparing greedy vs A* on two graphs (Fig. 1):
- Greedy might pick a path with lower h(n) but higher actual cost.
- A* always picks the path with the lowest combined g(n) + h(n), ensuring optimality (given an admissible heuristic).
Summary for Exam Prep
- Heuristic Function: A problem-specific “hint” to guide search.
- Greedy Search: Fast but risky (not optimal, might loop).
- A* Search: Optimal and efficient with admissible heuristics.
- Key Formulas:
  - f(n) = g(n) + h(n) (A*)
  - Admissibility: h(n) ≤ actual cost to goal
  - Consistency: h(n) ≤ cost(n → n') + h(n')
Exam Tip: If asked to justify why a heuristic is admissible (e.g., in Fig. 2), verify that h(n) ≤ actual cost for all nodes using the graph data.
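One way to do that check mechanically, sketched here under the assumption that the graph is an adjacency dictionary with non-negative, undirected edge costs: compute the true cheapest cost from every node to the goal (Dijkstra run from the goal) and compare it against h(n); consistency can then be checked edge by edge.

```python
import heapq

def true_costs_to_goal(graph, goal):
    """Cheapest cost from every node to the goal (Dijkstra from the goal;
    assumes undirected edges, i.e. graph[a][b] == graph[b][a])."""
    dist = {goal: 0}
    frontier = [(0, goal)]
    while frontier:
        d, node = heapq.heappop(frontier)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry
        for nxt, cost in graph[node].items():
            if d + cost < dist.get(nxt, float("inf")):
                dist[nxt] = d + cost
                heapq.heappush(frontier, (d + cost, nxt))
    return dist

def is_admissible(graph, h, goal):
    """h is admissible iff h(n) <= true cheapest cost to the goal for every node."""
    true = true_costs_to_goal(graph, goal)
    return all(h[n] <= true.get(n, float("inf")) for n in graph)

def is_consistent(graph, h):
    """h is consistent iff h(n) <= cost(n, n') + h(n') for every edge (n, n')."""
    return all(h[n] <= cost + h[nxt]
               for n in graph for nxt, cost in graph[n].items())
```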
2. Logical Reasoning
2.1 Propositional Logic
Basic Components:
- Atoms: Basic facts (P, Q)
- Connectives: AND(∧), OR(∨), NOT(¬), IMPLIES(→)
- Truth Tables: Define meaning of operators
Example:
"If it rains (R), the match is cancelled (C)":
R → C
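An implication like this can be verified by enumerating its truth table; a short Python sketch (the variable names simply mirror the example above):

```python
from itertools import product

def implies(p, q):
    """Material implication: R → C is false only when R is true and C is false."""
    return (not p) or q

print("R     C     R → C")
for r, c in product([True, False], repeat=2):
    print(f"{r!s:<5} {c!s:<5} {implies(r, c)}")
```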
2.2 First-Order Logic (FOL)
Extends propositional logic with:
- Variables: x, y
- Quantifiers: ∀ (all), ∃ (exists)
- Predicates: Express relationships (e.g., Father(x,y))
- Functions: Transform inputs (e.g., Successor(x))
Example:
“All humans are mortal”:
∀x (Human(x) → Mortal(x))
2.3 Inference Methods
Resolution: Combines two clauses containing complementary literals to derive a new clause
Example: resolving (P ∨ Q) with (¬P ∨ R) on P gives (Q ∨ R)
Unification: Makes predicates match by substituting variables
Unify Parent(x, Bob) and Parent(Alice, y) gives x=Alice, y=Bob
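A minimal sketch of that substitution step, representing predicates as Python tuples and (following the notes' convention) treating lowercase names as variables and capitalised names as constants; the occurs check is omitted for brevity.

```python
def is_variable(term):
    # Convention from the notes: lowercase names (x, y) are variables,
    # capitalised names (Alice, Bob) are constants.
    return isinstance(term, str) and term[0].islower()

def unify(a, b, subs=None):
    """Try to make terms a and b identical; return the substitution or None."""
    subs = {} if subs is None else subs
    if a == b:
        return subs
    if is_variable(a):
        return unify_var(a, b, subs)
    if is_variable(b):
        return unify_var(b, a, subs)
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):            # same predicate and arity: unify argument-wise
            subs = unify(x, y, subs)
            if subs is None:
                return None
        return subs
    return None                           # clash, e.g. two different constants

def unify_var(var, value, subs):
    if var in subs:
        return unify(subs[var], value, subs)
    subs = dict(subs)
    subs[var] = value
    return subs

# Predicates as tuples: ("Parent", "x", "Bob") stands for Parent(x, Bob)
print(unify(("Parent", "x", "Bob"), ("Parent", "Alice", "y")))
# -> {'x': 'Alice', 'y': 'Bob'}
```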
Forward Chaining: Start with facts, apply rules until goal found
Backward Chaining: Start with goal, work backwards to facts
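A propositional forward-chaining sketch: starting from the known facts, repeatedly fire any Horn-clause rule whose premises all hold, until the goal is derived or nothing new can be added (the facts and rules below are invented). Backward chaining would instead start from the goal and recurse on the premises of rules that conclude it.

```python
def forward_chain(facts, rules, goal):
    """facts: set of atoms; rules: list of (premises, conclusion) Horn clauses."""
    known = set(facts)
    changed = True
    while changed and goal not in known:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)      # fire the rule
                changed = True
    return goal in known

rules = [({"rain"}, "wet_ground"), ({"wet_ground"}, "slippery")]
print(forward_chain({"rain"}, rules, "slippery"))   # True
```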
3. Planning & Temporal Reasoning
3.1 What is Planning?
Planning is the process of finding a sequence of actions that transforms an initial state to a desired goal state. It’s like creating a recipe for an AI agent to follow.
Key Components:
- Initial State: Starting conditions (e.g., robot location, box positions)
- Goal State: Desired outcome (e.g., boxes gathered together)
- Operators: Actions that change states (e.g., push, move)
3.2 What is Temporal Reasoning?
Temporal Reasoning deals with how states change over time and what remains unchanged. It answers questions like:
- What changes when I perform an action?
- What stays the same?
- How do actions affect future possibilities?
3.3 Robot Box-Moving Example
Scenario Setup (From STRIPS Documentation):
Initial State:
```
Robot in ROOM1 at position e
BOX1 at position a
BOX2 at position b
BOX3 at position c
```
Goal:
```
NEXTTO(BOX1, BOX2) ∧ NEXTTO(BOX2, BOX3)
```
Planning Process:
Operator Selection (From STRIPS Operators):
- goto2(m): Move the robot next to object m
- pushto(m,n): Push object m next to object n
Plan Generation:
```
1. goto2(BOX2)         # Move robot to BOX2
2. pushto(BOX2, BOX1)  # Push BOX2 next to BOX1
3. goto2(BOX3)         # Move robot to BOX3
4. pushto(BOX3, BOX2)  # Push BOX3 next to BOX2
```
Temporal Reasoning Challenges:
Frame Problem:
- When pushing BOX2, we need to explicitly state:
  - BOX2's position changes
  - The robot's position changes
  - Other boxes' positions remain unchanged
State Transitions:
After pushto(BOX2, BOX1):
```
NEW STATE:
NEXTTO(BOX2, BOX1)
Robot next to BOX2
BOX3 remains at position c
```
3.4 Key Techniques Used
Situation Calculus (From Topic 4.1):
Represents states as sequences of actions:
```
State3 = do(pushto(BOX2,BOX1), do(goto2(BOX2), InitialState))
```
Uses successor-state axioms:
```
∀s [NEXTTO(m,n,do(a,s)) ↔
    (a = pushto(m,n)) ∨
    (NEXTTO(m,n,s) ∧ ¬(a = remove(m,n)))]
```
STRIPS Planning (From Topic 5):
Operator Format:
```
Operator: pushto(m,n)
Preconditions: NEXTTO(ROBOT,m) ∧ PUSHABLE(m)
Effects: NEXTTO(m,n) ∧ NEXTTO(n,m)
```
Planning Graph:
Creates layers of possible actions and states to find the shortest path to the goal.
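The operator format above can be sketched in Python with a state as a set of ground literals and an operator as preconditions, an add list and a delete list. The delete list and the exact literals below are assumptions made only for illustration (the slide excerpt lists only preconditions and effects).

```python
def applicable(state, preconds):
    """An operator is applicable when all of its preconditions hold in the state."""
    return preconds <= state

def apply_op(state, preconds, add_list, delete_list):
    """STRIPS-style state update: remove the delete list, then add the add list."""
    if not applicable(state, preconds):
        raise ValueError("preconditions not satisfied")
    return (state - delete_list) | add_list

state = {"NEXTTO(ROBOT,BOX2)", "PUSHABLE(BOX2)", "AT(BOX2,b)", "AT(BOX1,a)"}

# pushto(BOX2, BOX1) -- add/delete lists are illustrative assumptions
preconds = {"NEXTTO(ROBOT,BOX2)", "PUSHABLE(BOX2)"}
add_list = {"NEXTTO(BOX2,BOX1)", "NEXTTO(BOX1,BOX2)"}
delete_list = {"AT(BOX2,b)"}

print(apply_op(state, preconds, add_list, delete_list))
```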
3.5 Why This Matters
- Planning ensures the robot finds the most efficient sequence of actions
- Temporal Reasoning prevents errors like:
- Assuming unrelated objects move when pushing a box
- Forgetting the robot’s new position after movement
- Overlooking prerequisites for actions (e.g., needing to be next to a box to push it)
Real-World Impact: These techniques enable warehouse robots to:
- Plan optimal paths for moving goods
- Track package locations accurately
- Avoid unintended side-effects during operations
4. Machine Learning in Symbolic AI
4.1 Decision Trees
Construction Process:
- Select most informative attribute (Information Gain)
- Split dataset recursively
- Stop when pure leaf or criteria met
Example:
Predicting loan approval:
```
Income > $50k?
├─ Yes: Credit Score > 700?
│   ├─ Yes: Approve
│   └─ No: Reject
└─ No: Reject
```
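The "most informative attribute" is the one with the highest information gain: the entropy of the labels minus the weighted entropy after splitting. A small sketch with an invented loan dataset matching the tree above:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """H = -sum p_i * log2(p_i) over the class proportions."""
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

def information_gain(rows, labels, attribute):
    """Entropy reduction obtained by splitting the rows on one attribute."""
    base = entropy(labels)
    remainder = 0.0
    for value in set(row[attribute] for row in rows):
        subset = [lab for row, lab in zip(rows, labels) if row[attribute] == value]
        remainder += (len(subset) / len(labels)) * entropy(subset)
    return base - remainder

# Tiny illustrative dataset (values invented)
rows = [
    {"income_over_50k": True,  "credit_over_700": True},
    {"income_over_50k": True,  "credit_over_700": False},
    {"income_over_50k": False, "credit_over_700": True},
    {"income_over_50k": False, "credit_over_700": True},
    {"income_over_50k": False, "credit_over_700": False},
]
labels = ["approve", "reject", "reject", "reject", "reject"]
print(information_gain(rows, labels, "income_over_50k"))   # ≈ 0.32 -> chosen as root
print(information_gain(rows, labels, "credit_over_700"))   # ≈ 0.17
```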
4.2 Inductive Logic Programming (ILP)
Combines logic with learning:
- Start with background knowledge
- Generalize from examples
- Generate Horn clauses
Example:
Learning family relations from:
facts Father(John, Bob), Mother(Mary, Bob) and positive examples Parent(John, Bob), Parent(Mary, Bob), ILP can induce general rules such as Father(x, y) → Parent(x, y) and Mother(x, y) → Parent(x, y)
5. Prolog Programming
5.1 Key Features
- Declarative: Describe what to solve, not how
- Unification: Pattern matching variables
- Backtracking: Automatically tries alternatives
Example Program (a minimal family-relations sketch; any small program like this illustrates the features above):
```prolog
% Facts
parent(john, bob).
parent(mary, bob).
parent(bob, ann).

% Rule: X is a grandparent of Z if X is a parent of some Y who is a parent of Z
grandparent(X, Z) :-
    parent(X, Y),
    parent(Y, Z).
```
The query `?- grandparent(john, Who).` succeeds with `Who = ann`: Prolog unifies the query with the rule head and backtracks through the `parent/2` facts to satisfy each subgoal.
5.2 Execution Model
- Query Processing: Matches head predicates
- Depth-First Search: Explores rules sequentially
- Cut Operator (!): Controls backtracking
6. Historical Foundations
6.1 Turing Test
Key Idea: A machine demonstrates intelligence if a human interrogator cannot reliably distinguish it from a human in conversation.
Components:
- Interrogator
- Machine
- Human control
6.2 McCulloch-Pitts Neuron
First mathematical neuron model:
- Binary activation (0/1)
- Weighted inputs
- Threshold function
Formula:
Output = 1 if Σ(inputs × weights) ≥ threshold, otherwise 0
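A direct Python translation of this formula (the weights and threshold below are invented; with unit weights and a threshold of 2 the neuron behaves like a two-input AND gate):

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Fires (outputs 1) iff the weighted sum of binary inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With unit weights and threshold 2, the neuron behaves like a logical AND.
for pattern in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pattern, "->", mcculloch_pitts(pattern, weights=(1, 1), threshold=2))
```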
7. Exam Strategy
7.1 Time Management
- 1.5 minutes per mark
- Answer known questions first
- Use bullet points for clarity
7.2 Practice Techniques
- Manual resolution proofs
- A* heuristic calculations
- Decision tree construction
7.3 Common Pitfalls
- Forgetting frame axioms in planning
- Misapplying quantifier scope
- Overlooking edge cases in search algorithms
Remember: Symbolic AI emphasizes explicit knowledge representation and logical reasoning. While modern AI uses neural networks, understanding these foundations is crucial for hybrid systems and explainable AI.