CS2910 Exam topics with example questions and solutions

The revision slides explicitly mention:

  • “Past paper from May 2024: Discussed A* search, heuristic function (Slides Topic2.3 to recall informed search)”
  • “Past paper from Aug. 2023: Discussed adversarial search (Slides Topic2.4 to recall MINIMAX and $\alpha$-$\beta$ pruning)”

From CS2910-24.pdf (May 2024 Paper):

Question 1: Best-First Search

(a) Briefly discuss the idea of the evaluation function in best-first search and explain how it affects the selection of which node to explore next.
Answer: In best-first search, the evaluation function, $f(n)$, estimates the “desirability” or “cost” of expanding a given node $n$.

(b) Define equations based on a heuristic function for (i) greedy search and (ii) $A^{*}$ search. Briefly explain all functions you use.
Answer:
Let $h(n)$ be the heuristic function, which estimates the cost from node $n$ to the goal state. Let $g(n)$ be the cost from the initial state to node $n$.
(i) Greedy Search: $f(n) = h(n)$
* $f(n)$: The evaluation function for node $n$, estimating the cost from $n$ to the goal.
* $h(n)$: The heuristic function, estimating the cost from the current node $n$ to the closest goal state. Greedy search prioritizes nodes based solely on this estimate, ignoring the cost already incurred to reach $n$.
(ii) $A^{*}$ Search: $f(n) = g(n) + h(n)$
* $f(n)$: The evaluation function for node $n$, estimating the total cost of the path from the initial state, through $n$, to the goal.
* $g(n)$: The actual cost incurred from the initial state to node $n$.
* $h(n)$: The heuristic function, estimating the cost from $n$ to the closest goal state. $A^{*}$ balances the cost incurred to reach the current node with the estimated cost to the goal.
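The two evaluation strategies above can be sketched in Python. This is a minimal A* implementation; the example graph and heuristic table are the ones reconstructed for Fig. 2 in part (e) below (the figure itself is not reproduced here).

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: always expand the frontier node with the lowest f(n) = g(n) + h(n)."""
    frontier = [(h[start], 0, start, [start])]   # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):   # found a cheaper route to nbr
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

# Edge costs as recovered in part (e): A-B=1, A-C=4, B-D=5, B-E=12, C-D=2, D-E=3
graph = {"A": [("B", 1), ("C", 4)], "B": [("D", 5), ("E", 12)],
         "C": [("D", 2)], "D": [("E", 3)]}
h = {"A": 7, "B": 6, "C": 2, "D": 1, "E": 0}     # heuristic from the table in (e)

path, cost = a_star(graph, h, "A", "E")          # optimal solution cost is 9
```

Replacing the priority `g2 + h[nbr]` with `h[nbr]` alone (and dropping the `best_g` bookkeeping) turns this into greedy best-first search.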

(c) In both graphs of Fig. 1, A denotes the initial state, D denotes the goal and a number next to an edge denotes the cost of moving from a node to another. Compare the solutions’ cost found when applying the greedy search algorithm on the graphs to explain why greedy search is not optimal.
Answer: Greedy search is not optimal because it selects nodes using only the heuristic estimate $h(n)$ and completely ignores the actual path cost $g(n)$ accumulated from the start state. This myopic strategy can commit the search to a successor that looks promising locally (low $h$) even though reaching it was expensive, so the total cost of the solution found can exceed the optimum; $A^*$, which ranks nodes by $g(n) + h(n)$, avoids this. Comparing the solution costs that greedy search returns on the two graphs of Fig. 1 is intended to expose exactly such a case: on one graph the greedy solution is more expensive than the true optimum. (The provided graphs in CS2910-24.pdf may not perfectly illustrate the sub-optimality, but the principle explains why greedy search is not guaranteed to be optimal.)

(d) In the context of the $A^{*}$ algorithm explain what an admissible heuristic function is.
Answer: An admissible heuristic function $h(n)$ for the $A^{*}$ algorithm is one that never overestimates the true cost to reach the goal from node $n$.

(e) In order to apply $A^{*}$ search on the graph of Fig. 2, using A as the initial state and E as the goal, we need an admissible heuristic function. Is the function specified by the table below admissible? Justify your answer.
Answer:
To check admissibility, we compare the given heuristic values $H(n)$ with the true optimal costs $h^*(n)$ to goal E from Figure 2 (CS2910-24.pdf).

True optimal costs ($h^*(n)$) to E:

  • $h^*(E) = 0$
  • $h^*(D) = 3$ (D $\rightarrow$ E)
  • $h^*(C) = 2+3 = 5$ (C $\rightarrow$ D $\rightarrow$ E)
  • $h^*(B) = \min(5+3, 12) = 8$ (B $\rightarrow$ D $\rightarrow$ E vs B $\rightarrow$ E)
  • $h^*(A) = \min(1+8, 4+5) = 9$ (A $\rightarrow$ B $\rightarrow$ D $\rightarrow$ E vs A $\rightarrow$ C $\rightarrow$ D $\rightarrow$ E)

Comparison:

| State | Given $H(n)$ | True $h^*(n)$ | $H(n) \le h^*(n)$? |
|-------|--------------|---------------|--------------------|
| A     | 7            | 9             | Yes ($7 \le 9$)    |
| B     | 6            | 8             | Yes ($6 \le 8$)    |
| C     | 2            | 5             | Yes ($2 \le 5$)    |
| D     | 1            | 3             | Yes ($1 \le 3$)    |
| E     | 0            | 0             | Yes ($0 \le 0$)    |
Since $H(n) \le h^*(n)$ for all nodes, the heuristic function is admissible.
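The check above can be automated: run Dijkstra over the reversed edges from the goal to obtain every $h^*(n)$, then compare against the given table. A minimal sketch, using the same edge costs as recovered in the answer:

```python
import heapq

def true_costs_to_goal(graph, goal):
    """Dijkstra on reversed edges: optimal cost h*(n) from every node to the goal."""
    rev = {}
    for u, edges in graph.items():
        for v, c in edges:
            rev.setdefault(v, []).append((u, c))
    dist = {goal: 0}
    pq = [(0, goal)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry
        for v, c in rev.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                heapq.heappush(pq, (d + c, v))
    return dist

def is_admissible(H, graph, goal):
    """H is admissible iff H(n) <= h*(n) for every node n."""
    hstar = true_costs_to_goal(graph, goal)
    return all(H[n] <= hstar.get(n, float("inf")) for n in H)

# Edge costs recovered above (Fig. 2): A-B=1, A-C=4, B-D=5, B-E=12, C-D=2, D-E=3
graph = {"A": [("B", 1), ("C", 4)], "B": [("D", 5), ("E", 12)],
         "C": [("D", 2)], "D": [("E", 3)]}
H = {"A": 7, "B": 6, "C": 2, "D": 1, "E": 0}
```

Here `is_admissible(H, graph, "E")` reproduces the table: every $h^*$ matches the hand computation and the heuristic is admissible.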

From CS2910-23.pdf (Aug. 2023 Paper):

Question 1. (d) This question is about adversarial search: game theory for search problems, optimal play through game trees, and $\alpha$-$\beta$ pruning.
(d) Consider the game tree of Fig. 4. Redraw the above tree showing the values of non-leaf nodes after you apply the MINIMAX algorithm. Using these MINIMAX values, explain the solution of the search.
(Figure 4 is not provided in the uploaded snippet of CS2910-23.pdf, so I will provide a general explanation of MINIMAX).
Answer (General MINIMAX Explanation):
MINIMAX is an algorithm used in adversarial search (game trees) to determine the optimal move for a player, assuming the opponent also plays optimally.

  1. Utility Assignment: Start at the leaf nodes (terminal states) and assign their utility values from the perspective of the maximizing player.
  2. Value Propagation:
    • For a MAX node, its value is the maximum of the values of its children. The MAX player chooses the action that leads to the highest utility.
    • For a MIN node, its value is the minimum of the values of its children. The MIN player chooses the action that leads to the lowest utility for the MAX player.
  3. Optimal Move: The optimal move from the root node for the MAX player is the one leading to the child with the highest MINIMAX value.
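Since Fig. 4 is unavailable, the propagation rule can be illustrated on a small made-up tree (nested lists, integer leaves are utilities; the values below are illustrative, not from the paper):

```python
def minimax(node, maximizing=True):
    """Minimax value of a game tree given as nested lists; a leaf is an int utility."""
    if isinstance(node, int):
        return node
    values = [minimax(child, not maximizing) for child in node]
    # MAX picks the largest child value, MIN the smallest
    return max(values) if maximizing else min(values)

# MAX root over three MIN nodes: their values are 3, 2 and 2, so the root value is 3
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
```

The optimal move for MAX at the root is the first branch, whose MIN child has the highest backed-up value (3).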

(e) Can you obtain any optimization if you apply $\alpha-\beta$ pruning to the tree in Fig. 4. If yes, specify these optimizations and show where they occur by re-drawing the tree in Fig. 4. If no, explain why.
(Figure 4 is not provided in the uploaded snippet of CS2910-23.pdf, so I will provide a general explanation of $\alpha-\beta$ pruning).
Answer (General $\alpha-\beta$ Pruning Explanation):
$\alpha-\beta$ pruning is an optimization technique for MINIMAX that eliminates branches of the search tree that do not need to be explored.

  • $\alpha$ (alpha): The highest value seen so far for a MAX node on the current path from the root.
  • $\beta$ (beta): The lowest value seen so far for a MIN node on the current path from the root.
  • Pruning Conditions:
    • At a MIN node: once the node's value makes $\beta \le \alpha$, its remaining children can be pruned, because the MAX ancestor already has an alternative worth at least $\alpha$ and will never choose this path.
    • At a MAX node: once the node's value makes $\alpha \ge \beta$, its remaining children can be pruned, because the MIN ancestor already has an alternative worth at most $\beta$ and will never allow play to reach this node.
      Optimizations occur by skipping computations in subtrees that won’t affect the final decision, making the search more efficient.
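The cut-off is a small addition to the minimax recursion; a sketch on an illustrative tree (not the paper's Fig. 4), which prunes the last two children of the second MIN node:

```python
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True, pruned=None):
    """Minimax with alpha-beta pruning; records skipped subtrees if a list is given."""
    if isinstance(node, int):
        return node
    if maximizing:
        value = float("-inf")
        for i, child in enumerate(node):
            value = max(value, alphabeta(child, alpha, beta, False, pruned))
            alpha = max(alpha, value)
            if alpha >= beta:                 # remaining children cannot change the result
                if pruned is not None:
                    pruned.append(node[i + 1:])
                break
    else:
        value = float("inf")
        for i, child in enumerate(node):
            value = min(value, alphabeta(child, alpha, beta, True, pruned))
            beta = min(beta, value)
            if alpha >= beta:
                if pruned is not None:
                    pruned.append(node[i + 1:])
                break
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]    # same illustrative tree as for minimax
```

After the first MIN node returns 3, the second MIN node's first leaf (2) already drives $\beta \le \alpha$, so its siblings `[4, 6]` are never evaluated, yet the root value is unchanged.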

2. Topics: Logical Inference (Unification, Resolution, Forward Chaining, Backward Chaining, Soundness, Completeness)

The revision slides strongly emphasize:

  • “Working comfortably with logical inference: Please do practise with unification, resolution, forward chaining, backward chaining.”
  • “Past paper from May 2019: Mechanisms supporting logical inference (Slides Topic3.1, Topic3.2, Topic3.3).”
  • “Discussed entailment, the meaning of soundness and completeness for an inference procedure.”
  • “Resolution, unification, finding most general unifier, forward chaining, backward chaining.”
  • “Practice with unification: Topic3.3 slides 46-50 for some unification exercises with solutions.”

From CS2910-24.pdf (May 2024 Paper):

Question 3: Logical Inference and its Formulation

(a) Briefly explain when a substitution unifies two atomic sentences p and q;
Answer: A substitution $\theta$ unifies two atomic sentences $p$ and $q$ if applying $\theta$ to both sentences makes them identical, i.e., $p\theta = q\theta$.

(b) Find the most general unifiers of the following terms, if they exist:
i. admires(molly, X) and admires(beatrice, lilly).
Answer: No unifier exists. The constant arguments molly and beatrice in the first position are different and cannot be unified.

ii. f(g(Y), h(c,d)) and f(X, h(W,d)).
Answer: The most general unifier (MGU) is $\{X/g(Y), W/c\}$.

  • Unify f(g(Y), h(c,d)) and f(X, h(W,d)).
  • The function symbol f matches.
  • Unify the first arguments: g(Y) and X. This gives the substitution $\{X/g(Y)\}$. The second term becomes f(g(Y), h(W,d)).
  • Unify the second arguments: h(c,d) and h(W,d).
    • The function symbol h matches.
    • Unify c and W. This gives $\{W/c\}$. The second argument becomes h(c,d).
    • The argument d matches d.
  • Combining the substitutions results in $\{X/g(Y), W/c\}$.
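The walk-through follows the standard recursive unification algorithm; a minimal sketch, where variables are capitalised strings and compound terms are tuples such as `("f", ("g", "Y"), ("h", "c", "d"))` for f(g(Y), h(c,d)) (the occurs check is omitted for brevity):

```python
def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def walk(t, s):
    """Follow variable bindings in substitution s until a non-variable or unbound var."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s=None):
    """Return the most general unifier of terms a and b, or None if none exists."""
    if s is None:
        s = {}
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b) and a[0] == b[0]:
        for x, y in zip(a[1:], b[1:]):        # unify arguments left to right
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None                               # constant/functor clash
```

On the two exercises above it returns `None` for (i) (the constants molly and beatrice clash) and `{"X": ("g", "Y"), "W": "c"}` for (ii).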

(c) Given the modus ponens rule below
$\frac{\alpha,\alpha\rightarrow\beta}{\beta}$
write it in clausal form in order to explain how the basic resolution inference rule works for first order logic.
Answer:
Clausal Form:

  1. Premise $\alpha$: Clause 1: $\alpha$
  2. Premise $\alpha \rightarrow \beta$: Clause 2: $\neg \alpha \lor \beta$
  3. Negation of Conclusion for Refutation: Clause 3: $\neg \beta$

Resolution Explanation:
Resolution is a refutation-complete inference rule that combines two clauses containing complementary literals.

  1. Resolve Clause 1 ($\alpha$) and Clause 2 ($\neg \alpha \lor \beta$):
    • The literals $\alpha$ and $\neg \alpha$ are complementary.
    • The resolvent is formed by combining the remaining literals: $\beta$. (In FOL, a unifier would be applied here if variables were present.)
  2. Resolve the new clause ($\beta$) with Clause 3 ($\neg \beta$):
    • The literals $\beta$ and $\neg \beta$ are complementary.
    • The resolvent is the empty clause [].
      Deriving the empty clause signifies a contradiction, proving that the negated conclusion ($\neg \beta$) is false, and thus $\beta$ is true.
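The two resolution steps above are a refutation by saturation; a propositional sketch (clauses are sets of string literals, `~` marks negation, and the names `a`, `b` stand in for $\alpha$, $\beta$):

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of literals)."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

def refute(clauses):
    """Saturate under resolution; True iff the empty clause is derived."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                if c1 == c2:
                    continue
                for r in resolve(c1, c2):
                    if not r:                 # empty clause: contradiction found
                        return True
                    new.add(frozenset(r))
        if new <= clauses:                    # no progress: satisfiable so far
            return False
        clauses |= new
```

`refute([{"a"}, {"~a", "b"}, {"~b"}])` mirrors the derivation above: resolving the first two clauses yields `{b}`, which resolves with `{~b}` to the empty clause.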

(d) Explain how we would apply resolution to prove the goal connected(a, X) using the following KB.
$edge(a,b).$
$edge(a,c).$
$edge(a,f(a)).$
$connected(a,X)\leftarrow edge(a,X).$
Answer:
1. Convert to Clausal Form:

  • Clause 1: edge(a,b)
  • Clause 2: edge(a,c)
  • Clause 3: edge(a,f(a))
  • Clause 4: $\neg$edge(a,X) $\lor$ connected(a,X) (from the implication)
  • Negated Goal: Clause 5: $\neg$connected(a, X’) (using a fresh variable X’)

2. Apply Resolution (Refutation):

  • Resolve Clause 5 ($\neg$connected(a, X’)) and Clause 4 ($\neg$edge(a,X) $\lor$ connected(a,X)):

    • Unify connected(a, X') with connected(a, X) using $\theta_1 = \{X/X'\}$.
    • Resolvent (Clause 6): $\neg$edge(a, X’)
  • Resolve Clause 6 ($\neg$edge(a, X’)) and Clause 1 (edge(a,b)):

    • Unify edge(a, X') with edge(a, b) using $\theta_2 = \{X'/b\}$.
    • Resolvent: [] (the empty clause).
      The derivation of the empty clause proves that the goal connected(a, X) is true. The substitutions show that connected(a,b) is one instance that makes it true. (Could also be proven with Clause 2 or 3 for X'=c or X'=f(a) respectively).

(e) Formally define what we mean when we say that an inference procedure $i$ (like resolution), deriving a specific sentence $\alpha$ from a knowledge base $KB$, is sound and complete. Briefly explain any formal symbols that you use in your definitions.
Answer:
Let $KB$ be a knowledge base (a set of sentences) and $\alpha$ be a sentence.

  • Soundness: An inference procedure $i$ is sound if and only if every sentence $\alpha$ that can be derived from $KB$ using $i$ is logically entailed by $KB$.

    • Formally: If $KB \vdash_i \alpha$, then $KB \models \alpha$.
    • $KB \vdash_i \alpha$: The inference procedure $i$ can derive $\alpha$ from $KB$.
    • $KB \models \alpha$: $KB$ logically entails $\alpha$ (i.e., in all interpretations where $KB$ is true, $\alpha$ is also true).
    • Soundness ensures that the procedure only derives correct conclusions.
  • Completeness: An inference procedure $i$ is complete if and only if every sentence $\alpha$ that is logically entailed by $KB$ can be derived from $KB$ using $i$.

    • Formally: If $KB \models \alpha$, then $KB \vdash_i \alpha$.
    • Completeness ensures that the procedure can prove everything that is logically true given the knowledge base. Resolution is refutation complete, meaning it can derive the empty clause if a set of clauses is unsatisfiable.

From CS2910-22.pdf (2022 Paper):

Question 3. (a) Recall that inference procedures are used to implement entailment.
i. Given a goal and a knowledge base, formally explain what it means for an inference procedure to be sound.
Answer: This is identical to the soundness definition in Question 3(e) from the 2024 paper, provided above.

ii. Contrast soundness with completeness by explaining the potential shortcomings of an inference procedure that is sound but not complete.
Answer:

  • Soundness guarantees that an inference procedure will only produce true conclusions if its premises are true.
  • Completeness guarantees that an inference procedure can produce all logically true conclusions from its premises.
    A procedure that is sound but not complete will never produce incorrect conclusions, but it might fail to derive some conclusions that are, in fact, logically entailed by the knowledge base. It’s a reliable but potentially limited reasoner, unable to find every valid deduction.

Question 3. (b) The resolution inference rule combines two clauses to make a new one.
i. Give the propositional version of the resolution inference rule.
Answer: Given two clauses $C_1$ and $C_2$, if $C_1$ contains a literal $L$ and $C_2$ contains its negation $\neg L$, then the resolvent is formed by taking the disjunction of all remaining literals from $C_1$ and $C_2$.
Example: $(P \lor Q)$ and $(\neg P \lor R)$ resolve to $(Q \lor R)$.

ii. Give the first order logic version of the resolution inference rule.
Answer: Given two clauses $C_1$ and $C_2$, if there is a literal $L_1 \in C_1$ and a literal $L_2 \in C_2$ such that $L_1$ and $\neg L_2$ (or $\neg L_1$ and $L_2$) unify with a most general unifier (MGU) $\theta$, then the resolvent is $((C_1 \setminus \{L_1\}) \cup (C_2 \setminus \{L_2\}))\theta$. That is, the unified complementary literals are removed and the MGU is applied to the disjunction of the remaining literals.

iii. Explain the key difference(s) between these two versions of the rule.
Answer: The key difference is the use of unification in the first-order logic (FOL) version. In propositional logic, literals must be exact complements (e.g., P and $\neg$P). In FOL, literals can contain variables (e.g., P(X) and $\neg$P(john)). Unification finds a substitution for these variables (e.g., $\{X/john\}$) that makes the literals complementary, and this substitution is applied to the resulting resolvent.

Question 3. (c) Consider the logic program P below.
at_risk(x) $\leftarrow$ expensive(x), parking(x, y), unsafe(y)
expensive(x) $\leftarrow$ race_bike(x)
parking(Trek, Lamp_post)
parking(Cervelo, Garage)
unsafe(Lamp_post)
race_bike(Cervelo)
race_bike(Trek)
commute_bike(Carrera)
Determine whether forward chaining or backward chaining is best suited to identify which bicycles are at risk of being stolen (if any). Use your chosen approach on program P to determine which bicycles are at risk. Your answer must show step-by-step how your conclusions were reached (including your substitutions).
Answer:
Backward chaining is best suited here because the goal is to find specific instances (which bicycles) that satisfy a query (at_risk(X)). Forward chaining would derive all possible facts, which might be inefficient for this type of query.

Backward Chaining Steps to find at_risk(X):

  1. Query: at_risk(X)
  2. Match with rule: at_risk(x) $\leftarrow$ expensive(x), parking(x, y), unsafe(y)
    • Subgoals: expensive(x), parking(x, y), unsafe(y). (Unifying X with x, $\theta_1 = \{X/x\}$)
  3. Expand expensive(x):
    • Match with rule: expensive(x) $\leftarrow$ race_bike(x)
    • New subgoal: race_bike(x).
  4. Expand race_bike(x):
    • Try race_bike(Cervelo):
      • Unify race_bike(x) with race_bike(Cervelo). Substitution: $\theta_2 = \{x/Cervelo\}$.
      • Remaining subgoals (after applying $\theta_2$ to original subgoals): expensive(Cervelo), parking(Cervelo, y), unsafe(y).
      • Now try to satisfy parking(Cervelo, y):
        • Match with fact: parking(Cervelo, Garage). Substitution: $\theta_3 = \{y/Garage\}$.
        • Remaining subgoal: unsafe(Garage).
        • Now try to satisfy unsafe(Garage):
          • No fact unsafe(Garage) exists. Path fails.
    • Backtrack and try race_bike(Trek):
      • Unify race_bike(x) with race_bike(Trek). Substitution: $\theta_4 = \{x/Trek\}$.
      • Remaining subgoals (after applying $\theta_4$ to original subgoals): expensive(Trek), parking(Trek, y), unsafe(y).
      • Now try to satisfy parking(Trek, y):
        • Match with fact: parking(Trek, Lamp_post). Substitution: $\theta_5 = \{y/Lamp\_post\}$.
        • Remaining subgoal: unsafe(Lamp_post).
      • Now try to satisfy unsafe(Lamp_post):
        • Match with fact: unsafe(Lamp_post). Satisfied.
  5. Success! All subgoals satisfied. The final substitution for X (from $\theta_1$ and $\theta_4$) is Trek.

Conclusion: The bicycle at risk of being stolen is Trek.
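The trace above can be reproduced mechanically. A minimal backward-chaining sketch over program P: variables are written with a `?` prefix to keep them distinct from the capitalised constants, and rule variables are renamed on each use (standardising apart):

```python
import itertools

FACTS = {("parking", "Trek", "Lamp_post"), ("parking", "Cervelo", "Garage"),
         ("unsafe", "Lamp_post"), ("race_bike", "Cervelo"),
         ("race_bike", "Trek"), ("commute_bike", "Carrera")}

RULES = [(("at_risk", "?x"),
          [("expensive", "?x"), ("parking", "?x", "?y"), ("unsafe", "?y")]),
         (("expensive", "?x"), [("race_bike", "?x")])]

def walk(t, s):
    """Follow variable bindings in substitution s."""
    while t.startswith("?") and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    """Unify two atoms under s; return the extended substitution or None."""
    if a[0] != b[0] or len(a) != len(b):
        return None
    for x, y in zip(a[1:], b[1:]):
        x, y = walk(x, s), walk(y, s)
        if x == y:
            continue
        if x.startswith("?"):
            s = {**s, x: y}
        elif y.startswith("?"):
            s = {**s, y: x}
        else:
            return None
    return s

fresh = itertools.count()

def rename(atom, tag):
    return tuple(t + tag if t.startswith("?") else t for t in atom)

def solve(goals, s=None):
    """Depth-first backward chaining: yield every substitution satisfying all goals."""
    if s is None:
        s = {}
    if not goals:
        yield s
        return
    goal, rest = goals[0], goals[1:]
    for fact in FACTS:                        # try to close the goal with a fact
        s2 = unify(goal, fact, s)
        if s2 is not None:
            yield from solve(rest, s2)
    for head, body in RULES:                  # or reduce it via a rule
        tag = "_" + str(next(fresh))
        s2 = unify(goal, rename(head, tag), s)
        if s2 is not None:
            yield from solve([rename(a, tag) for a in body] + rest, s2)

at_risk = {walk("?x", s) for s in solve([("at_risk", "?x")])}   # {'Trek'}
```

The Cervelo branch fails exactly as in the trace (no `unsafe(Garage)` fact), and backtracking leaves Trek as the only answer.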


3. Topics: Inductive Learning (Decision Trees) & Planning

The revision slides specifically mention:

  • “Past paper from May 2019: Inductive learning, decision trees (See the slides Topic6).”
  • “Past paper from May 2019: Planning in AI (See the slides Topic5) Recalled/ discussed PDDL and vacuum world example.”

From sample2.pdf (May 2019 Paper):

Question 4: This question is about logical learning and the role of knowledge in it.

(a) In the context of inductive learning explain the parts of the schema below:
$$\forall x~Goal(x)\leftrightarrow C_{j}(x)$$
Briefly explain how the schema can support learning of this type.
Answer: This schema represents concept learning within inductive learning, aiming to define a target concept.

  • $Goal(x)$: The predicate representing the target concept or property to be learned (e.g., Fit(x)).
  • $C_j(x)$: The hypothesized logical formula or set of conditions that defines Goal(x). j indicates a specific hypothesis among many candidates.
  • $\forall x$: Universal quantifier, meaning “for all instances x”.
  • $\leftrightarrow$: Logical equivalence, meaning Goal(x) is true if and only if $C_j(x)$ is true.
    This schema supports learning by guiding the search for a hypothesis $C_j(x)$ that is consistent with observed positive and negative examples of Goal(x), effectively learning a definition for the concept.

(b) Define logically the decision tree shown in Fig. 2. This tree shows how an AI program should advise whether a person is fit or not.
(Figure 2 from sample2.pdf, showing decision tree for “Fit” based on “Age”, “Eats Lots of Pizza”, “Exercises”)
Answer:
Let Fit(Person) be the outcome.

  • If Age < 30:
    • If Eats Lots of Pizza is Yes: $(Age < 30) \land Eats\_Lots\_of\_Pizza(Yes) \rightarrow \neg Fit(Person)$
    • If Eats Lots of Pizza is No: $(Age < 30) \land Eats\_Lots\_of\_Pizza(No) \rightarrow Fit(Person)$
  • If Age $\ge$ 30:
    • If Exercises is Yes: $(Age \ge 30) \land Exercises(Yes) \rightarrow Fit(Person)$
    • If Exercises is No: $(Age \ge 30) \land Exercises(No) \rightarrow \neg Fit(Person)$
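The four rules translate directly into a classifier; a sketch (the exact attribute names and the age threshold are as described for Fig. 2, but the figure itself is not reproduced here):

```python
def fit(age, eats_lots_of_pizza, exercises):
    """Decision tree of Fig. 2: Age is the root test, then Pizza or Exercise."""
    if age < 30:
        return not eats_lots_of_pizza   # under 30: fit unless they eat lots of pizza
    return exercises                    # 30 or over: fit only if they exercise
```

Each root-to-leaf path of the tree corresponds to exactly one of the four logical rules above.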

From CS2910-24.pdf (May 2024 Paper):

Question 2: Logical Learning and the Role of Knowledge

(a) Using a generic logical schema, briefly explain the aim of inductive learning.
Answer: The aim of inductive learning is to infer a general rule or hypothesis (H) from observed examples (E) and possibly existing background knowledge (KB).

(b) Specialise the inductive learning schema you used in part 2(a) to define logically the decision tree shown in Fig. 3. This tree shows how an AI program should advise a user whether to go hiking.
(Figure 3 from CS2910-24.pdf, decision tree for “GoHiking” based on “Outlook”, “Temperature”, “Wind”)
Answer:
Let GoHiking be the outcome.

  • Outlook(Partly_Cloudy) $\rightarrow$ GoHiking
  • Outlook(Sunny) $\land$ Temperature(High) $\rightarrow$ $\neg$GoHiking
  • Outlook(Sunny) $\land$ Temperature(Normal) $\rightarrow$ GoHiking
  • Outlook(Cloudy) $\land$ Wind(Strong) $\rightarrow$ $\neg$GoHiking
  • Outlook(Cloudy) $\land$ Wind(Weak) $\rightarrow$ GoHiking

(c) Briefly explain the entailment constraint for knowledge learning and state how you would change it to support knowledge-based inductive learning.
Answer: The entailment constraint generally states that the learned hypothesis ($H$) and background knowledge ($KB$) must logically entail the observed examples ($E$), i.e., $KB \land H \models E$.
To support knowledge-based inductive learning, this constraint can be modified:

  • For noisy data, it might become probabilistic: $KB \land H$ should entail $E$ with high probability or within an error margin.
  • For concept learning, it’s often split: $KB \land H \models E_{positive}$ (all positive examples covered) and $KB \land H \not\models E_{negative}$ (no negative examples covered).
  • In theory revision, the learning process might modify $KB$ itself if it conflicts with new evidence.

From CS2910-24.pdf (May 2024 Paper):

Question 4: AI Planning

(a) Define the planning problem.
Answer: The planning problem in AI involves finding a sequence of actions that transforms an initial state of the world into a desired goal state. The solution is a plan, which is the sequence of actions that achieves the goal.

(b) Briefly explain what are the conditions that define the classical planning problem. Is the observability of the environment relevant for the classical planning problem? Justify your answer.
Answer: Classical planning is defined by conditions including:

  • Static: The world does not change independently of the agent’s actions.
  • Deterministic: Action outcomes are perfectly predictable.
  • Fully Observable: The agent has complete and accurate knowledge of the current state of the world.
  • Discrete: States and actions are symbolic and discrete.
  • Finite: Finite number of states and actions.
  • Instantaneous Actions: Actions have no duration.
  • Single Agent: Typically one agent.
  • Goal-directed: Aiming to achieve a specific goal state.

Observability is highly relevant for classical planning. Without full observability, the plan might fail due to unknown aspects of the environment, necessitating more complex planning techniques (e.g., contingent planning).

(c) Consider the state space of Vacuum World in Fig. 4 where a cleaning robot transitions between states by moving left (L), right (R) or sucking dirt (S). Formulate this problem as a classical planning problem with PDDL-like action schemas, stating any assumptions you make in your formulation. Exemplify your answer by showing how your formulation may generate a concrete plan between two states in the diagram of Fig. 4.
Answer:
Assumptions:

  1. Two locations: LocA (left) and LocB (right).
  2. Each location can be Dirty or Clean.
  3. The robot is at either LocA or LocB.
  4. Goal is to make both locations Clean.
  5. Environment is deterministic and fully observable (classical planning assumptions).

PDDL-like Formulation:

  • Predicates:

    • (robot-at ?loc): The robot is at ?loc.
    • (dirty ?loc): ?loc contains dirt.
    • (clean ?loc): ?loc is clean.
  • Actions:

    • MoveRight
      • Parameters: ?from - location, ?to - location
      • Preconditions: (robot-at ?from), with ?to the location immediately to the right of ?from (here ?from = LocA, ?to = LocB)
      • Effects: (del (robot-at ?from)), (add (robot-at ?to))
    • MoveLeft
      • Parameters: ?from - location, ?to - location
      • Preconditions: (robot-at ?from), with ?to the location immediately to the left of ?from (here ?from = LocB, ?to = LocA)
      • Effects: (del (robot-at ?from)), (add (robot-at ?to))
    • Suck
      • Parameters: ?loc - location
      • Preconditions: (robot-at ?loc), (dirty ?loc)
      • Effects: (del (dirty ?loc)), (add (clean ?loc))

Concrete Plan Generation Example:

  • Initial State: {(robot-at LocA), (dirty LocA), (dirty LocB)} (Top-left state in Fig. 4 of CS2910-24.pdf)
  • Goal State: {(clean LocA), (clean LocB)}

Plan Sequence:

  1. Action: Suck LocA
    • Preconditions check: Robot is at LocA and LocA is dirty. (Met)
    • Resulting State: {(robot-at LocA), (clean LocA), (dirty LocB)}
  2. Action: MoveRight LocA LocB
    • Preconditions check: Robot is at LocA. (Met)
    • Resulting State: {(robot-at LocB), (clean LocA), (dirty LocB)}
  3. Action: Suck LocB
    • Preconditions check: Robot is at LocB and LocB is dirty. (Met)
    • Resulting State: {(robot-at LocB), (clean LocA), (clean LocB)} (Goal achieved!)

This sequence, [Suck LocA, MoveRight LocA LocB, Suck LocB], is a valid plan to reach the goal.
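The plan can be validated mechanically by checking preconditions and applying add/delete effects at each step; a sketch with the actions ground out for the two-location world (the action names are illustrative):

```python
# Ground actions for the two-location world: precondition, delete and add lists
ACTIONS = {
    "Suck(LocA)": {"pre": {("robot-at", "LocA"), ("dirty", "LocA")},
                   "del": {("dirty", "LocA")}, "add": {("clean", "LocA")}},
    "Suck(LocB)": {"pre": {("robot-at", "LocB"), ("dirty", "LocB")},
                   "del": {("dirty", "LocB")}, "add": {("clean", "LocB")}},
    "MoveRight":  {"pre": {("robot-at", "LocA")},
                   "del": {("robot-at", "LocA")}, "add": {("robot-at", "LocB")}},
    "MoveLeft":   {"pre": {("robot-at", "LocB")},
                   "del": {("robot-at", "LocB")}, "add": {("robot-at", "LocA")}},
}

def apply_plan(state, plan):
    """Execute a plan: check each action's preconditions, then apply its effects."""
    state = set(state)
    for step in plan:
        act = ACTIONS[step]
        assert act["pre"] <= state, f"precondition of {step} not satisfied"
        state = (state - act["del"]) | act["add"]
    return state

init = {("robot-at", "LocA"), ("dirty", "LocA"), ("dirty", "LocB")}
goal = {("clean", "LocA"), ("clean", "LocB")}
final = apply_plan(init, ["Suck(LocA)", "MoveRight", "Suck(LocB)"])
```

Running the three-step plan from the initial state yields a final state containing both `(clean LocA)` and `(clean LocB)`, confirming it achieves the goal.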


https://blog.pandayuyu.zone/2025/05/21/CS2910-Exam-topics-with-example-questions-and-solutions/
Author: Panda
Posted on: May 21, 2025