The first thing to do is to look at the propositional subformulas of the two formulas. These are ¬c & d and d → c = ¬d | c = ¬(¬c&d). This means that, essentially, only one propositional formula and its negation occur in these formulas.
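If you want to double-check that equivalence mechanically, a tiny truth-table enumeration does it (plain Python, just for illustration):

```python
# Sanity check of the propositional equivalence  d → c  ≡  ¬d | c  ≡  ¬(¬c & d)
# by enumerating all four truth assignments for c and d.
from itertools import product

for c, d in product([False, True], repeat=2):
    implication = (not d) or c          # d → c, i.e. ¬d | c
    negated_conj = not ((not c) and d)  # ¬(¬c & d)
    assert implication == negated_conj, (c, d)

print("d → c and ¬(¬c & d) agree on all assignments")
```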
Now, try to understand what the formulas do. S2 is quite simple: it is just one outer eventually (F) operator that says: at some point we want to have ¬c&d, and one step later its negation. Simplest case: ¬c&d at the first step, d→c at the second step.
How can we satisfy that? Of course, our path at some point needs to have ¬c&d followed by d→c. After that, the path may look arbitrary. Before that, it may have d→c arbitrarily often (even never), and also ¬c&d arbitrarily often.
S1 is maybe a bit more difficult. Let's start from the outside: we have an SB formula. SB means that the formula on the left needs to hold at some point, and up to this point the formula on the right may not hold. The inner formula on the left is F(¬c&d), the formula on the right is X(¬c&d). What does that mean? Since the left formula needs to be satisfied eventually (due to SB being the strong variant), F(¬c&d) needs to be satisfied at some point. F(¬c&d) is satisfied at least at every step from the very beginning up to the first occurrence of ¬c&d. Hence, if ¬c&d ever occurs, then the left side of this SB is already satisfied at the first step. This means we only need to make sure that the right-hand side is not satisfied at the first step. (If the right-hand side is satisfied at any later step than the first one, that doesn't count: if the left side is ever satisfied, then it has already been satisfied at the first step, and that would have been the only chance for the right-hand side to break it.)

The right side says X(¬c&d). Hence, ¬c&d must simply not occur at the second step in order for the whole formula to be satisfied. Thus, S1 boils down to: ¬c&d is not satisfied at the second step, but it needs to be satisfied at least once (formally, that's ¬X(¬c&d) & F(¬c&d)). Simplest case: ¬c&d at the first step, d→c at the second step.
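Here is a small sketch for checking both properties on ultimately periodic paths. It assumes S2 is written as F((¬c&d) & X(d→c)) as described above, and it checks S1 in its reduced form ¬X(¬c&d) & F(¬c&d); the lasso encoding over a single bit is my own shorthand, not anything from the exam:

```python
# Since d→c is exactly the negation of ¬c&d, a path is encoded as a finite
# prefix `u` plus a non-empty loop `v` of booleans, where True means
# "¬c&d holds at this step".

def at(u, v, i):
    """Truth value of ¬c&d at step i of the lasso word u · v^ω."""
    return u[i] if i < len(u) else v[(i - len(u)) % len(v)]

def holds_S2(u, v):
    """Some step satisfies ¬c&d and the next step satisfies d→c.
    Checking the first len(u)+len(v) steps suffices, because after the
    prefix the pairs (step i, step i+1) repeat with period len(v)."""
    return any(at(u, v, i) and not at(u, v, i + 1)
               for i in range(len(u) + len(v)))

def holds_S1(u, v):
    """¬c&d occurs at least once, but not at the second step (index 1)."""
    return (any(u) or any(v)) and not at(u, v, 1)

# Simplest case: ¬c&d at the first step, d→c at the second step (and forever after).
simplest = ([True], [False])
assert holds_S1(*simplest) and holds_S2(*simplest)
```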
You already see that the simplest case for satisfying one formula also satisfies the other one. Even more: their simplest cases are the same. Hence, we need to think about other ways to satisfy precisely one of the formulas. Just think in these terms: How do we “break” one formula, how do we satisfy the other one, and what can we combine?
Example 1:
a) S1 is of course broken if ¬c&d is never satisfied, but that already breaks S2 as well.
b) S1 is also broken if ¬c&d is satisfied at the second step. That doesn't necessarily contradict S2: we just need d→c to hold at some point after that. The path would look like: first step arbitrary, then ¬c&d at the second (and arbitrarily many following) steps, then d→c for at least one step, then arbitrary.
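To double-check Example 1b), here is that path encoded with the helpers from the sketch above (the concrete step counts are of course just one possible choice):

```python
# Example 1b): first step arbitrary (here d→c), then a block of ¬c&d,
# then d→c for one step, then arbitrary (here d→c forever).
# Uses at/holds_S1/holds_S2 from the sketch above.
u_1b, v_1b = [False, True, True, True, False], [False]
assert holds_S2(u_1b, v_1b)      # satisfied: ¬c&d is eventually followed by d→c
assert not holds_S1(u_1b, v_1b)  # broken: ¬c&d holds at the second step
```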
Example 2:
a) S2 is of course broken if ¬c&d is never satisfied, but that already breaks S1 as well.
b) S2 is also broken if ¬c&d is satisfied at some point but d→c never holds afterwards. That doesn't necessarily contradict S1: we just need to make sure that the second step doesn't satisfy ¬c&d. The path would look like: d→c for at least two steps, then ¬c&d forever.
The example solution in the exam is derived from Example 2b). The given path just fixes “at least two steps” to “precisely two steps” and picks the assignment c=d=false for these two steps, which is one of the ways to satisfy d→c. Similarly, you could generate a solution that follows Example 1b).
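And the same kind of check for Example 2b), again reusing the helpers from the sketch above (the specific loop shape is just one way to write “¬c&d forever”):

```python
# Example 2b): d→c for at least two steps, then ¬c&d forever.
u_2b, v_2b = [False, False], [True]
assert holds_S1(u_2b, v_2b)      # satisfied: ¬c&d occurs, but not at the second step
assert not holds_S2(u_2b, v_2b)  # broken: ¬c&d is never followed by d→c

# The exam-style instance (precisely two steps with c=d=false, then ¬c&d forever)
# collapses to the same lasso in this single-bit encoding, since c=d=false
# is just one concrete way of making d→c true.
```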